Progress OpenEdge Database Administration
Progress software products are copyrighted and all rights are reserved by Progress Software Corporation. This manual is also
copyrighted and all rights are reserved. This manual may not, in whole or in part, be copied, photocopied, translated, or reduced to any
electronic medium or machine-readable form without prior consent, in writing, from Progress Software Corporation.
The information in this manual is subject to change without notice, and Progress Software Corporation assumes no responsibility for any
errors that may appear in this document.
The references in this manual to specific platforms supported are subject to change.
A (and design), Actional, Actional (and design), Affinities Server, Allegrix, Allegrix (and design), Apama, Business Empowerment,
ClientBuilder, ClientSoft, ClientSoft (and Design), Clientsoft.com, DataDirect (and design), DataDirect Connect, DataDirect
Connect64, DataDirect Connect OLE DB, DataDirect Technologies, DataDirect XQuery, DataXtend, Dynamic Routing Architecture,
EasyAsk, EdgeXtend, Empowerment Center, eXcelon, Fathom, IntelliStream, Neon, Neon New Era of Networks, O (and design),
ObjectStore, OpenEdge, PDF, PeerDirect, Persistence, Persistence (and design), POSSENET, Powered by Progress, PowerTier,
ProCare, Progress, Progress DataXtend, Progress Dynamics, Progress Business Empowerment, Progress Empowerment Center,
Progress Empowerment Program, Progress Fast Track, Progress OpenEdge, Progress Profiles, Progress Results, Progress Software
Developers Network, ProVision, PS Select, SequeLink, Shadow, ShadowDirect, Shadow Interface, Shadow Web Interface, ShadowWeb
Server, Shadow TLS, SOAPStation, Sonic ESB, SonicMQ, Sonic Orchestration Server, Sonic Software (and design), SonicSynergy,
SpeedScript, Stylus Studio, Technical Empowerment, Voice of Experience, WebSpeed, and Your Software, Our Technology
Experience the Connection are registered trademarks of Progress Software Corporation or one of its subsidiaries or affiliates in the U.S.
and/or other countries. AccelEvent, Apama Dashboard Studio, Apama Event Manager, Apama Event Modeler, Apama Event Store,
AppsAlive, AppServer, ASPen, ASP-in-a-Box, BusinessEdge, Cache-Forward, DataDirect Spy, DataDirect SupportLink, DataDirect
XML Converters, Future Proof, Ghost Agents, GVAC, Looking Glass, ObjectCache, ObjectStore Inspector, ObjectStore Performance
Expert, Pantero, POSSE, ProDataSet, Progress ESP Event Manager, Progress ESP Event Modeler, Progress Event Engine, Progress
RFID, PSE Pro, SectorAlliance, SmartBrowser, SmartComponent, SmartDataBrowser, SmartDataObjects, SmartDataView,
SmartDialog, SmartFolder, SmartFrame, SmartObjects, SmartPanel, SmartQuery, SmartViewer, SmartWindow, Sonic, Sonic Business
Integration Suite, Sonic Process Manager, Sonic Collaboration Server, Sonic Continuous Availability Architecture, Sonic Database
Service, Sonic Workbench, Sonic XML Server, The Brains Behind BAM, WebClient, and Who Makes Progress are trademarks or
service marks of Progress Software Corporation or one of its subsidiaries or affiliates in the U.S. and other countries. Vermont Views
is a registered trademark of Vermont Creative Software in the U.S. and other countries. IBM is a registered trademark of IBM
Corporation. JMX and JMX-based marks and Java and all Java-based marks are trademarks or registered trademarks of Sun
Microsystems, Inc. in the U.S. and other countries. Any other trademarks or service marks contained herein are the property of their
respective owners.
Third party acknowledgements: See the Third party acknowledgements section on page Preface–10.
February 2008
For the latest documentation updates see the OpenEdge Product Documentation category on PSDN (http://www.psdn.com/library/kbcategory.jspa?categoryID=129).
Contents
Preface . . . . . . . . . . Preface–1
Part I Database Basics
2.
    File Handles . . . . . . . . . . 2–14
    Shared memory . . . . . . . . . . 2–15
    Data types and values . . . . . . . . . . 2–16
4. Backup Strategies . . . . . . . . . . 4–1
    Identifying files for backup . . . . . . . . . . 4–2
    Determining the type of backup . . . . . . . . . . 4–3
        Full backups . . . . . . . . . . 4–4
        Incremental backups . . . . . . . . . . 4–4
        Online backups . . . . . . . . . . 4–4
        Offline backups . . . . . . . . . . 4–5
    Choosing backup media . . . . . . . . . . 4–6
    Creating a backup schedule . . . . . . . . . . 4–7
        Database integrity . . . . . . . . . . 4–7
        Database size . . . . . . . . . . 4–7
        Time . . . . . . . . . . 4–8
        Unscheduled backups . . . . . . . . . . 4–8
5. Backing Up a Database . . . . . . . . . . 5–1
    Using PROBKUP . . . . . . . . . . 5–2
        Performing an online full backup with PROBKUP . . . . . . . . . . 5–2
        Testing backups . . . . . . . . . . 5–3
        Archiving backups . . . . . . . . . . 5–4
    Performing an offline backup . . . . . . . . . . 5–6
    Performing an online backup . . . . . . . . . . 5–8
    Using database quiet points . . . . . . . . . . 5–9
    Performing an operating system backup . . . . . . . . . . 5–11
    Database backup examples . . . . . . . . . . 5–13
        Incremental backup example . . . . . . . . . . 5–13
        Full backup example . . . . . . . . . . 5–15
    Verifying a backup . . . . . . . . . . 5–18
    CRC codes and redundancy in backup recovery . . . . . . . . . . 5–19
        CRC codes . . . . . . . . . . 5–19
        Error-correction blocks . . . . . . . . . . 5–19
    Restoring a database . . . . . . . . . . 5–20
        Using the PROREST utility to restore a database . . . . . . . . . . 5–20
        Important rules for restoring backups . . . . . . . . . . 5–21
        Obtaining storage area descriptions using PROREST . . . . . . . . . . 5–22
        Database restore examples . . . . . . . . . . 5–22
6. Recovering a Database . . . . . . . . . . 6–1
    Introduction to recovery mechanisms . . . . . . . . . . 6–2
        Crash recovery . . . . . . . . . . 6–2
        Roll-forward recovery . . . . . . . . . . 6–4
        Two-phase commit . . . . . . . . . . 6–5
    File locations that ensure safe recovery . . . . . . . . . . 6–6
    Developing a recovery plan . . . . . . . . . . 6–7
        Time needed for recovery . . . . . . . . . . 6–7
        Recovery guidelines . . . . . . . . . . 6–7
    Sample recovery plans . . . . . . . . . . 6–9
        Example 1: Low availability requirements . . . . . . . . . . 6–9
        Example 2: Moderate availability requirements . . . . . . . . . . 6–9
        Example 3: Moderate-to-high availability requirements . . . . . . . . . . 6–10
        Example 4: High availability requirements . . . . . . . . . . 6–11
        Sample recovery scenarios . . . . . . . . . . 6–12
    After-imaging and roll-forward recovery commands . . . . . . . . . . 6–17
    Recovering from system failures . . . . . . . . . . 6–18
        System crash while running RFUTIL ROLL FORWARD . . . . . . . . . . 6–18
        System crash while running other utilities . . . . . . . . . . 6–18
        System crash while backing up the database . . . . . . . . . . 6–19
        System crash while database is up . . . . . . . . . . 6–19
    Recovering from media failures . . . . . . . . . . 6–20
        Loss of the DB files, BI files, or both . . . . . . . . . . 6–20
        Loss of the AI file . . . . . . . . . . 6–21
        Loss of database backup . . . . . . . . . . 6–21
        Loss of transaction log file . . . . . . . . . . 6–22
    Recovering from a full disk . . . . . . . . . . 6–23
        After-image area disk . . . . . . . . . . 6–23
        Control or primary recovery area disk . . . . . . . . . . 6–23
        Transaction log file disk . . . . . . . . . . 6–24
    Truncating the BI file . . . . . . . . . . 6–25
    Releasing shared memory . . . . . . . . . . 6–26
    Recovering from a lost or damaged control area . . . . . . . . . . 6–27
    Unlocking damaged databases . . . . . . . . . . 6–28
    Dumping tables from a damaged database . . . . . . . . . . 6–29
    Forcing access to a damaged database . . . . . . . . . . 6–30
7. After-imaging . . . . . . . . . . 7–1
    After-image areas and extents . . . . . . . . . . 7–2
    Estimating after-imaging disk space requirements . . . . . . . . . . 7–4
    Creating after-image areas . . . . . . . . . . 7–5
    Enabling after-imaging offline . . . . . . . . . . 7–7
    Enabling after-imaging online . . . . . . . . . . 7–8
    Managing after-imaging files . . . . . . . . . . 7–9
        Monitoring AI file status . . . . . . . . . . 7–9
        Switching to a new AI file . . . . . . . . . . 7–11
        Archiving an AI file . . . . . . . . . . 7–13
        Making an AI file available for reuse . . . . . . . . . . 7–15
    AI File Management utility . . . . . . . . . . 7–16
        Automatic extent archiving . . . . . . . . . . 7–16
        Enabling your database for automated AI file management . . . . . . . . . . 7–19
        Monitoring and adjusting automated AI File Management . . . . . . . . . . 7–20
        Archived extents . . . . . . . . . . 7–21
        Archive log file . . . . . . . . . . 7–22
    Add and reorder AI extents . . . . . . . . . . 7–25
    Performing roll-forward recovery . . . . . . . . . . 7–27
    After-image sequences . . . . . . . . . . 7–28
        Sequence not required . . . . . . . . . . 7–29
        Sequence required . . . . . . . . . . 7–30
    Disabling after-imaging . . . . . . . . . . 7–31
8. Maintaining Security . . . . . . . . . . 8–1
    Establishing an OpenEdge user ID and password . . . . . . . . . . 8–2
        OpenEdge user ID . . . . . . . . . . 8–2
        OpenEdge password . . . . . . . . . . 8–2
        Validating an OpenEdge user ID and password . . . . . . . . . . 8–3
    Establishing authentication for your OpenEdge database . . . . . . . . . . 8–4
        ABL tables only . . . . . . . . . . 8–4
        SQL tables only . . . . . . . . . . 8–4
        Both ABL and SQL tables . . . . . . . . . . 8–4
    Connection security . . . . . . . . . . 8–5
        Designating valid users . . . . . . . . . . 8–5
        Designating a security administrator . . . . . . . . . . 8–6
        Deleting a user . . . . . . . . . . 8–7
        Changing a password . . . . . . . . . . 8–8
    Running a user report . . . . . . . . . . 8–9
    Schema security . . . . . . . . . . 8–10
    Operating systems and database security . . . . . . . . . . 8–11
9. Auditing . . . . . . . . . . 9–1
    Auditable events . . . . . . . . . . 9–2
        Audit events . . . . . . . . . . 9–2
        Security events . . . . . . . . . . 9–3
        Schema events . . . . . . . . . . 9–3
        Data events . . . . . . . . . . 9–3
        Administration events . . . . . . . . . . 9–4
        Utility events . . . . . . . . . . 9–4
        User events . . . . . . . . . . 9–4
        Application events . . . . . . . . . . 9–5
    Auditing states . . . . . . . . . . 9–6
    Enabling and disabling auditing . . . . . . . . . . 9–7
        Enabling auditing . . . . . . . . . . 9–7
        Disabling auditing . . . . . . . . . . 9–8
    Auditing tables . . . . . . . . . . 9–10
        Indexes on auditing tables . . . . . . . . . . 9–10
    Archiving audit data . . . . . . . . . . 9–13
        Audit archive process . . . . . . . . . . 9–13
    Auditing impact on database resources . . . . . . . . . . 9–15
    Auditing impact on database utilities . . . . . . . . . . 9–16
        Identifying the privileged user . . . . . . . . . . 9–16
        Utility modifications . . . . . . . . . . 9–19
10. Replicating Data . . . . . . . . . . 10–1
    Replication schemes . . . . . . . . . . 10–2
        Trigger-based replication . . . . . . . . . . 10–2
        Log-based site replication . . . . . . . . . . 10–2
        Replication models . . . . . . . . . . 10–2
    Database ownership models . . . . . . . . . . 10–3
        Distribution model . . . . . . . . . . 10–3
        Consolidation model . . . . . . . . . . 10–3
    Implementing log-based site replication . . . . . . . . . . 10–6
        Log-based replication procedure . . . . . . . . . . 10–6
11. Failover Clusters . . . . . . . . . . 11–1
    Overview . . . . . . . . . . 11–2
        Related software and hardware . . . . . . . . . . 11–2
        Installation . . . . . . . . . . 11–3
        Configuration . . . . . . . . . . 11–4
        Security . . . . . . . . . . 11–5
        Performance . . . . . . . . . . 11–5
        Logging . . . . . . . . . . 11–5
        Required cluster components . . . . . . . . . . 11–6
        Network considerations . . . . . . . . . . 11–8
    Terms and concepts . . . . . . . . . . 11–9
        Resources and dependencies . . . . . . . . . . 11–9
        Failure and recovery action . . . . . . . . . . 11–9
        Fail over policies . . . . . . . . . . 11–9
    Using the PROCLUSTER command-line interface . . . . . . . . . . 11–11
        Cluster-enabling a database . . . . . . . . . . 11–11
        Disabling a cluster-enabled database . . . . . . . . . . 11–12
        Starting a cluster-enabled database . . . . . . . . . . 11–13
        Stopping a cluster-enabled database . . . . . . . . . . 11–13
        Terminating a cluster-enabled database . . . . . . . . . . 11–14
        Isalive and looksalive . . . . . . . . . . 11–14
    Results of enabling an OpenEdge database for fail over . . . . . . . . . . 11–15
        Database UUID file (HPUX 32 and 64 bit only) . . . . . . . . . . 11–15
        Changing the structure of the database . . . . . . . . . . 11–15
        Adding extents on a volume group or file system different from the database (AIX only) . . . . . . . . . . 11–15
    Platform-specific considerations . . . . . . . . . . 11–17
        Adding nodes where the database can be run for AIX . . . . . . . . . . 11–17
        Upper limit on the number of packages for HPUX 32 bit and 64 bit . . . . . . . . . . 11–17
        Directory of registered packages for HPUX 32 bit and 64 bit . . . . . . . . . . 11–18
    Using a cluster-enabled database with the AdminServer . . . . . . . . . . 11–19
    Using a cluster-enabled database with standard commands . . . . . . . . . . 11–21
    Using the Windows Cluster Administrator . . . . . . . . . . 11–22
    Emergency disabling of a cluster-enabled database . . . . . . . . . . 11–24
    UNIX cluster management commands . . . . . . . . . . 11–25
12.
    Limbo transactions with two-phase commit . . . . . . . . . . 12–9
    Resolving limbo transactions . . . . . . . . . . 12–10
    Resolving limbo transaction scenarios . . . . . . . . . . 12–14
    Two-phase commit case study . . . . . . . . . . 12–15
    Java Transaction API (JTA) support . . . . . . . . . . 12–18
        JTA resource impact . . . . . . . . . . 12–18
        JTA processing impact . . . . . . . . . . 12–19
        Enabling JTA support . . . . . . . . . . 12–19
        Disabling JTA support . . . . . . . . . . 12–19
        Monitoring JTA transactions . . . . . . . . . . 12–20
        Resolving JTA transactions . . . . . . . . . . 12–20
13. Managing Performance . . . . . . . . . . 13–1
    Introduction to performance management . . . . . . . . . . 13–2
    Tools for monitoring performance . . . . . . . . . . 13–3
        PROMON utility . . . . . . . . . . 13–3
        Virtual system tables . . . . . . . . . . 13–3
        Windows Performance tool . . . . . . . . . . 13–3
    Server performance factors . . . . . . . . . . 13–4
        CPU usage . . . . . . . . . . 13–4
        Disk I/O . . . . . . . . . . 13–5
        Database I/O . . . . . . . . . . 13–5
        Before-image I/O . . . . . . . . . . 13–15
        After-image I/O . . . . . . . . . . 13–23
        Direct I/O . . . . . . . . . . 13–25
    Memory usage . . . . . . . . . . 13–26
        Monitoring memory use . . . . . . . . . . 13–26
        Controlling memory use . . . . . . . . . . 13–26
        Shared memory allocation . . . . . . . . . . 13–27
    Operating system resources . . . . . . . . . . 13–28
        Processes . . . . . . . . . . 13–28
        Semaphores . . . . . . . . . . 13–28
        Spin locks . . . . . . . . . . 13–31
        File descriptors . . . . . . . . . . 13–31
    Database fragmentation . . . . . . . . . . 13–32
        Analyzing database fragmentation . . . . . . . . . . 13–32
        Eliminating database fragmentation . . . . . . . . . . 13–33
        Managing fragmentation . . . . . . . . . . 13–33
    Index use . . . . . . . . . . 13–36
        Analyzing index use . . . . . . . . . . 13–36
        Compacting indexes . . . . . . . . . . 13–37
        Rebuilding indexes . . . . . . . . . . 13–38
        Activating a single index . . . . . . . . . . 13–43
    Virtual system tables . . . . . . . . . . 13–45
14.
    OpenEdge Structure Add Online utility . . . . . . . . . . 14–11
    Area numbers . . . . . . . . . . 14–15
        Area number assignment . . . . . . . . . . 14–15
        Trimming unused area memory . . . . . . . . . . 14–19
        Error checking . . . . . . . . . . 14–21
    Validating structure description files . . . . . . . . . . 14–22
    OpenEdge Structure Remove utility . . . . . . . . . . 14–24
    Maintaining indexes and tables . . . . . . . . . . 14–25
        Moving tables . . . . . . . . . . 14–25
        Moving indexes . . . . . . . . . . 14–26
        Compacting indexes . . . . . . . . . . 14–27
    Using virtual system tables . . . . . . . . . . 14–28
16. Logged Data . . . . . . . . . . 16–1
    OpenEdge Release 10 database log file . . . . . . . . . . 16–2
    Managing log file size . . . . . . . . . . 16–4
    Saving key database events . . . . . . . . . . 16–5
        Defining key events . . . . . . . . . . 16–5
        Saving key events . . . . . . . . . . 16–5
        Enabling save key events . . . . . . . . . . 16–6
        Disabling save key events . . . . . . . . . . 16–7
        Stored key events . . . . . . . . . . 16–7
        _KeyEvt table . . . . . . . . . . 16–11
    Event logging in Windows . . . . . . . . . . 16–13
        Managing OpenEdge RDBMS events in Windows . . . . . . . . . . 16–13
        Understanding the event log components . . . . . . . . . . 16–14
        The Event Log and the registry . . . . . . . . . . 16–15
    Client database-request statement caching . . . . . . . . . . 16–16
    Performing chain analysis online . . . . . . . . . . 16–18
    Roll forward mechanism . . . . . . . . . . 16–19
        Important considerations when using the OPLOCK qualifier . . . . . . . . . . 16–19
Part IV Reference
18.
Case Table (-cpcase) . . . . . . . . . . 18-20
Collation Table (-cpcoll) . . . . . . . . . . 18-20
Internal Code Page (-cpinternal) . . . . . . . . . . 18-21
Log File Code Page (-cplog) . . . . . . . . . . 18-21
Print Code Page (-cpprint) . . . . . . . . . . 18-22
R-code in Code Page (-cprcodein) . . . . . . . . . . 18-22
Stream Code Page (-cpstream) . . . . . . . . . . 18-23
Terminal Code Page (-cpterm) . . . . . . . . . . 18-24
Direct I/O (-directio) . . . . . . . . . . 18-24
Event Level (-evtlevel) . . . . . . . . . . 18-25
Before-image Cluster Age (-G) . . . . . . . . . . 18-25
Group Delay (-groupdelay) . . . . . . . . . . 18-26
Host Name (-H) . . . . . . . . . . 18-26
Hash Table Entries (-hash) . . . . . . . . . . 18-27
No Crash Protection (-i) . . . . . . . . . . 18-27
Increase Startup Parameters Online (-increaseto) . . . . . . . . . . 18-28
Index Range Size (-indexrangesize) . . . . . . . . . . 18-28
Internet Protocol (-ipver) . . . . . . . . . . 18-29
Key Alias (-keyalias) . . . . . . . . . . 18-30
Key Alias Password (-keyaliaspasswd) . . . . . . . . . . 18-31
Lock Table Entries (-L) . . . . . . . . . . 18-31
Lock release (-lkrela) . . . . . . . . . . 18-32
Auto Server (-m1) . . . . . . . . . . 18-33
Manual Server (-m2) . . . . . . . . . . 18-33
Secondary Login Broker (-m3) . . . . . . . . . . 18-33
Maximum Clients per Server (-Ma) . . . . . . . . . . 18-34
Maximum area number (-maxAreas) . . . . . . . . . . 18-34
Maximum Dynamic Server (-maxport) . . . . . . . . . . 18-35
Maximum JTA Transactions (-maxxids) . . . . . . . . . . 18-35
Delayed BI File Write (-Mf) . . . . . . . . . . 18-36
Minimum Clients per Server (-Mi) . . . . . . . . . . 18-37
Minimum Dynamic Server (-minport) . . . . . . . . . . 18-37
Maximum Servers (-Mn) . . . . . . . . . . 18-38
Servers per Protocol (-Mp) . . . . . . . . . . 18-38
Maximum Servers per Broker (-Mpb) . . . . . . . . . . 18-39
VLM Page Table Entry Optimization (-Mpte) . . . . . . . . . . 18-39
Shared-memory Overflow Size (-Mxs) . . . . . . . . . . 18-39
Network Type (-N) . . . . . . . . . . 18-40
Number of Users (-n) . . . . . . . . . . 18-41
No Session Cache (-nosessioncache) . . . . . . . . . . 18-41
Pending Connection Time (-PendConnTime) . . . . . . . . . . 18-42
Parameter File (-pf) . . . . . . . . . . 18-43
Pin Shared Memory (-pinshm) . . . . . . . . . . 18-43
Configuration Properties File (-properties) . . . . . . . . . . 18-44
Buffered I/O (-r) . . . . . . . . . . 18-44
Service Name (-S) . . . . . . . . . . 18-45
Semaphore Sets (-semsets) . . . . . . . . . . 18-46
Server Group (-servergroup) . . . . . . . . . . 18-46
Session Timeout (-sessiontimeout) . . . . . . . . . . 18-47
Shared memory segment size (-shmsegsize) . . . . . . . . . . 18-47
Spin Lock Retries (-spin) . . . . . . . . . . 18-48
SSL (-ssl) . . . . . . . . . . 18-49
Table Range Size (-tablerangesize) . . . . . . . . . . 18-50
Century Year Offset (-yy) . . . . . . . . . . 18-50
19.
PROMON Utility . . . . . . . . . . 19-1
PROMON utility . . . . . . . . . . 19-2
PROMON User Control option . . . . . . . . . . 19-4
PROMON Locking and Waiting statistics option . . . . . . . . . . 19-9
PROMON Block Access option . . . . . . . . . . 19-11
PROMON Record Locking Table option . . . . . . . . . . 19-13
PROMON Activity option . . . . . . . . . . 19-16
PROMON Shared Resources option . . . . . . . . . . 19-20
PROMON Database Status option . . . . . . . . . . 19-22
PROMON Shut Down Database option . . . . . . . . . . 19-24
PROMON R&D Advanced Options . . . . . . . . . . 19-25
R&D Status Displays . . . . . . . . . . 19-26
    Database . . . . . . . . . . 19-27
    Backup . . . . . . . . . . 19-29
    Servers . . . . . . . . . . 19-29
    Processes/Clients . . . . . . . . . . 19-30
    Files . . . . . . . . . . 19-34
    Lock Table . . . . . . . . . . 19-35
    Buffer Cache . . . . . . . . . . 19-36
    Logging Summary . . . . . . . . . . 19-37
    BI Log . . . . . . . . . . 19-38
    AI Log . . . . . . . . . . 19-39
    Two-phase Commit . . . . . . . . . . 19-40
    Startup Parameters . . . . . . . . . . 19-40
    Shared Resources . . . . . . . . . . 19-41
    Shared Memory Segments . . . . . . . . . . 19-42
    AI Extents . . . . . . . . . . 19-43
    Database Service Manager . . . . . . . . . . 19-44
    Servers by broker . . . . . . . . . . 19-45
    Client Database-Request Statement Caching . . . . . . . . . . 19-46
R&D Activity Displays . . . . . . . . . . 19-54
    Summary . . . . . . . . . . 19-55
    Servers . . . . . . . . . . 19-58
    Buffer Cache . . . . . . . . . . 19-59
    Page Writers . . . . . . . . . . 19-60
    BI Log . . . . . . . . . . 19-61
    AI Log . . . . . . . . . . 19-63
    Lock Table . . . . . . . . . . 19-64
    I/O Operations By Type . . . . . . . . . . 19-66
    I/O Operations by File . . . . . . . . . . 19-67
    Space Allocation . . . . . . . . . . 19-68
    Index . . . . . . . . . . 19-69
    Record . . . . . . . . . . 19-69
    Other . . . . . . . . . . 19-70
R&D Other Displays . . . . . . . . . . 19-71
    Performance Indicators . . . . . . . . . . 19-71
    I/O Operations by Process . . . . . . . . . . 19-73
    Lock Requests by User . . . . . . . . . . 19-74
    Checkpoints . . . . . . . . . . 19-75
    I/O Operations by User by Table . . . . . . . . . . 19-76
    I/O Operations by User by Index . . . . . . . . . . 19-77
R&D Administrative Functions . . . . . . . . . . 19-79
    Check Active Transaction Status . . . . . . . . . . 19-79
    Check Two-Phase Transactions . . . . . . . . . . 19-80
    Resolve Limbo Transactions . . . . . . . . . . 19-80
    Adjust Latch Options . . . . . . . . . . 19-81
    Adjust Page Writer Options . . . . . . . . . . 19-81
    Restricted Options . . . . . . . . . . 19-82
    Terminate a Server . . . . . . . . . . 19-82
    Enable/disable block level consistency check . . . . . . . . . . 19-82
R&D Adjust Monitor Options . . . . . . . . . . 19-85
PROMON 2PC Transactions Control option . . . . . . . . . . 19-86
PROMON Resolve 2PC Limbo Transactions option . . . . . . . . . . 19-88
PROMON 2PC Coordinator Information option . . . . . . . . . . 19-89
PROMON Resolve JTA Transactions option . . . . . . . . . . 19-90
PROMON Modify Defaults option . . . . . . . . . . 19-91
20.
PROUTIL Utility . . . . . . . . . . 20-1
PROUTIL utility syntax . . . . . . . . . . 20-2
PROUTIL 2PHASE BEGIN qualifier . . . . . . . . . . 20-6
PROUTIL 2PHASE COMMIT qualifier . . . . . . . . . . 20-7
PROUTIL 2PHASE END qualifier . . . . . . . . . . 20-8
PROUTIL 2PHASE MODIFY qualifier . . . . . . . . . . 20-9
PROUTIL 2PHASE RECOVER qualifier . . . . . . . . . . 20-10
PROUTIL AUDITARCHIVE qualifier . . . . . . . . . . 20-11
PROUTIL AUDITLOAD qualifier . . . . . . . . . . 20-14
PROUTIL BIGROW qualifier . . . . . . . . . . 20-16
PROUTIL BULKLOAD qualifier . . . . . . . . . . 20-17
PROUTIL BUSY qualifier . . . . . . . . . . 20-18
PROUTIL CHANALYS qualifier . . . . . . . . . . 20-19
PROUTIL CODEPAGE-COMPILER qualifier . . . . . . . . . . 20-20
PROUTIL CONV910 qualifier . . . . . . . . . . 20-21
PROUTIL CONVCHAR qualifier . . . . . . . . . . 20-22
PROUTIL CONVFILE qualifier . . . . . . . . . . 20-25
PROUTIL DBANALYS qualifier . . . . . . . . . . 20-27
PROUTIL DBAUTHKEY qualifier . . . . . . . . . . 20-28
PROUTIL DESCRIBE qualifier . . . . . . . . . . 20-29
PROUTIL DBIPCS qualifier . . . . . . . . . . 20-33
PROUTIL DISABLEAUDITING qualifier . . . . . . . . . . 20-34
PROUTIL DISABLEJTA qualifier . . . . . . . . . . 20-35
PROUTIL DISABLEKEYEVENTS qualifier . . . . . . . . . . 20-36
PROUTIL DISPTOSSCREATELIMITS qualifier . . . . . . . . . . 20-37
PROUTIL DUMP qualifier . . . . . . . . . . 20-38
PROUTIL DUMPSPECIFIED qualifier . . . . . . . . . . 20-41
PROUTIL ENABLEAUDITING qualifier . . . . . . . . . . 20-43
PROUTIL ENABLEJTA qualifier . . . . . . . . . . 20-45
PROUTIL ENABLEKEYEVENTS qualifier . . . . . . . . . . 20-46
PROUTIL ENABLELARGEFILES qualifier . . . . . . . . . . 20-47
PROUTIL ENABLELARGEKEYS qualifier . . . . . . . . . . 20-48
PROUTIL ENABLESEQ64 qualifier . . . . . . . . . . 20-49
PROUTIL ENABLESTOREDPROC qualifier . . . . . . . . . . 20-50
PROUTIL HOLDER qualifier . . . . . . . . . . 20-51
PROUTIL IDXACTIVATE qualifier . . . . . . . . . . 20-53
PROUTIL IDXANALYS qualifier . . . . . . . . . . 20-54
PROUTIL IDXBUILD qualifier . . . . . . . . . . 20-55
PROUTIL IDXCHECK qualifier . . . . . . . . . . 20-59
    Using the IDXCHECK qualifier online . . . . . . . . . . 20-61
PROUTIL IDXCOMPACT qualifier . . . . . . . . . . 20-62
PROUTIL IDXFIX qualifier . . . . . . . . . . 20-64
PROUTIL IDXMOVE qualifier . . . . . . . . . . 20-68
PROUTIL INCREASETO qualifier . . . . . . . . . . 20-70
PROUTIL IOSTATS qualifier . . . . . . . . . . 20-71
PROUTIL LOAD qualifier . . . . . . . . . . 20-73
PROUTIL MVSCH qualifier . . . . . . . . . . 20-75
PROUTIL RCODEKEY qualifier . . . . . . . . . . 20-76
PROUTIL REVERT qualifier . . . . . . . . . . 20-77
PROUTIL SETAREACREATELIMIT qualifier . . . . . . . . . . 20-80
PROUTIL SETAREATOSSLIMIT qualifier . . . . . . . . . . 20-81
PROUTIL SETBLOBCREATELIMIT qualifier . . . . . . . . . . 20-82
PROUTIL SETBLOBTOSSLIMIT qualifier . . . . . . . . . . 20-83
PROUTIL SETTABLECREATELIMIT qualifier . . . . . . . . . . 20-84
PROUTIL SETTABLETOSSLIMIT qualifier . . . . . . . . . . 20-85
PROUTIL TABANALYS qualifier . . . . . . . . . . 20-86
PROUTIL TABLEMOVE qualifier . . . . . . . . . . 20-88
PROUTIL TRUNCATE AREA qualifier . . . . . . . . . . 20-90
PROUTIL TRUNCATE BI qualifier . . . . . . . . . . 20-92
PROUTIL UPDATESCHEMA qualifier . . . . . . . . . . 20-94
PROUTIL UPDATEVST qualifier . . . . . . . . . . 20-95
PROUTIL WBREAK-COMPILER qualifier . . . . . . . . . . 20-96
21.
PROSTRCT Utility . . . . . . . . . . 21-1
PROSTRCT utility . . . . . . . . . . 21-2
PROSTRCT ADD qualifier . . . . . . . . . . 21-4
PROSTRCT ADDONLINE qualifier . . . . . . . . . . 21-5
PROSTRCT BUILDDB qualifier . . . . . . . . . . 21-6
PROSTRCT CLUSTER CLEAR qualifier . . . . . . . . . . 21-7
PROSTRCT CREATE qualifier . . . . . . . . . . 21-8
PROSTRCT LIST qualifier . . . . . . . . . . 21-10
PROSTRCT REMOVE qualifier . . . . . . . . . . 21-11
PROSTRCT REORDER AI qualifier . . . . . . . . . . 21-12
PROSTRCT REPAIR qualifier . . . . . . . . . . 21-13
PROSTRCT STATISTICS qualifier . . . . . . . . . . 21-14
PROSTRCT UNLOCK qualifier . . . . . . . . . . 21-15
22.
RFUTIL Utility . . . . . . . . . . 22-1
RFUTIL utility syntax . . . . . . . . . . 22-2
RFUTIL AIARCHIVER DISABLE qualifier . . . . . . . . . . 22-4
RFUTIL AIARCHIVER ENABLE qualifier . . . . . . . . . . 22-5
RFUTIL AIARCHIVER END qualifier . . . . . . . . . . 22-6
RFUTIL AIARCHIVER SETDIR qualifier . . . . . . . . . . 22-7
RFUTIL AIARCHIVER SETINTERVAL qualifier . . . . . . . . . . 22-8
RFUTIL AIARCHIVE EXTENT qualifier . . . . . . . . . . 22-9
RFUTIL AIMAGE AIOFF qualifier . . . . . . . . . . 22-10
RFUTIL AIMAGE BEGIN qualifier . . . . . . . . . . 22-11
RFUTIL AIMAGE END qualifier . . . . . . . . . . 22-12
RFUTIL AIMAGE EXTENT EMPTY qualifier . . . . . . . . . . 22-13
RFUTIL AIMAGE EXTENT FULL qualifier . . . . . . . . . . 22-14
RFUTIL AIMAGE EXTENT LIST qualifier . . . . . . . . . . 22-15
RFUTIL AIMAGE EXTRACT qualifier . . . . . . . . . . 22-16
RFUTIL AIMAGE NEW qualifier . . . . . . . . . . 22-17
RFUTIL AIMAGE QUERY qualifier . . . . . . . . . . 22-18
RFUTIL AIMAGE SCAN qualifier . . . . . . . . . . 22-20
RFUTIL AIMAGE TRUNCATE qualifier . . . . . . . . . . 22-21
RFUTIL AIVERIFY PARTIAL qualifier . . . . . . . . . . 22-22
RFUTIL AIVERIFY FULL qualifier . . . . . . . . . . 22-23
RFUTIL MARK BACKEDUP qualifier . . . . . . . . . . 22-24
RFUTIL ROLL FORWARD qualifier . . . . . . . . . . 22-25
RFUTIL ROLL FORWARD OPLOCK qualifier . . . . . . . . . . 22-27
RFUTIL ROLL FORWARD RETRY qualifier . . . . . . . . . . 22-28
RFUTIL SEQUENCE qualifier . . . . . . . . . . 22-30
23.
24.
SQL Utilities . . . . . . . . . . 24-1
SQLDUMP utility . . . . . . . . . . 24-2
SQLLOAD utility . . . . . . . . . . 24-6
SQLSCHEMA utility . . . . . . . . . . 24-10
25.
Index . . . . . . . . . . Index-1
Figures
Figure 1-1:
Figure 6-1:
Figure 6-2:
Figure 6-3:
Figure 6-4:
Figure 6-5:
Figure 6-6:
Figure 6-7:
Figure 7-1:
Figure 7-2:
Figure 7-3:
Figure 7-4:
Figure 7-5:
Figure 9-1:
Figure 10-1:
Figure 10-2:
Figure 10-3:
Figure 10-4:
Figure 11-1:
Figure 11-2:
Figure 11-3:
Figure 12-1:
Figure 12-2:
Figure 12-3:
Figure 12-4:
Figure 13-1:
Figure 13-2:
Figure 13-3:
Figure 13-4:
Figure 13-5:
Figure 13-6:
Figure 13-7:
Figure 13-8:
Figure 13-9:
Figure 13-10:
Figure 13-11:
Figure 13-12:
Figure 14-1:
Figure 14-2:
Figure 14-3:
Figure 15-1:
Figure 15-2:
Figure 15-3:
Figure 15-4:
Figure 15-5:
Figure 17-1:
Figure 19-1:
Figure 19-2:
Figure 19-3:
Figure 19-4:
Figure 19-5:
Figure 19-6:
Figure 19-7:
Figure 19-8:
Figure 19-9: . . . . . . . . . . 19-24
Figure 19-10: . . . . . . . . . . 19-27
Figure 19-11: . . . . . . . . . . 19-29
Figure 19-12: . . . . . . . . . . 19-29
Figure 19-13: . . . . . . . . . . 19-32
Figure 19-14: . . . . . . . . . . 19-32
Figure 19-15: . . . . . . . . . . 19-33
Figure 19-16: . . . . . . . . . . 19-33
Figure 19-17: . . . . . . . . . . 19-33
Figure 19-18: . . . . . . . . . . 19-34
Figure 19-19: . . . . . . . . . . 19-34
Figure 19-20: . . . . . . . . . . 19-34
Figure 19-21: . . . . . . . . . . 19-35
Figure 19-22: . . . . . . . . . . 19-36
Figure 19-23: . . . . . . . . . . 19-37
Figure 19-24: . . . . . . . . . . 19-38
Figure 19-25: . . . . . . . . . . 19-39
Figure 19-26: . . . . . . . . . . 19-40
Figure 19-27: . . . . . . . . . . 19-40
Figure 19-28: . . . . . . . . . . 19-41
Figure 19-29: . . . . . . . . . . 19-42
Figure 19-30: . . . . . . . . . . 19-43
Figure 19-31: . . . . . . . . . . 19-55
Figure 19-32: . . . . . . . . . . 19-58
Figure 19-33: . . . . . . . . . . 19-59
Figure 19-34: . . . . . . . . . . 19-60
Figure 19-35: . . . . . . . . . . 19-61
Figure 19-36: . . . . . . . . . . 19-63
Figure 19-37: . . . . . . . . . . 19-64
Figure 19-38: . . . . . . . . . . 19-66
Figure 19-39: . . . . . . . . . . 19-67
Figure 19-40: . . . . . . . . . . 19-68
Figure 19-41: . . . . . . . . . . 19-69
Figure 19-42: . . . . . . . . . . 19-69
Figure 19-43: . . . . . . . . . . 19-70
Figure 19-44: . . . . . . . . . . 19-71
Figure 19-45: . . . . . . . . . . 19-73
Figure 19-46: . . . . . . . . . . 19-74
Figure 19-47: . . . . . . . . . . 19-75
Figure 19-48: . . . . . . . . . . 19-76
Figure 19-49: . . . . . . . . . . 19-77
Figure 19-50: . . . . . . . . . . 19-79
Figure 19-51: . . . . . . . . . . 19-80
Figure 19-52: . . . . . . . . . . 19-81
Figure 19-53: . . . . . . . . . . 19-81
Figure 19-54: . . . . . . . . . . 19-82
Figure 19-55: . . . . . . . . . . 19-83
Figure 19-56: . . . . . . . . . . 19-85
Figure 19-57: . . . . . . . . . . 19-86
Figure 19-58: . . . . . . . . . . 19-88
Figure 19-59: . . . . . . . . . . 19-89
Figure 19-60: . . . . . . . . . . 19-90
Figure 19-61: . . . . . . . . . . 19-91
Figure 20-1: . . . . . . . . . . 20-29
Figure 23-1: . . . . . . . . . . 23-6
Tables
Table 1-1: ST file tokens . . . . . . . . . . 1-5
Table 1-2: Calculating extent size . . . . . . . . . . 1-8
Table 2-1: Storage area types . . . . . . . . . . 2-4
Table 2-2: Maximum rows per data storage area (approximate) . . . . . . . . . . 2-5
Table 2-3: Maximum number of sequences . . . . . . . . . . 2-8
Table 2-4: Maximum primary recovery (BI) area size . . . . . . . . . . 2-9
Table 2-5: Maximum number of connections per database . . . . . . . . . . 2-11
Table 2-6: Maximum number of simultaneous transactions . . . . . . . . . . 2-12
Table 2-7: Database name limits . . . . . . . . . . 2-13
Table 2-8: SQL data type limits . . . . . . . . . . 2-16
Table 2-9: ABL data type limits . . . . . . . . . . 2-17
Table 2-10: ABL and SQL data type correspondence . . . . . . . . . . 2-18
Table 3-1: PROSHUT menu options . . . . . . . . . . 3-13
Table 3-2: PROSHUT Parameters . . . . . . . . . . 3-14
Table 4-1: Backup options . . . . . . . . . . 4-3
Table 4-2: Backup media questions . . . . . . . . . . 4-6
Table 5-1: PROREST utility verification parameters . . . . . . . . . . 5-4
Table 6-1: Sample low availability requirements . . . . . . . . . . 6-9
Table 6-2: Sample moderate availability requirements . . . . . . . . . . 6-9
Table 6-3: Sample moderate-to-high availability requirements . . . . . . . . . . 6-10
Table 6-4: Sample high availability requirements . . . . . . . . . . 6-11
Table 6-5: Utilities used with roll-forward recovery . . . . . . . . . . 6-17
Table 6-6: Shared-memory segment status fields . . . . . . . . . . 6-26
Table 7-1: File tokens . . . . . . . . . . 7-5
Table 7-2: Format of an archival log startup line . . . . . . . . . . 7-22
Table 7-3: Format of an AI extent archive line in the archive log file . . . . . . . . . . 7-23
Table 7-4: Format of a backup line in the archive log file . . . . . . . . . . 7-23
Table 9-1: Auditing meta-schema tables . . . . . . . . . . 9-10
Table 9-2: Indexes for auditing schema tables . . . . . . . . . . 9-11
Table 9-3: Encrypted password components . . . . . . . . . . 9-19
Table 9-4: Protected tables . . . . . . . . . . 9-19
Table 9-5: Protected utilities . . . . . . . . . . 9-20
Table 11-1: UNIX cluster management commands . . . . . . . . . . 11-25
Table 11-2: UNIX Shared disk commands . . . . . . . . . . 11-25
Table 13-1: Startup parameters for I/O by object . . . . . . . . . . 13-15
Table 13-2: Startup parameters that affect memory allocation . . . . . . . . . . 13-26
Table 13-3: UNIX kernel parameters that affect semaphores . . . . . . . . . . 13-30
Table 13-4: PROUTIL qualifiers for changing create and toss limits . . . . . . . . . . 13-34
Table 13-5: Create and toss limit situations and solutions . . . . . . . . . . 13-34
Table 13-6: Factor values . . . . . . . . . . 13-37
Table 15-1: Definitions dumped to the definition file . . . . . . . . . . 15-3
Table 15-2: Data definitions file trailer values . . . . . . . . . . 15-6
Table 15-3: DUMPSPECIFIED syntax . . . . . . . . . . 15-11
Table 15-4: Contents file trailer variables . . . . . . . . . . 15-15
Table 15-5: Modifying the description file . . . . . . . . . . 15-26
Table 16-1: Database event log format . . . . . . . . . . 16-2
Table 16-2: Key events and stored records . . . . . . . . . . 16-7
Table 16-3: _KeyEvt table schema . . . . . . . . . . 16-12
Table 16-4: Event logging components . . . . . . . . . . 16-13
Table 16-5: Event level values . . . . . . . . . . 16-13
Table 16-6: Event Log components . . . . . . . . . . 16-14
Table 17-1: Command components . . . . . . . . . . 17-2
Table 17-2: Database startup and shutdown commands . . . . . . . . . . 17-3
Table 18-1: Server performance parameters . . . . . . . . . . 18-3
Table 18-2: Server-type parameters . . . . . . . . . . 18-5
Table 18-3: . . . . . . . . . . 18-6
Table 18-4: . . . . . . . . . . 18-7
Table 18-5: . . . . . . . . . . 18-8
Table 18-6: . . . . . . . . . . 18-9
Table 18-7: . . . . . . . . . . 18-11
Table 18-8: . . . . . . . . . . 18-19
Table 18-9: . . . . . . . . . . 18-45
Table 18-10: . . . . . . . . . . 18-48
Table 18-11: . . . . . . . . . . 18-50
Table 19-1: . . . . . . . . . . 19-4
Table 19-2: . . . . . . . . . . 19-6
Table 19-3: . . . . . . . . . . 19-14
Table 19-4: . . . . . . . . . . 19-14
Table 19-5: . . . . . . . . . . 19-17
Table 19-6: . . . . . . . . . . 19-19
Table 19-7: . . . . . . . . . . 19-27
Table 19-8: . . . . . . . . . . 19-28
Table 19-9: . . . . . . . . . . 19-28
Table 19-10: . . . . . . . . . . 19-30
Table 19-11: . . . . . . . . . . 19-30
Table 19-12: . . . . . . . . . . 19-31
Table 19-13: . . . . . . . . . . 19-36
Table 19-14: . . . . . . . . . . 19-45
Table 19-15: . . . . . . . . . . 19-52
Table 19-16: . . . . . . . . . . 19-54
Table 19-17: . . . . . . . . . . 19-58
Table 20-1: . . . . . . . . . . 20-2
Table 20-2: . . . . . . . . . . 20-32
Table 20-3: . . . . . . . . . . 20-57
Table 20-4: . . . . . . . . . . 20-60
Table 20-5: . . . . . . . . . . 20-61
Table 20-6: . . . . . . . . . . 20-65
Table 20-7: . . . . . . . . . . 20-67
Table 22-1: . . . . . . . . . . 22-2
Table 22-2: . . . . . . . . . . 22-18
Table 22-3: . . . . . . . . . . 22-19
Table 25-1: . . . . . . . . . . 25-3
Table 25-2: . . . . . . . . . . 25-8
Table 25-3: . . . . . . . . . . 25-8
Table 25-4: . . . . . . . . . . 25-9
Table 25-5: . . . . . . . . . . 25-10
Table 25-6: . . . . . . . . . . 25-11
Table 25-7: . . . . . . . . . . 25-11
Table 25-8: . . . . . . . . . . 25-12
Table 25-9: . . . . . . . . . . 25-12
Table 25-10: . . . . . . . . . . 25-13
Table 25-11: . . . . . . . . . . 25-14
Table 25-12: . . . . . . . . . . 25-14
Table 25-13: . . . . . . . . . . 25-15
Table 25-14: . . . . . . . . . . 25-16
Table 25-15: . . . . . . . . . . 25-17
Table 25-16: . . . . . . . . . . 25-17
Table 25-17: . . . . . . . . . . 25-17
Table 25-18: . . . . . . . . . . 25-18
Table 25-19: . . . . . . . . . . 25-18
Table 25-20: . . . . . . . . . . 25-19
Table 25-21: . . . . . . . . . . 25-19
Contents17
Contents
Table 2522:
Table 2523:
Table 2524:
Table 2525:
Table 2526:
Table 2527:
Table 2528:
Table 2529:
Table 2530:
Table 2531:
Table 2532:
Table 2533:
Table 2534:
Table 2535:
Table 2536:
Table 2537:
Table 2538:
Table 2539:
Table 2540:
Table 2541:
Table 2542:
Table 2543:
Table 2544:
Table 2545:
Table 2546:
Contents18
2521
2522
2523
2524
2524
2524
2525
2525
2526
2528
2529
2529
2529
2529
2530
2531
2531
2531
2532
2532
2533
2533
2534
2534
2537
Preface
This Preface contains the following sections:
Purpose
Audience
Organization
Typographical conventions
OpenEdge messages
Purpose
This manual describes OpenEdge RDBMS administration concepts, procedures, and utilities.
The procedures allow you to create and maintain your OpenEdge databases and manage their
performance. This manual assumes that you are familiar with the OpenEdge RDBMS planning
concepts discussed in OpenEdge Getting Started: Database Essentials.
Audience
This manual is designed as a guide and reference for OpenEdge Database Administrators.
Organization
Part I, Database Basics
Chapter 1, Creating and Deleting Databases
Describes how to create and delete OpenEdge databases.
Chapter 2, OpenEdge RDBMS Limits
Catalogs limits of the OpenEdge RDBMS, including all aspects of database size, operating
system limits, naming conventions, and data types.
Chapter 3, Starting Up and Shutting Down
Describes the commands required to start up and shut down an OpenEdge database.
Part II, Protecting Your Data
Chapter 4, Backup Strategies
Discusses various approaches to backing up your database.
Chapter 5, Backing Up a Database
Describes the mechanics of backing up your database with the PROBKUP utility.
Chapter 6, Recovering a Database
Examines recovery strategies and how to use the PROREST utility to restore an OpenEdge
database.
Chapter 7, After-imaging
Presents after-imaging and how to use it for data recovery. Also, describes how to
implement after-imaging with after-image extents.
Chapter 8, Maintaining Security
Describes how to implement database security, including assigning user IDs and
designating database administrators.
Chapter 9, Auditing
Introduces auditing, including how to enable and disable auditing on your database and
what can be audited.
Chapter 10, Replicating Data
Examines replication schemes and how to implement log-based replication.
Chapter 11, Failover Clusters
Explains how to configure and manage a cluster-enabled database.
Chapter 12, Distributed Transaction Processing
Explains distributed transaction processing, and discusses support for two-phase commit
and the Java Transaction API (JTA).
Part III, Maintaining and Monitoring Your Database
Chapter 13, Managing Performance
Discusses how to monitor and tune database performance.
Chapter 14, Maintaining Database Structure
Describes methods to manage the database structure and alter it as necessary to improve
storage and performance.
Chapter 15, Dumping and Loading
Explains how to dump and load databases, including tables, indexes, and sequences.
Chapter 16, Logged Data
Examines the process of logging significant database events.
Part IV, Reference
Chapter 17, Startup and Shutdown Commands
Catalogs the OpenEdge RDBMS commands for starting up and shutting down database
sessions and processes.
Chapter 18, Database Startup Parameters
Lists and details the OpenEdge RDBMS startup parameters.
Chapter 19, PROMON Utility
Details the PROMON Utility used for monitoring your database.
Chapter 20, PROUTIL Utility
Details the PROUTIL Utility used for maintaining your database.
Chapter 21, PROSTRCT Utility
Details the PROSTRCT Utility used for creating and updating the physical structure of
your database.
Chapter 22, RFUTIL Utility
Details the RFUTIL Utility used for managing after-imaging.
Chapter 23, Other Database Administration Utilities
Details other database utilities including PROBKUP, PROREST, PROCOPY, PRODEL,
and PROLOG.
Chapter 24, SQL Utilities
Details the utilities used for maintaining your database for use with SQL.
Chapter 25, Virtual System Tables
Describes the Virtual System Tables that allow ABL and SQL applications to examine the
status of a database and monitor its performance.
Part I, Database basics, describes the basic commands for creating and deleting, and
starting up and shutting down databases, along with detailing database limits.
Part II, Protecting your data, describes the procedures a database administrator uses to
protect a database in a flexible business environment. Each chapter discusses a particular
administrative activity.
Part III, Maintaining and monitoring your database, describes the procedures and tools
a database administrator employs to keep a database functioning efficiently.
Part IV, Reference, describes the OpenEdge RDBMS commands, startup parameters,
utilities, and system tables. Refer to the chapters in Part IV when you need to access
specific descriptive information, such as the syntax of an administration utility.
For the latest documentation updates see the OpenEdge Product Documentation category on
PSDN http://www.psdn.com/library/kbcategory.jspa?categoryID=129.
Like most other keywords, references to specific built-in data types appear in all
UPPERCASE, using a font that is appropriate to the context. No uppercase reference ever
includes or implies any data type other than itself.

Wherever integer appears, this is a reference to the INTEGER or INT64 data type.

Wherever numeric appears, this is a reference to the INTEGER, INT64, or DECIMAL data type.

References to pre-defined class data types appear in mixed case with initial caps, for example,
Progress.Lang.Object. References to user-defined class data types appear in mixed case, as
specified for a given application example.
Typographical conventions
This manual uses the following typographical conventions:
Bold
    Bold typeface indicates commands or characters the user types, provides
    emphasis, or the names of user interface elements.

Italic
    Italic typeface indicates the title of a document, or signifies new terms.

SMALL, BOLD CAPITAL LETTERS
    Small, bold capital letters indicate OpenEdge key functions and generic
    keyboard keys; for example, GET and CTRL.

KEY1+KEY2
    A plus sign between key names indicates a simultaneous key sequence: you
    press and hold down the first key while pressing the second key. For
    example, CTRL+X.

KEY1 KEY2
    A space between key names indicates a sequential key sequence: you press
    and release the first key, then press another key. For example, ESCAPE H.

Syntax:

Fixed width
    A fixed-width font is used in syntax statements, code examples, system
    output, and filenames.

Fixed-width italics
    Fixed-width italics indicate variables in syntax statements.

Fixed-width bold
    Fixed-width bold indicates variables with special emphasis.

UPPERCASE fixed width
    Uppercase words are ABL keywords. Although these are always shown in
    uppercase, you can type them in either uppercase or lowercase in a
    procedure.

Period (.) or colon (:)
    All statements except DO, FOR, FUNCTION, PROCEDURE, and REPEAT end with a
    period. DO, FOR, FUNCTION, PROCEDURE, and REPEAT statements can end with
    either a period or a colon.

[ ]
    Large brackets indicate the items within them are optional.

[ ]
    Small brackets are part of the ABL.

{ }
    Large braces indicate the items within them are required. They are
    used to simplify complex syntax diagrams.

{ }
    Small braces are part of the ABL. For example, a called external
    procedure must use braces when referencing arguments passed by
    a calling procedure.

...
    Ellipses indicate repetition: you can choose one or more of the preceding
    items.
FOR is one of the statements that can end with either a period or a colon, as in this example:

FOR EACH Customer:
    DISPLAY Name.
END.

In this example, STREAM stream, UNLESS-HIDDEN, and NO-ERROR are optional:

Syntax

DISPLAY [ STREAM stream ] [ UNLESS-HIDDEN ] [ NO-ERROR ]

In this example, the outer (small) brackets are part of the language, and the inner (large) brackets
denote an optional item:

Syntax

INITIAL [ constant [ , constant ] ]

A called external procedure must use braces when referencing compile-time arguments passed
by a calling procedure, as shown in this example:

Syntax

{ &argument-name }

In this example, EACH, FIRST, and LAST are optional, but you can choose only one of them:

Syntax

PRESELECT [ EACH | FIRST | LAST ] record-phrase

In this example, you must include two expressions, and optionally you can include more.
Multiple expressions are separated by commas:

Syntax

MAXIMUM ( expression , expression [ , expression ] ... )

In this example, you must specify MESSAGE and at least one expression or SKIP [ ( n ) ], and
any number of additional expression or SKIP [ ( n ) ] is allowed:

Syntax

MESSAGE { expression | SKIP [ ( n ) ] } ...

In this example, you must specify { include-file, then optionally any number of argument or
&argument-name = "argument-value", and then terminate with }:

Syntax

{ include-file
    [ argument | &argument-name = "argument-value" ] ... }

In this example, WITH is followed by several optional items:

Syntax

WITH [ ACCUM max-length ] [ expression DOWN ]
    [ CENTERED ] [ n COLUMNS ] [ SIDE-LABELS ]
    [ STREAM-IO ]
OpenEdge messages
OpenEdge displays several types of messages to inform you of routine and unusual occurrences:
Compile messages inform you of errors found while OpenEdge is reading and analyzing
a procedure before running it; for example, if a procedure references a table name that is
not defined in the database.
Startup messages inform you of unusual conditions detected while OpenEdge is getting
ready to execute; for example, if you entered an invalid startup parameter.

Execution messages inform you of errors encountered while OpenEdge is running a
procedure; for example, if OpenEdge cannot find a record with a specified index field value.

After displaying a message, OpenEdge proceeds in one of several ways:

Continues execution, subject to the error-processing actions that you specify or that are
assumed as part of the procedure. This is the most common action taken after execution
messages.
Returns to the Procedure Editor, so you can correct an error in a procedure. This is the
usual action taken after compiler messages.
Halts processing of a procedure and returns immediately to the Procedure Editor. This
does not happen often.
OpenEdge messages end with a message number in parentheses. In this example, the message
number is 200:

** Unknown table name table. (200)
If you encounter an error that terminates OpenEdge, note the message number before restarting.
Choose Help→ Recent Messages to display detailed descriptions of the most recent
OpenEdge message and all other messages returned in the current session.

Choose Help→ Messages and then type the message number to display a description of a
specific OpenEdge message.
On UNIX platforms, use the OpenEdge pro command to start a single-user mode character
OpenEdge client session and view a brief description of a message by providing its number.
To use the pro command to obtain a message description by message number:
1. Start the Procedure Editor by entering the following command:

   OpenEdge-install-dir/bin/pro

2. Press F3 to access the Procedure Editor menu, then choose Help→ Messages.

3. Type the message number and press ENTER. Details about that message number appear.

4. Press F4 to close the message, press F3 to access the Procedure Editor menu, and choose
   File→ Exit.
MERCHANTABILITY OF THIS SOFTWARE OR ITS FITNESS FOR ANY PARTICULAR
PURPOSE.
OpenEdge includes the RSA Data Security, Inc. MD5 Message-Digest Algorithm. Copyright
1991-2, RSA Data Security, Inc. Created 1991. All rights reserved.
OpenEdge includes software developed by the World Wide Web Consortium. Copyright
1994-2002 World Wide Web Consortium, (Massachusetts Institute of Technology, European
Research Consortium for Informatics and Mathematics, Keio University). All rights reserved.
This work is distributed under the W3C Software License
[http://www.w3.org/Consortium/Legal/2002/copyright-software-20021231] in the hope
that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty
of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
OpenEdge includes Sonic software, which includes software developed by Apache Software
Foundation (http://www.apache.org/). Copyright 1999-2000 The Apache Software
Foundation. All rights reserved. The names Ant, Axis, Xalan, FOP, The Jakarta
Project, Tomcat, Xerces and/or Apache Software Foundation must not be used to
endorse or promote products derived from the Product without prior written permission. Any
product derived from the Product may not be called Apache, nor may Apache appear in
their name, without prior written permission. For written permission, please contact
[email protected].
OpenEdge includes Sonic software, which includes the JMX Technology from Sun
Microsystems, Inc. Use and Distribution is subject to the Sun Community Source License
available at http://sun.com/software/communitysource.
OpenEdge includes Sonic software, which includes software developed by the ModelObjects
Group (http://www.modelobjects.com). Copyright 2000-2001 ModelObjects Group. All
rights reserved. The name ModelObjects must not be used to endorse or promote products
derived from this software without prior written permission. Products derived from this
software may not be called ModelObjects, nor may ModelObjects appear in their name,
without prior written permission. For written permission, please contact
[email protected].
OpenEdge includes Sonic software, which includes files that are subject to the Netscape Public
License Version 1.1 (the License); you may not use this file except in compliance with the
License. You may obtain a copy of the License at http://www.mozilla.org/NPL/. Software
distributed under the License is distributed on an AS IS basis, WITHOUT WARRANTY OF
ANY KIND, either express or implied. See the License for the specific language governing
rights and limitations under the License. The Original Code is Mozilla Communicator client
code, released March 31, 1998. The Initial Developer of the Original Code is Netscape
Communications Corporation. Portions created by Netscape are Copyright 1998-1999
Netscape Communications Corporation. All Rights Reserved.
OpenEdge includes software Copyright 2003-2006, Terence Parr All rights reserved. Neither
the name of the author nor the names of its contributors may be used to endorse or promote
products derived from this software without specific prior written permission. Software
distributed on an AS IS basis, WITHOUT WARRANTY OF ANY KIND, either express or
implied. See the License for the specific language governing rights and limitations under the
License agreement that accompanies the product.
OpenEdge includes ICU software 1.8 and later - Copyright 1995-2003 International Business
Machines Corporation and others All rights reserved. Permission is hereby granted, free of
charge, to any person obtaining a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, and/or sell copies of the Software, and to permit
persons to whom the Software is furnished to do so, provided that the above copyright notice(s)
and this permission notice appear in all copies of the Software and that both the above copyright
notice(s) and this permission notice appear in supporting documentation.
OpenEdge includes software developed by the OpenSSL Project for use in the OpenSSL
Toolkit (http://www.openssl.org/). Copyright 1998-2007 The OpenSSL Project. All
rights reserved. This product includes cryptographic software written by Eric Young
([email protected]). This product includes software written by Tim Hudson
([email protected]). Copyright 1995-1998 Eric Young ([email protected]) All rights
reserved. The names "OpenSSL Toolkit" and "OpenSSL Project" must not be used to endorse
or promote products derived from this software without prior written permission. For written
permission, please contact [email protected]. Products derived from this software may
not be called "OpenSSL" nor may "OpenSSL" appear in their names without prior written
permission of the OpenSSL Project. Software distributed on an "AS IS" basis, WITHOUT
WARRANTY OF ANY KIND, either express or implied. See the License for the specific
language governing rights and limitations under the License agreement that accompanies the
product.
OpenEdge includes Sonic software which includes a version of the Saxon XSLT and XQuery
Processor from Saxonica Limited that has been modified by Progress Software Corporation.
The contents of the Saxon source code and the modified source code file (Configuration.java)
are subject to the Mozilla Public License Version 1.0 (the License); you may not use these
files except in compliance with the License. You may obtain a copy of the License at
http://www.mozilla.org/MPL/ and a copy of the license (MPL-1.0.html) can also be found
in the installation directory, in the Docs7.5/third_party_licenses folder, along with a copy of the
modified code (Configuration.java); and a description of the modifications can be found in the
Progress SonicMQ and Progress Sonic ESB v7.5 README files. Software distributed under
the License is distributed on an AS IS basis, WITHOUT WARRANTY OF ANY KIND,
either express or implied. See the License for the specific language governing rights and
limitations under the License. The Original Code is The SAXON XSLT and XQuery Processor
from Saxonica Limited. The Initial Developer of the Original Code is Michael Kay
http://www.saxonica.com/products.html). Portions created by Michael Kay are Copyright
2001-2005. All rights reserved. Portions created by Progress Software Corporation are
Copyright 2007. All rights reserved.
OpenEdge includes software developed by IBM. Copyright 1995-2003 International
Business Machines Corporation and others. All rights reserved. Permission is hereby granted,
free of charge, to any person obtaining a copy of this software and associated documentation
files (the "Software"), to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, and/or sell copies of the Software,
and to permit persons to whom the Software is furnished to do so, provided that the above
copyright notice(s) and this permission notice appear in all copies of the Software and that both
the above copyright notice(s) and this permission notice appear in supporting documentation.
Software distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY KIND, either
express or implied. See the License for the specific language governing rights and limitations
under the License agreement that accompanies the product. Except as contained in this notice,
the name of a copyright holder shall not be used in advertising or otherwise to promote the sale,
use or other dealings in this Software without prior written authorization of the copyright holder.
OpenEdge includes Sonic software, which includes software developed by ExoLab Project
(http://www.exolab.org/). Copyright 2000 Intalio Inc. All rights reserved. The names
Castor and/or ExoLab must not be used to endorse or promote products derived from the
Products without prior written permission. For written permission, please contact
[email protected]. Exolab, Castor and Intalio are trademarks of Intalio Inc.
OpenEdge includes Sonic software, which includes software Copyright 1999 CERN European Organization for Nuclear Research. Permission to use, copy, modify, distribute and
sell this software and its documentation for any purpose is hereby granted without fee, provided
that the above copyright notice appear in all copies and that both that copyright notice and this
permission notice appear in supporting documentation. CERN makes no representations about
the suitability of this software for any purpose. It is provided "as is" without expressed or
implied warranty.
OpenEdge includes Sonic software, which includes software developed by the University
Corporation for Advanced Internet Development http://www.ucaid.edu Internet2 Project.
Copyright 2002 University Corporation for Advanced Internet Development, Inc. All rights
reserved. Neither the name of OpenSAML nor the names of its contributors, nor Internet2, nor
the University Corporation for Advanced Internet Development, Inc., nor UCAID may be used
to endorse or promote products derived from this software and products derived from this
software may not be called OpenSAML, Internet2, UCAID, or the University Corporation for
Advanced Internet Development, nor may OpenSAML appear in their name without prior
written permission of the University Corporation for Advanced Internet Development. For
written permission, please contact [email protected].
OpenEdge includes DataDirect products for the Microsoft SQL Server database which contains
a licensed implementation of the Microsoft TDS Protocol.
OpenEdge includes Sonic software, which includes code licensed from Mort Bay Consulting
Pty. Ltd. The Jetty Package is Copyright 1998 Mort Bay Consulting Pty. Ltd. (Australia) and
others.
Part I
Database Basics
Chapter 1, Creating and Deleting Databases
Chapter 2, OpenEdge RDBMS Limits
Chapter 3, Starting Up and Shutting Down
1
Creating and Deleting Databases
This chapter describes the methods to create and delete an OpenEdge database, as detailed in
the following sections:
Copying a database
Deleting a database
The Data Dictionary tool if you are using a graphical interface or a character interface
When you create a database, you can create either of the following:
Note: Do not create your database in the OpenEdge Install directory or in any subdirectory
of the Install directory. Databases residing in these directories cannot be opened.
Create a structure description (.st) file to define storage areas and extents
Optionally, the number of blocks per cluster (for Type II data storage areas)
13
line
= comment
comment
= *
CR
= blank line
type
= a
type path
areainfo =
areaname
areanum
recsPerBlock
= string
= numeric value
= numeric value
blksPerCluster = 1
path
[sizeinfo]
[areainfo]
areaname[:areanum][,recsPerBlock][;blksPerCluster]
CR
64
512
= string
Note:
14
= f | v
= numeric value > 32
You can comment a .st file and use blank lines. Precede comments with a pound sign
(#), colon (:), or asterisk (*).
ST file tokens:

type
    Indicates the type of storage area. Possible values are:
    a    After-image area
    b    Before-image area
    d    Schema or application data area
    t    Transaction log area

areaname
    The name of the storage area.

areanum
    The number of the storage area; a unique identifier that differentiates it
    from other areas.

recsPerBlock
    The number of records per block. For the schema area, records per block is
    fixed at 32 for 1K, 2K, and 4K database block sizes. The records per block is
    64 for an 8K database block size.

blksPerCluster
    The number of blocks per cluster. If you leave this value blank, or specify 1,
    the data area will be Type I. All other values are valid for Type II data areas
    only.

path
    The pathname of the extent.

extentType
    Indicates whether the extent is fixed-length (f) or variable-length (v).

size
    The size of a fixed-length extent, in kilobytes.
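Putting the tokens together, a single data-area extent line might look like the following sketch; the area name, area number, pathname, and size here are illustrative, not taken from a real database:

```
d "Cust_Data":9,64;512 /usr1/sp2k_9.d1 f 4096
```

This line defines a fixed-length 4096K extent in a Type II area named Cust_Data, area number 9, with 64 records per block and 512 blocks per cluster.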
The control area (.db) and the log file (.lg) are placed in the directory specified by the
command line dbname parameter.
If a relative pathname is provided, including using common dot (.) notation, the relative
pathname will be expanded to an absolute pathname. Relative paths begin in your current
working directory.
For before-image extents, the filename is the database name with a .bn extension, where n
represents the order in which the extents were created, beginning with 1.

For after-image extents, the filename is the database name with a .an extension, where n
represents the order in which the extents were created, beginning with 1.

For transaction log extents, the filename is the database name with a .tn extension, where n
represents the order in which the extents were created, beginning with 1.
For schema area extents, the filename is the database name with a .dn extension, where n
represents the order in which the extents were created and will be used.
For application data area extents, the filename is the database name followed by an
underscore and the area number (for example, customer_7.d1). The area number is a
unique identifier that differentiates between different areas. The application data area
extent filenames also have a .dn extension, where n represents the order in which the
extents were created and will be used.
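Applying these naming rules to a database named sports2000 that has, for example, two before-image extents, two after-image extents, and an application data area numbered 7 with two extents would produce names like the following (the extent counts are illustrative assumptions):

```
sports2000.db                      control area
sports2000.b1    sports2000.b2    before-image extents
sports2000.a1    sports2000.a2    after-image extents
sports2000.d1                      schema area extent
sports2000_7.d1  sports2000_7.d2  application data area 7 extents
```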
Note: In a structure description (.st) file, to specify a pathname that contains spaces (such
as \usr1\misc data), precede the pathname with an exclamation point (!) and wrap
the pathname in quotation marks (" "). For example: !"\usr1\misc data".
The minimum information required in a .st file is one schema area extent definition
statement and one primary recovery (BI) area extent definition statement.
The minimum information needed to specify any extent is the storage area type and extent
pathname. For example:
#
b .
#
d .
If you do not define a primary recovery extent path in the .st file, the PROSTRCT
CREATE utility generates an error.
You cannot use any of the reserved storage area names as application data storage area
names.
Extent length
You can specify a fixed-length or variable-length extent:
Fixed-length When you create a fixed-length extent, its blocks are preallocated and
preformatted specifically for the database. If you want the extent to be fixed length, the
extent type token of the extent description line is f. This token must be lowercase. If the
extent is fixed length, use the extent size token to indicate its length in kilobytes.
The size of the extent, in kilobytes, must be a multiple of (16 * database-blocksize). If
you specify a size that is not a multiple of this, PROSTRCT CREATE displays a warning
message and rounds the size up to the next multiple of (16 * database-blocksize). The
minimum length for a fixed-length file is 32K, and the maximum length of a file depends
on the size of the file system and the physical volume containing the extent.
Database block size    Formula       Extent size
1K                     2*(16 * 1)    32K
1K                     3*(16 * 1)    48K
2K                     1*(16 * 2)    32K
2K                     2*(16 * 2)    64K
2K                     3*(16 * 2)    96K
4K                     1*(16 * 4)    64K
4K                     2*(16 * 4)    128K
4K                     3*(16 * 4)    192K
8K                     1*(16 * 8)    128K
8K                     2*(16 * 8)    256K
8K                     3*(16 * 8)    384K
Notes: Regardless of whether the extent is fixed- or variable-length, if it is in a data area with
clusters, its size must be large enough to hold an entire cluster, or PROSTRCT
generates an error.
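As a quick numeric check of the sizing rule above, this small Python sketch (not part of any OpenEdge utility) rounds a requested fixed-length extent size the way the rule describes:

```python
def rounded_extent_kb(requested_kb, block_size_kb):
    # A fixed-length extent size must be a multiple of (16 * database-blocksize);
    # PROSTRCT CREATE rounds a nonconforming size up to the next multiple.
    unit_kb = 16 * block_size_kb
    rounded = ((requested_kb + unit_kb - 1) // unit_kb) * unit_kb
    # The minimum fixed-length extent size is 32K.
    return max(rounded, 32)

# With a 4K block size, a requested 100K extent rounds up to 128K.
print(rounded_extent_kb(100, 4))
```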
Example structure description file
The example that follows shows a .st file named sports2000.st that defines a database with:
One transaction log area with a fixed-length extent used with two-phase commit.

Six application data areas, each with one fixed-length and one variable-length extent. The
area names for the six application data areas are: Employee, Inventory, Cust_Data,
Cust_Index, Order, and Misc. Note that the Cust_Data, Cust_Index, and Order areas have
cluster sizes assigned to them, and are therefore Type II storage areas. The area numbers
are not specified, so the areas will be numbered sequentially starting at 7. For the
remaining areas, blocks per cluster is not specified, creating Type I data areas.
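As a sketch, a structure description file matching the description above could look like the following; the extent pathnames (here the relative path ".") and extent sizes are assumptions, not the original sports2000.st listing:

```
# sports2000.st (sketch; pathnames and sizes are assumed)
#
b . f 1024
b .
#
d "Schema Area" .
#
d "Employee",32 . f 1024
d "Employee",32 .
#
d "Inventory",32 . f 1024
d "Inventory",32 .
#
d "Cust_Data",32;64 . f 1024
d "Cust_Data",32;64 .
#
d "Cust_Index",32;64 . f 1024
d "Cust_Index",32;64 .
#
d "Order",32;64 . f 1024
d "Order",32;64 .
#
d "Misc",32 . f 1024
d "Misc",32 .
#
a . f 1024
a .
#
t . f 1024
```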
For more information on enabling large file processing, see the PROUTIL
ENABLELARGEFILES qualifier section on page 20-47.
Syntax:

prostrct create db-name [ structure-description-file ]
    [ -blocksize blocksize ] [ -validate ]
In the syntax block, structure-description-file represents the name of the .st file. If you
do not specify the name of the .st file, PROSTRCT uses db-name.st. The database blocksize
in kilobytes is represented by blocksize. Specify 1024, 2048, 4096, or 8192. If you specify
-validate, PROSTRCT will check that the contents of your structure description file are
accurate without creating the database.
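Assuming a database named sports2000 defined by sports2000.st in the current directory and an 8K block size, the commands might look like this sketch:

```
prostrct create sports2000 sports2000.st -blocksize 8192 -validate
prostrct create sports2000 sports2000.st -blocksize 8192
```

The first command only checks the structure description file; the second creates the void database.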
To create a database named sports2000 from the sports2000.st structure description file
using PROSTRCT CREATE:
1.
First verify that your structure description file is accurate, use the -validate option to
PROSTRCT CREATE as follows:
If there are no errors in your structure description file, PROSTRCT returns a status similar
to the following:
2.
Use the PROCOPY utility to copy the system tables from an empty OpenEdge database
into the void database you created with PROSTRCT CREATE as shown:
where emptyn is the source database and db-name is the target database. n indicates the
block size of the OpenEdge-supplied empty.db. See the PRODB utility section on
page 23-22 for more information about the empty and emptyn databases.
Use the PROSTRCT LIST utility to verify that you have the correct database files in the
correct locations as shown:
Caution: If you do not specify an output file when using PROSTRCT LIST, your
existing .st file will be overwritten to reflect the current structure.
3.
Use the Data Dictionary to load the existing user tables (.df file) into your database.
Note: The empty .db file and the database you want copied to it must have the same block
size. Similarly, if the blocks per cluster token was used with any data areas, the data
areas of the empty .db file and those of the database you want copied to it must
have the same blocks per cluster value.
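The PROCOPY and PROSTRCT LIST steps above might be issued as follows, assuming an 8K block size; the empty database path and the output file name are illustrative:

```
procopy OpenEdge-install-dir/empty8 sports2000
prostrct list sports2000 sports2000.list.st
```

Naming an explicit output file (here sports2000.list.st) keeps PROSTRCT LIST from overwriting the existing sports2000.st.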
When using PRODB to create a copy of a database, all the files of the database copy
will reside in the same directory, unless a .st file already exists for the target database.
See the PRODB utility section on page 23-22 for more information about PRODB.
Examples
To create an empty database called mysample from a copy of the default empty database, enter
the following:
prodb mysample empty
To create a new database called mysports2000 from a copy of the sports2000 database, enter
the following:
prodb mysports2000 sports2000
To create a new database called pastinfo from a copy of an existing database named
currentinfo, enter the following:
prodb pastinfo currentinfo
PRODB does not copy the external triggers associated with the database you are copying.
Note:
See Chapter 14, Maintaining Database Structure, for information about changing
pathname conventions when adding storage areas and extents to a structure description
file.
In the following sample output of the PROSTRCT LIST utility, note the dot (.) notation of the
relative pathname of the database, example1.db:
See OpenEdge Getting Started: Database Essentials for an explanation of relative- and
absolute-pathnames.
2.
3.
Truncate your before-image file. PROUTIL will not convert your Version 9 database
schema if you do not truncate the before-image file before you start the conversion.
4.
5.
Verify that your Version 9 backup exists, then install OpenEdge Release 10 following the
instructions in OpenEdge Getting Started: Installation and Configuration.
6.
7.
After you have successfully converted your database, back up your OpenEdge Release 10
database.
You should back up your Release 10 database in case it is damaged due to some failure. If
you have only a Progress Version 9 backup of your database, then you would need to go
through the conversion process again to convert your Version 9 backup to an OpenEdge
Release 10 database.
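In the steps above, the before-image truncation and the conversion itself are performed with PROUTIL; as a sketch, with db-name as a placeholder for your database name:

```
proutil db-name -C truncate bi
proutil db-name -C conv910
```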
Truncate the database's BI file. PROUTIL will send an error message if you do not.
2.
3.
Enter the following syntax to begin the schema move, where dbname is the name of the
converted database:
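The syntax referenced above is PROUTIL with the MVSCH qualifier; in sketch form:

```
proutil dbname -C mvsch
```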
Figure 1-1 illustrates the schema move. Without the Schema Mover, PROUTIL CONV910
converts OldDB.db to NewDB.db, and even after a dump and load with index rebuild,
NewDB.d1 continues to hold the 60 GB of disk space of the old default area. With the Schema
Mover, the schema is moved to the next available area, the areas are renamed (the old default
area becomes NewDB_7.d1), and the unused space in the old default area can then be
recovered.
Data in the Old Default Area not moved prior to area truncation will be lost.
An untruncated BI file
After-imaging enabled
JTA enabled
Therefore, prior to opening an existing 10.1A database with Release 10.1C, you must:
Once you have migrated a database to Release 10.1C, you cannot open it with 10.1A without
first reverting its format. Some databases can be reverted with PROUTIL REVERT, but not all
databases can be reverted without a dump and load. For information on PROUTIL REVERT,
see the Reverting your database to Release 10.1A from Release 10.1C section on page 1-21.
Backup formats: Backup formats are incompatible between Release 10.1A and
Release 10.1C. The following restrictions exist:
...
The 10.1A utilities are found in a Release 10.1C installation in the directory
OpenEdge-install-dir/bin/101dbutils.
You can restore a 10.1A backup with the 10.1C PROREST provided that:
The target database will be a 10.1C database. If the target exists, and is a 10.1A
database, it will be converted to 10.1C format as part of the PROREST restore.
An older client cannot access a row that is addressed beyond the 2-billion row limit.
An older client cannot update or delete any row or field that is referenced by a large index
key.
An older client cannot reference a sequence with a value greater than can be stored in a
32-bit integer.
When a client tries to access data beyond the 2-billion row limit or a large sequence value, the
server disconnects the client from the database with this error:
When a client tries to update a large index key, the client is shut down with this error:
During the shutdown, the client generates a core dump and stack trace. It is not possible to
predict when the 2-billion row limit will be exceeded, and the limit is not confined to the actual
data. For example, you could encounter this error by expanding an area while creating a record,
or during the retrieval of an existing record with a ROWID beyond the 2-billion row limit.
The database has enabled support for large key entries for indexes.
The database has a Type II area with a high-water mark utilizing 64-bit DB keys; this
includes a LOB with segments utilizing 64-bit block values.
1.	It determines that the user has sufficient privilege to execute the command. The privilege
check is limited to file system access to the database.

2.	It analyzes the features of the database to determine if the database can be reverted by the
utility. If not, the utility issues messages indicating why the database cannot be reverted,
and exits.
The following sample output is from an attempt to revert a database that does not meet the
reversion requirements:
Revert Utility

Feature                          Enabled   Active
-------------------------------  -------   ------
64 Bit DBKEYS                    Yes       Yes
Large Keys                       Yes       Yes
64 Bit Sequences                 Yes       Yes

Revert: The database ...
Revert: The database ...
Revert: The database ...
Revert: The database ...
3.	It prompts the user to confirm that the database has been backed up.

4.	It performs the physical fixes necessary to revert the database. Fixes include the following:
5.
6.
Revert Utility

Feature                          Enabled   Active
-------------------------------  -------   ------
Database Auditing                Yes       Yes
64 Bit DBKEYS                    Yes       Yes
When PROUTIL REVERT completes successfully, the database is in 10.1A format. You
should perform a backup with 10.1A; previous backups in 10.1C format are incompatible with
the reverted database.
For more information on PROUTIL REVERT, see the "PROUTIL REVERT qualifier" section
on page 20–77.
Reverting your database manually
If PROUTIL REVERT cannot revert your database, you must dump and reload it, after ensuring
that the contents are backward-compatible. This will require that you:
Delete any indexes with keys wider than the supported 10.1A maximum of 200 characters.
Remove any sequences that have values larger than a 32-bit signed integer.
Break up your tables so that no table has more than 2 billion rows.
Remove or renumber areas so that there are no areas with an area number greater than
1000.
Copying a database
To copy a source database to a target database, use one of the following:
PROCOPY utility
PRODB utility
These utilities copy the database structure as well as its contents. Consequently, a target
database must contain the same physical structure as the source database. For example, it must
have the same number of storage areas, the same records per block, the same blocks per cluster,
and the same block size.
Note:
See the "PRODB utility" section on page 23–22 for more information on using PRODB to copy
a database. For more information about using the Data Dictionary or Data Administration tool
in a graphical interface to copy a database, see the applicable online help system. For more
information about using the Data Dictionary to copy a database in a character interface, see
OpenEdge Development: Basic Database Tools.
PROCOPY supports storage areas. Therefore, if a target database exists, it must contain at a
minimum the same type and number of storage areas and same extent types as the source
database. However, the number of extents in the storage areas of the target database do not need
to match the number of extents in the source database. PROCOPY attempts to extend the
existing extents in the target database to accommodate the possible increase in size.
If a target database does not exist, PROCOPY creates one using an existing structure description
(.st) file in the target database directory. If a .st file does not exist, PROCOPY creates the
target database using the structure of the source database and places all of the extents in the same
directory as the target database structure (.db) file, even when the source database resides in
multiple directories.
Copying a database
PROCOPY uses absolute pathnames
When you use the PROCOPY utility, the target database you create always has an absolute
pathname regardless of the pathname convention used by the source database.
For example, if you use PROCOPY to create a database, name it example1, and use a relative
path database such as sports2000 as the source database, example1 will have an absolute
pathname even though the source database, sports2000, uses a relative pathname. Use
PROSTRCT LIST to verify the absolute pathname of your target database, as shown:
In the following sample output of the PROSTRCT LIST utility, note the absolute pathname of
the database, example1.db:
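The sample output is not reproduced here, but a session of the following shape illustrates the check. The database names follow the example above; both commands are OpenEdge utilities and must be run from a configured OpenEdge environment:

```shell
# Copy the relative-path sports2000 database to a new database named example1
procopy sports2000 example1

# List the structure of the copy; the extent pathnames reported for
# example1 are absolute, even though sports2000 uses a relative pathname
prostrct list example1
```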
Deleting a database
Use the PRODEL utility to delete a database. For example:
prodel mysports2000
When you delete a database, PRODEL displays a message that it is deleting all files that start
with db-name (the name of the database). PRODEL prompts you to confirm the deletions,
depending on your system.
When you delete a database, PRODEL deletes all associated files and extents that were created
using the structure description file (database, log, before-image and after-image files, and with
two-phase commit, the transaction log file).
Note:
The PRODEL utility does not delete the structure description file, so a record of your
database structure is retained.
2
OpenEdge RDBMS Limits
This chapter lists the limits you need to know when configuring a database and supporting an
application development environment, as described in the following sections:
File Handles
Shared memory
The default block size is 4K for Windows and Linux, and 8K for UNIX.
You cannot change the number of records per block for the schema data storage areas. It will
always remain 32 for 1K, 2K, and 4K database block sizes and 64 for an 8K database block size.
The records per block are only tunable for data areas.
Contents                                              File extension
Control                                               .db
Primary Recovery                                      .bn
Transaction Log (Two-phase Commit Transaction Log)    .tn
After Image                                           .an
Schema (Schema Data)                                  .dn
Application Data                                      .dn
The maximum size of a storage area is fixed at approximately one petabyte when large files are
enabled. A maximum of 1024 extents per area and a maximum size of one terabyte per extent
yield the maximum area size calculated as:

1024 extents * 1TB per extent = 2^10 * 2^40 bytes = 2^50 bytes = 1PB
The maximum number of records per area is calculated with the following equation:
Maximum records per area = Maximum area size * records-per-block / block size
The number of records per area is governed by the maximum area size rather than the
addressable rows.
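As a worked instance of the formula, an 8K-block area with the default 64 records per block yields 2^43 addressable records, which matches the 8K row of the table below:

```shell
# Maximum area size is 2^50 bytes (1PB).
# records per area = maximum area size * records-per-block / block size
#                  = 2^50 * 64 / 8192 = 2^43
echo $(( (1 << 50) / 8192 * 64 ))
# prints 8796093022208, i.e. roughly 8.8 trillion records
```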
Table 2–2:  Approximate maximum records per area

Database block size    Records per block    Records per area    Approximate maximum
                                                                records per area (in M)1

8192 bytes (8K)        1                    2^37                137,439 M
                       2                    2^38                274,878 M
                       4                    2^39                549,756 M
                       8                    2^40                1,099,512 M
                       16                   2^41                2,199,024 M
                       32                   2^42                4,398,048 M
                       64 (default)         2^43                8,796,096 M
                       128                  2^44                17,592,192 M
                       256                  2^45                35,184,384 M

4096 bytes (4K)        1                    2^38                274,878 M
                       2                    2^39                549,756 M
                       4                    2^40                1,099,512 M
                       8                    2^41                2,199,024 M
                       16                   2^42                4,398,048 M
                       32 (default)         2^43                8,796,096 M
                       64                   2^44                17,592,192 M
                       128                  2^45                35,184,384 M
                       256                  2^46                70,368,768 M

2048 bytes (2K)        1                    2^39                549,756 M
                       2                    2^40                1,099,512 M
                       4                    2^41                2,199,024 M
                       8                    2^42                4,398,048 M
                       16                   2^43                8,796,096 M
                       32 (default)         2^44                17,592,192 M
                       64                   2^45                35,184,384 M
                       128                  2^46                70,368,768 M
                       256                  2^47                140,737,536 M

1024 bytes (1K)        1                    2^40                1,099,512 M
                       2                    2^41                2,199,024 M
                       4                    2^42                4,398,048 M
                       8                    2^43                8,796,096 M
                       16                   2^44                17,592,192 M
                       32 (default)         2^45                35,184,384 M
                       64                   2^46                70,368,768 M
                       128                  2^47                140,737,536 M
                       256                  2^48                281,475,072 M

1. 1 M = 1 million or 1,000,000
Tables have a maximum number of fields: SQL supports 500, ABL supports 1000.
In Release 10.1B and forward, for new databases with 4K and 8K block sizes, the
total variable-length storage requirements of all fields in an index entry must be less
than 2000 characters.
Note: Because the 2000 character limit includes storage overhead, the actual index
key is limited to approximately 1970 characters.
Databases with 1K and 2K block sizes adhere to the previous index entry size of
approximately 200 characters.
Databases migrated to Release 10.1C adhere to the previous index entry size of
approximately 200 characters unless explicitly enabled to accept larger index entries.
Enable your migrated database with PROUTIL ENABLELARGEKEYS. For more
information, see the "PROUTIL ENABLELARGEKEYS qualifier" section on
page 20–48.
Databases created with OpenEdge Release 10.1B and later have 64-bit sequences. Databases
migrated from a previous release can enable support for 64-bit sequences with the PROUTIL
ENABLESEQ64 command. See the "PROUTIL ENABLESEQ64 qualifier" section on
page 20–49 for details.
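For a migrated database named mydb (an illustrative name), enabling 64-bit sequences is a single PROUTIL call. As a sketch, and with the usual caveat of backing up the database first:

```shell
# Enable 64-bit sequence support on a database migrated from an earlier release
proutil mydb -C enableseq64
```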
Existing sequences with the upper limit specified as the Unknown value (?) were bounded by
the maximum of a signed 32-bit integer in prior releases. When 64-bit sequences are enabled,
they are bounded by the maximum of a signed 64-bit integer.
For more information on 64-bit sequences, see the online Help for the Data Dictionary or Data
Admin.
The maximum number of areas a database can support is 32,000. The first six areas are reserved,
leaving 31,994 available data areas.
If fully utilized, the resulting database size is calculated as:
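A rough version of that calculation, using the 1PB per-area maximum from the previous section (a sketch; the exact figure depends on block size and area configuration):

```shell
# 31,994 available data areas, each up to 1PB:
# 31,994 PB is roughly 31.2 exabytes
awk 'BEGIN { printf "%.1f EB\n", 31994 / 1024 }'
```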
In prior releases, the maximum size of the database was constrained by the 2-billion row limit
which represents the maximum number of rows addressable with a 32-bit ROWID. In Release
10.1B and forward, ROWIDs in Type II storage areas are 64-bit values. The number of rows
addressable by a 64-bit ROWID in a single table expands correspondingly; the maximum number
of rows is now governed by the maximum size of an area (see Table 2–2). This increase in
addressable rows is supported in Type II storage areas only.
Database names can consist of any combination of English letters and numbers, beginning with
A–Z or a–z. They cannot include ABL or SQL reserved words, any accented letters, or the
following special characters:
<
>
File Handles
The OpenEdge RDBMS uses file handles (a UNIX term, roughly equivalent to the number of
open files) when reading and writing to the database and related files. Most operating systems
limit the number of file handles a user process can allocate at one time.
Use the following formula to determine the number of file handles used:
Static Handles + # of .dn files + # of .bn files + # of .an files

Static Handles

Client: Requires nine file handles (PROMSGS + LG + DB + LBI + SRT + STDIN + STDOUT
+ 2). The file handles used for the input and output devices (STDIN and STDOUT) are
allocated by the operating system.
If you are running a server in a UNIX environment that uses sockets for interprocess
communication, add one file handle for each user.
Application programs use additional file handles when reading and writing text files and
when compiling programs. The maximum number of file handles supported by the AVM
(ABL Virtual Machine) is 256.
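As a worked example of the formula, a hypothetical self-service client against a database with four .dn, two .bn, and two .an extents would use:

```shell
# Nine static handles plus one handle per database extent
static=9; dn=4; bn=2; an=2
echo $(( static + dn + bn + an ))
# prints 17
```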
Shared memory
The OpenEdge RDBMS uses shared memory to hold database buffers, latches, and control
information including the lock table, AI buffers, and BI buffers. The amount of shared memory
allocated for database buffers is specified by the -B startup parameter. The maximum value for
-B is 125,000,000 for 32-bit platforms and 1,000,000,000 for 64-bit platforms, but in practice,
you will never set -B that high.
The maximum amount of shared memory that can be allocated is operating system dependent
and will never exceed the following maximums:
32 on 32-bit systems
Actual maximum values are constrained by available system resources. For more information
on shared memory segments, see the "Shared memory allocation" section on page 13–27.
Data type           Limit

BIGINT              -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
BINARY              2000 bytes
BIT                 0 or 1
CHAR                2000 characters
DATE
DECIMAL
DOUBLE PRECISION    2.2250738585072014E-308 through 1.7976931348623157E+308
FLOAT               2.2250738585072014E-308 through 1.7976931348623157E+308
INTEGER             -2,147,483,648 to 2,147,483,647
NUMERIC
REAL                1.175494351E-38F to 3.402823466E+38F
SMALLINT            -32,768 to 32,767
TIME                00:00:00 to 23:59:59
TIMESTAMP
TINYINT             -128 to 127
VARBINARY           31,995 bytes
VARCHAR             31,995 characters
Data type           Limit

BLOB                1GB
CHARACTER
CLOB                1GB
DATE
DATE-TIME
DATE-TIME-TZ
DECIMAL
INTEGER             -2,147,483,648 to 2,147,483,647
INT64               -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
LOGICAL             TRUE/FALSE, YES/NO
Notes: Data columns created using the OpenEdge SQL environment and having a data type
that is not supported in an ABL environment are not accessible by ABL applications.
Data columns created using an ABL environment can be accessed by OpenEdge SQL
applications and utilities.
Arrays of data can contain a maximum of 255 elements.
ABL data type       OpenEdge SQL data type

CHARACTER           VARCHAR
DATE                DATE
DECIMAL             DECIMAL or NUMERIC
INTEGER             INTEGER
INT64               BIGINT
LOGICAL             BIT
RAW                 VARBINARY
RECID               INTEGER
DATE-TIME           TIMESTAMP
3
Starting Up and Shutting Down
This chapter describes how to start up and shut down an OpenEdge database, as detailed in the
following sections:
Progress Explorer: A graphical user interface that provides an easy way for you to
manage OpenEdge servers
AdminServer
An AdminServer is installed on every system where you install an OpenEdge database. The
AdminServer grants access to each instance of an installed OpenEdge product. The
AdminServer must be running in order to use the Progress Explorer configuration tools or
command-line configuration utilities to manage your database.
In Windows-based systems, the AdminServer starts automatically and runs as a service. For
UNIX-based systems, a command-line utility (PROADSV) is used to start and stop the
AdminServer. For more information about the AdminServer, see OpenEdge Getting Started:
Installation and Configuration.
Progress Explorer
Progress Explorer is a graphical administration utility that runs in Windows platforms. To use
the Progress Explorer configuration tools, you must first start Progress Explorer and connect to
a running AdminServer. Explorer then presents you with a view of all the products to which the
AdminServer grants you administrative access.
You can select an instance of each of the products displayed and manage its operation or modify
its configuration. For example, you can do the following:
Connect to an AdminServer
Start, stop, and query the status of OpenEdge databases and associated server groups
For more information about working with the Progress Explorer tool, click the Help icon in the
Progress Explorer application.
Starting Progress Explorer
To launch Progress Explorer, choose Programs > OpenEdge > Progress Explorer Tool from
the Windows Start menu.
DBMAN: Starts, stops, and queries the current configuration of an OpenEdge database.
dbman [-host host-name -port port-number|service-name [-user user-name]]
      -database db-name [-config config-name] -start|-stop|-query

-database db-name

Specifies the name of the database you want to start. It must match the name of a database
in the conmgr.properties file.
-config config-name
Specifies the name of the configuration with which you want to start the database.
-query

Queries the Connection Manager for the status of the database db-name.
-host host-name
Identifies the host machine where the AdminServer is running. The default is the local
host. If your AdminServer is running on a remote host, you must use the -host host-name
parameter to identify the host where the remote AdminServer is running.
-port port-number|service-name
Identifies the port that the AdminServer is listening on. If your AdminServer is running on
a remote host, you must use the -port port-number parameter to identify the port on
which the remote AdminServer is listening. The default port number is 20931.
-user user-name
If your AdminServer is running on a remote host, you must use the -user user-name
parameter to supply a valid user name for that host. You will be prompted for the
password.
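Putting the parameters together, a typical local session might look like the following. The database and configuration names are illustrative and must match entries in the conmgr.properties file:

```shell
# Start the database using a named configuration
dbman -database sports2000 -config defaultConfiguration -start

# Query its status, then stop it
dbman -database sports2000 -query
dbman -database sports2000 -stop
```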
proserve {db-name | -db db-name | -servergroup server-group-name} [parameters]
-db db-name
Specifies the database you want to start a server for (-db is implicit).
-servergroup server-group-name
Specifies the logical collection of server processes to start. The server-group-name you
specify must match the name of a server group in the conmgr.properties file. You create
server groups using the Progress Explorer Database Configuration Tools, which save them
in the conmgr.properties file.
parameters
Specifies the startup parameters for the broker/server. See Chapter 18, Database Startup
Parameters, for a list of broker/server startup parameters.
For more information about the PROSERVE command see Chapter 17, Startup and Shutdown
Commands.
For more information on character sets and character conversion, see Chapter 20, PROUTIL
Utility.
A reserved word that specifies that the database server communicates only with clients on
the database server machine. Not applicable for DataServers.
db-name
Specifies the database you want to start. If the database is not in the current directory, you
must specify the full pathname of the database.
-S service-name
Specifies the database server or broker process service name. You must specify the service
name in a TCP network.
-H host-name

Specifies the machine where the database server runs.

-Mn n

Specifies the maximum number of remote client servers and login brokers that the broker
process can start.
-Mpb n
Specifies the number of servers that the login broker can start to serve remote users. This
applies to the login broker that is being started.
-m3
The -Mn value must be large enough to account for each additional broker
and all servers. If you do not specify -Mpb, the value of -Mn becomes the default.
You must include the -m3 parameter with every secondary broker startup command. While the
-Mpb parameter sets the number of servers a broker can start, the -m3 parameter actually starts
the secondary broker.
If you start multiple brokers, you should also run the Watchdog process (PROWDOG).
PROWDOG enables you to restart a dead secondary broker without shutting down the database
server. For more information on PROWDOG, see the "PROWDOG command" section on
page 17–14.
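The startup sequence above can be sketched as follows. The database name, service names, and server counts are illustrative:

```shell
# Primary broker: -Mn reserves slots for all servers plus the secondary broker
proserve sales -S demosv1 -Mn 5 -Mpb 2

# Secondary broker: -m3 starts it; -Mpb caps the servers it can spawn
proserve sales -S demosv2 -Mpb 2 -m3

# Optionally guard the brokers with the Watchdog process
prowdog sales
```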
A client requesting a connection from the first broker, demosv1, is assigned a port number in the
range of 4000 to 4040. The 4000-to-4040 range limits access to the server by limiting
communication to just 41 ports.
The default for -minport is 1025 for all platforms. Ports lower than 1025 are usually reserved
for system TCP and UDP. The default for -maxport is 2000 for all platforms. Remember that
some operating systems choose transient client ports in the 32,768-to-65,535 range. Choosing
a port in this range might produce unwanted results.
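A broker confined to the 4000-to-4040 range described above might be started as follows (database and service names are illustrative):

```shell
# Clients of demosv1 are assigned one of the 41 ports between 4000 and 4040
proserve sales -S demosv1 -minport 4000 -maxport 4040
```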
The private key that corresponds to the digital certificate the server uses to assert its
identity to an SSL client
A valid digital certificate that asserts the server's identity and contains the Public Key
corresponding to the private key
Note:	SSL incurs heavy performance penalties, depending on the client, server, and network
resources and load. For more information on SSL and the security features of
OpenEdge, see OpenEdge Getting Started: Core Business Services.
proserve db-name -S service-name -ssl [-H host-name]
	[-keyalias key-alias-name] [-keyaliaspasswd password]
	[-nosessioncache] [-sessiontimeout n]

db-name

Specifies the database you want to start. If the database is not in the current directory, you
must specify the full pathname of the database.

-S service-name

Specifies the database server or broker process service name.

-ssl

Specifies that all database and SQL client connections will use SSL.
-keyalias key-alias-name
Specifies the alias name of the SSL private key/digital certificate key-store entry to use.
The default is default_server.
-keyaliaspasswd password
Specifies the SSL key alias password to use to access the server's private key/digital
certificate key-store entry. The default is the encrypted value of password. If you use a
value other than the default, it must be encrypted. You can use the genpassword utility,
located in your installation's bin directory, to encrypt the password.
-nosessioncache
Specifies that SSL session caching is disabled. Session caching allows a client to reuse a
previously established session if it reconnects before the session cache time-out expires.
Session caching is enabled by default.
-sessiontimeout n
Specifies in seconds the length of time an SSL session will be held in the session cache.
The default is 180 seconds.
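Combining these parameters, an SSL-enabled broker startup might be sketched as follows. The database name, service name, and key alias are illustrative:

```shell
# Start an SSL-only broker using a named key-store entry and a
# 300-second SSL session cache timeout
proserve mydb -S 6500 -ssl -keyalias myserver_alias -sessiontimeout 300
```

Because -keyaliaspasswd is omitted, the default encrypted password for the key-store entry is assumed.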
To start an APW process, use Progress Explorer or enter the following command on the local
machine:
proapw db-name
Each APW counts as a process connected to a database and uses resources associated with a
user. You might have to increase the value of the Number of Users (-n) parameter to allow for
APWs. However, APWs are not counted as licensed users.
Stop an APW by disconnecting the process with the PROSHUT command.
For detailed information on the PROAPW and PROSHUT commands, see Chapter 17, Startup
and Shutdown Commands.
probiw db-name
The BIW counts as a process connected to a database and uses resources associated with a user.
You might have to increase the value of the Number of Users (-n) parameter to allow for the
BIW. However, the BIW is not counted as a licensed user.
Stop the BIW by disconnecting the process with the PROSHUT command.
proaiw db-name
You must have after-imaging enabled to use an AIW. For more information on after-imaging,
see Chapter 7, After-imaging.
The AIW counts as a process connected to a database and uses resources associated with a user.
You might have to increase the value of the Number of Users (-n) parameter to allow for the
AIW. However, the AIW is not counted as a licensed user.
To stop the AIW process, disconnect it with the PROSHUT command.
Progress Explorer
DBMAN utility
PROSHUT command
PROMON utility
To shut down the database, you must either be the user who started the database or have root
privileges.
Note:
PROSHUT command
To shut down a database server with the PROSHUT command, enter one of the following:
proshut db-name [-b | -by | -bn] [-F] [-Gw] [-H host-name] [-S service-name]
db-name

Specifies the database you want to shut down.

-b

Indicates a batch shutdown will be performed. When no client is connected, the database
automatically shuts down. When one or more clients are connected, PROSHUT prompts
the user to enter yes to perform an unconditional batch shutdown and to disconnect all
active users, or no to perform a batch shutdown only if there are no active users. The -b
parameter combines the functionality of the -by and -bn parameters.
-by
Directs the broker to perform an unconditional batch shutdown and to disconnect all active
users.
-bn

Directs the broker to perform a batch shutdown only if there are no active users.
-H host-name
Specifies the machine where the database server runs. You must specify the host name if
you issue the shutdown command from a machine other than the host.
-S service-name
Specifies the database server or broker process service name. A TCP network requires the
-S parameter.
-F
Forces an emergency shutdown, on UNIX systems only. To use this parameter, you must
run PROSHUT on the machine where the server resides. This parameter is not applicable
for remote shutdowns or DataServer shutdowns.
Caution: Using -by with -F causes an emergency shutdown.
-Gw
1	Disconnect a User

	Prompts you for the number of the user you want to disconnect.

2	Unconditional Shutdown

	Stops all users and shuts down the database. If you have multiple servers,
	PROSHUT stops them all. To stop a specific server process, use the
	appropriate operating system command.

3	Emergency Shutdown (Kill All)

	Prompts you to confirm your choice. If you cancel the choice, you cancel the
	shutdown. If you confirm the choice, PROSHUT waits for five seconds before
	taking any action, then displays the following message:

	Emergency shutdown initiated...

	PROSHUT marks the database for abnormal shutdown and signals all
	processes to exit. After 10 more seconds, PROSHUT kills all remaining
	processes connected to the database, and deletes shared-memory segments and
	semaphores. The database is in a crashed state. The database engine performs
	normal crash recovery when you restart the database and backs out any active
	transactions. This option is available only if the database is on the same
	machine where you are logged in.

x	Exit
If you want to execute the shutdown command noninteractively and avoid the PROSHUT
menu, issue the PROSHUT command using either of the parameters described in Table 3–2.

Table 3–2:	PROSHUT Parameters

Parameter	Action
When using the shutdown command from a machine other than the host, in a TCP/IP network,
you must use the Host Name (-H) and Service Name (-S) parameters. The Host Name is the
machine where the database server is running. The Service Name is the name of the database
server or broker process, as defined in the /etc/services file on UNIX. For example, the
following command shuts down the sports database from a remote machine in a BSD UNIX
network:
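The command takes roughly this shape; the host and service names are placeholders for the actual broker host and the service entry in /etc/services:

```shell
# Shut down the sports database served on host toro under service sportssv
proshut sports -H toro -S sportssv
```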
promon db-name
When you start the monitor, the Database Monitor main menu appears:
	1.	User Control
	2.	Locking and Waiting Statistics
	3.	Block Access
	4.	Record Locking Table
	5.	Activity
	6.	Shared Resources
	7.	Database Status
	8.	Shut Down Database

	R&D.	Advanced options
	T.	2PC Transactions Control
	L.	Resolve 2PC Limbo Transactions
	C.	2PC Coordinator Information
	J.
	M.	Modify Defaults
	Q.	Quit
Choose option 8, Shut Down Database. The following figure shows an example of this
option's output:
	PID	Time of login		Userid	tty		Limbo?
	6358	Dec 14 15:10:52		sue	/dev/ttyp0	no
	7007	Dec 14 15:25:09		mary	/dev/ttyp5	no

	1	Disconnect a User
	2	Unconditional Shutdown
	3	Emergency Shutdown (Kill All)
	4	Exit

	Enter choice>
3.	Choose an option.
If you choose 1 (Disconnect a User), the system prompts you for a user number. Choose
2 (Unconditional Shutdown) to stop all users and shut down the database. If you have
multiple remote-user servers, this stops all the servers.
Part II
Protecting Your Data
Chapter 4, Backup Strategies
Chapter 5, Backing Up a Database
Chapter 6, Recovering a Database
Chapter 7, After-imaging
Chapter 8, Maintaining Security
Chapter 9, Auditing
Chapter 10, Replicating Data
Chapter 11, Failover Clusters
Chapter 12, Distributed Transaction Processing
4
Backup Strategies
Backup and recovery strategies work together to restore a database that is lost due to a system
failure. It is important to develop backup and recovery strategies that you can follow
consistently and effectively. This chapter lists the steps needed to develop effective backup
plans, as detailed in the following sections:
Backup Strategies
Backup options

OpenEdge            Online or offline     Full or incremental
Operating system    Offline only          Full only
Is the database active 24 hours a day, 7 days a week? Is it possible to shut down the
database for backing up?
If the database must run 24 hours a day, 7 days a week, an online backup is necessary. If
it does not, an offline backup can be used.
Does the entire database fit on one volume of backup media? If not, will someone be
present to change volumes during the backup?
If the database fits on one volume or someone is present to change volumes, a full backup
can be performed. If not, consider using incremental backups.
PROBKUP allows both online and incremental backups, in addition to offline and full
backups.
If you choose not to use PROBKUP, you can use an operating system backup utility, but you
cannot perform online or incremental backups. The exception to this rule is splitting a disk
mirror during a quiet point to perform an operating system backup on the broken mirror. See the
"Using database quiet points" section on page 5–9 for information on using quiet points in a
backup. Be sure that the backup utility you choose backs up the entire set of files. Backing up a
partial database produces an invalid result.
Note: Regardless of the backup method chosen, perform a complete database restore to test all
backups and validate that the database is correct.
Backup Strategies
Full backups
A full backup backs up all of the data of a database, including the BI files. You can perform a
full backup using either the PROBKUP utility or an operating system utility. Ideally, you should
perform a full backup of your database every day. However, depending on your recovery plan,
you might decide to do less frequent full backups and more frequent incremental backups, or
use after-imaging.
Incremental backups
An incremental backup backs up only the data that has changed since the last full or incremental
backup. Incremental backups might take less time and media to back up the database; the
amount of time you can save depends on the speed of your backup device. You must use
PROBKUP to perform an incremental backup.
In an OpenEdge database, the master block and every database block contains a backup counter.
The counter in the master block is incremented each time the database is backed up with an
online, offline, full, or incremental backup. When a database block is modified, PROBKUP
copies the backup counter in the master block to the backup counter in the modified database
block. When you perform an incremental backup, PROBKUP backs up every database block
where the counter is greater than or equal to the master block counter. If you specify the Overlap
(-io) parameter, PROBKUP backs up every database block where the counter is greater than or
equal to the master block counter, less the overlap value. For more information on the -io
parameter, see the "PROBKUP utility" section on page 23–13.
You must perform a full backup of a database before you can perform the first incremental
backup. You should also perform full backups regularly in conjunction with incremental
backups. If you do not perform full backups regularly, you will use increasing amounts of
backup media for incremental backups and increase recovery time restoring multiple
incremental backups in addition to the full backup.
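A weekly cycle built on these rules might be sketched as follows. The database and device names are illustrative:

```shell
# Sunday: full backup (required before the first incremental)
probkup sports2000 /backups/sports2000.full.bak

# Monday through Saturday: incremental backups, with an overlap factor of 1
# so each incremental also re-copies blocks from the previous incremental
probkup sports2000 /backups/sports2000.incr1.bak incremental -io 1
```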
Online backups
An online backup lets you back up the database while it is in use. You must use PROBKUP to
perform online backups. Perform an online backup if the database cannot be taken offline long
enough to perform an offline backup. You can perform both full and incremental online
backups.
When deciding whether to use online backups, consider the following:
If you have enabled after-imaging, when you perform an online backup of a database, the
database engine automatically switches over to the next AI file. Before you start an online
backup, you must make sure that the next AI file is empty. If the file is not empty,
PROBKUP aborts and notifies you that it cannot switch to the new file. For information
about after-imaging, see Chapter 7, After-imaging.
If you want to enable after-imaging while your database is online, you must perform a full
online backup. The resulting backup is the baseline for applying after-image extents. For
more information about enabling after-imaging, see the "Enabling after-imaging online"
section on page 7–8.
When you begin an online backup, database activity pauses until the backup header is
written. Make sure that your backup media is properly prepared before issuing the
PROBKUP command in order to minimize the duration of the pause. Until the database
engine writes the header information, you cannot update the database. If you use more than
one volume to back up the database, there is a similar delay each time you switch volumes.
You cannot use the PROBKUP parameters Scan (-scan) or Estimate (-estimate) for
online backups.
Offline backups
To perform offline backups, you must first shut down the database. If you perform an offline
backup with an operating system utility on a database while it is running, the backup is invalid.
You can use PROBKUP or an operating system utility to perform offline backups.
Backup Strategies
Disk files
Tape cartridges
Disk cartridges
When choosing backup media, consider the media's speed, accessibility, location, and capacity
for storage. Table 4–2 lists questions to ask yourself when choosing backup media.

Table 4–2:	Choosing backup media

To consider this . . .	Ask this . . .

Storage capacity
Accessibility		Can you use the media at the time you need to run the backup?
Location
Database integrity
Database size
Time
Unscheduled backups
Database integrity
To preserve database integrity:
Use the PROREST verify parameters Partial Verify (-vp) and Full Verify (-vf) to verify
that a backup is valid
Database size
If the database is very large, it might be impractical to fully back up the database daily. You
might choose to back up the database file every other day, or even once a week. Instead of
frequent full backups, consider performing daily incremental backups or, if you have
after-imaging enabled, only backing up the after-image files.
You can perform daily incremental backups. Incremental backups only back up the blocks that
have changed since the previous backup. You can specify an overlap factor to build redundancy
into each backup and help protect the database. However, you should also perform a full backup
at least once a week to limit the amount of backup media used for incremental backups and to
ease data recovery.
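The block-selection idea behind incremental backups with an overlap factor can be sketched as a small model. This is an illustrative sketch, not PROBKUP's implementation; the function name and data layout are invented for the example.

```python
# Illustrative model of incremental backup selection with an overlap factor;
# not PROBKUP's implementation. Each block records the backup interval in
# which it last changed; an incremental taken at interval `current` with
# overlap N re-copies every block changed after interval current - 1 - N.

def incremental_blocks(last_changed, current, overlap=0):
    """Return the block numbers this incremental backup must include.

    last_changed -- dict: block number -> interval in which it last changed
    current      -- interval number of the backup being taken now
    overlap      -- how many previous incrementals to re-cover for redundancy
    """
    cutoff = current - 1 - overlap
    return sorted(b for b, interval in last_changed.items() if interval > cutoff)

# Blocks 2 and 4 changed since the last backup; block 3 changed one interval
# earlier; blocks 1 and 5 are older still.
changed = {1: 2, 2: 5, 3: 4, 4: 5, 5: 3}

assert incremental_blocks(changed, 5) == [2, 4]                # plain incremental
assert incremental_blocks(changed, 5, overlap=1) == [2, 3, 4]  # overlap factor 1
```

The overlap factor trades extra backup space for the ability to lose an incremental volume and still restore, which is the redundancy described above.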
If you enable after-imaging, back up the after-image files every time you perform a backup.
Immediately after performing a full or incremental backup, start a new after-image file. When
you back up AI files, you back up whole transactions, but incremental backups back up just the
blocks that have changed. As a result, AI file backups can use more space than incremental
backups.
If you make many database updates and you are on a weekly full backup schedule, it is possible
that the after-image files will grow very large during the week. If so, back up and empty the AI
files every day. This daily backup approach keeps the AI file relatively small and ensures that
the AI file is archived on a regular schedule. You can also use the After Image File Management
Utility to archive and empty your AI files. See the AI File Management utility section on
page 7-16 for more information.
Note: PROBKUP does not back up AI files. You must use an operating system backup utility.
Time
When creating a backup schedule, consider both the time required to perform backups and the
time required to recover a database from the backups:
Performing daily full backups of your system might require too much time. If you make
few updates to the database each day, a daily full backup might be unnecessary. Thus, you
might want to perform daily incremental backups and a weekly full backup. If you have
after-imaging enabled, remember to back up the after-image files for both incremental and
full backups.
If you perform full backups less than once a week, you must maintain multiple incremental
backups, which makes recovery more complicated and prone to operator error. Restoring
a backup becomes a multi-step process of first restoring the last full backup, then restoring
the subsequent incremental backups.
If you enable after-imaging, you can perform daily backups of the after-image file instead
of performing incremental backups. However, recovering from the AI file backups
requires restoring the AI files then rolling forward through multiple after-image files. This
is more time intensive than restoring a full backup with PROREST. Because backing up
AI files backs up whole transactions instead of just the blocks that have changed since the
most recent backup, restoring a database from incremental backups is quicker than
restoring AI files and rolling forward the AI files.
Unscheduled backups
In addition to scheduled backups, you might have to perform additional backups for the
following reasons:
To run a large job with the No Crash Protection (-i) startup parameter. Before running the
job, back up the database and after-image files.
5
Backing Up a Database
Backing up your database is an important part of database maintenance. Regular backups
provide a starting point for recovery of a database lost to hardware or software failure. This
chapter contains the following sections:
Using PROBKUP
Verifying a backup
Restoring a database
Using PROBKUP
Using the OpenEdge Backup utility (PROBKUP) you can perform an online full backup, an
online incremental backup, an offline full backup, or an offline incremental backup. Which you
use is determined by your backup plan. You can also enable after-imaging and AI File
Management as part of a full online backup. The syntax below details the parameters to use with
PROBKUP:
Syntax

probkup [ online ] db-name [ incremental ] device-name [ parameters ]
For more information on PROBKUP, see the PROBKUP utility section on page 23-13. For
information on using PROBKUP to enable after-imaging and AI File Management, see
Chapter 7, After-imaging.
parameters
Verify that the database is not in use by entering the following command:
2.
In this command, devel identifies the name of the database you are backing up; online
specifies that the backup is an online backup; /dev/rrm/0m specifies that the output
destination is a tape drive; -vs 35 indicates that the volume size in database blocks is 35;
-bf 20 specifies that the blocking factor is 20; and -verbose displays information at
10-second intervals during the backup. If you do not specify the volume size, PROBKUP
fills the entire tape before prompting you for a new tape.
As the full online backup of devel.db begins, the following report appears:
The number of backup blocks is the number of -bf units written to the tape. Backup blocks
contain data, primary recovery, and error-correction blocks.
This example backs up a very small database. Using the -red parameter on a larger
database increases the amount of time and backup media required for the backup. Also,
PROBKUP displays the number of blocks and the amount of backup media required for an
uncompressed database because you cannot specify the -scan parameter for an online
backup.
3.
If you enable after-imaging, back up the AI files to a separate tape or disk using a UNIX
backup utility.
Note: If you enable after-imaging, OpenEdge automatically switches AI extents before
beginning an online backup.
Testing backups
Test backups regularly to ensure they are valid and available when you need them. Be sure to
test the backup before you need it to restore a database. Ideally, you should test the backup
immediately after you perform it. Then, if there are problems with the backup, you can make
another backup copy. If you wait until you need the backup to test it, the database will no longer
be available.
Table 5-1 lists the PROREST utility parameters you can use to test backups performed with
PROBKUP.
Table 5-1:
Parameter
Description
-vp
Specifies that the restore utility read the backup volumes and
compute and compare the backup block cyclic redundancy
checks (CRCs) with those in the block headers. To recover any
data from a bad block, you must have specified a redundancy
factor when you performed the database backup. See the CRC
codes and redundancy in backup recovery section on page 5-19
for more information about error correction blocks and data
recovery.
You can use the Partial Verify parameter with both online and
offline backups.
-vf
The Partial Verify and Full Verify parameters do not restore or alter the database.
For more information, see the PROREST utility section on page 23-27.
If you use an operating system utility to back up the database, with each backup, verify that you
have backed up the entire database. The PROBKUP utility automatically backs up the
appropriate files; with an operating system utility, you must make sure the files are included in
the backup.
Archiving backups
Properly archiving backups helps you ensure database integrity. Follow these guidelines when
archiving backups:
Clearly label each backup volume. Information on the label should include:
The volume number and total number of volumes of the media (volume 1 of 4, for
example)
Keep a minimum of 10 generations of full backups. Keep daily backups for at least two
weeks, weekly backups for at least two months, and monthly backups for a year. Buying
extra tapes is much less expensive than manually reconstructing lost databases.
Keep backups in an area other than where the computer is located, preferably in another
building. In the event of building damage, you are less likely to lose both the online and
backup versions of the database.
The BUSY qualifier returns a code indicating whether the database is in use. You can use
the codes returned by the BUSY qualifier in scripts, files, or procedures. For more
information, see the PROUTIL BUSY qualifier section on page 20-18.
2.
3.
parameters
When the backup successfully completes, the report displays the total number of bytes on the
backup media and how long it took to complete the backup.
Note:
If a system failure occurs while you are performing the full backup, perform the
backup again.
parameters
By default, PROBKUP performs a full backup. To perform an incremental backup, specify the
incremental qualifier:
parameters
Notes: Databases started with the No Crash Protection (-i) parameter cannot be backed up
online.
For databases with after-imaging enabled, online backup will automatically perform
an AI extent switch. For more information on after-imaging, see Chapter 7,
After-imaging.
dbname
Specifies the name of the database for which you are enabling a database quiet
processing point.
During a database quiet processing point all file write activity to the database is stopped.
Any processes that attempt to start a transaction while the quiet point is enabled must wait
until you disable the database quiet processing point.
2.
Use an operating system utility to perform the OS mirror fracture or split operation.
Upon successful completion of this command, the fractured disk contains a duplicate of
the active online database.
3.
dbname
Specifies the name of the database for which you are disabling the database quiet
processing point.
For more information and the complete syntax for PROQUIET, see Chapter 17, Startup
and Shutdown Commands.
4.
Update the structure description (.st) file of the fractured version of the database. Replace
the logical location reference (which still references the active database) with the physical
location reference of the fractured mirror.
5.
Use the PROSTRCT utility with the REPAIR qualifier to update the shared memory and
semaphore identification information to reflect the offline status of the fractured version
of the database, and to update the file list information for a database with the information
in the updated .st file:
description-file
dbname
Specifies the name of the database for which you are repairing the extent list and
master block.
description-file
Use the PROBKUP utility with the -norecover startup parameter to back up the fractured
version of the database:
dbname
Make sure that the database is not used during the backup. Otherwise, the backup will be
invalid. If you have an Enterprise database license, you can do this by using the
PROQUIET command to create a database quiet point. Regardless of your database
license, you can shut down the server and make any single-user session inactive.
After you perform and verify the backup, mark the database as backed up.
2.
The BUSY qualifier returns a code indicating whether the database is in use. You can use
the codes returned by the BUSY qualifier in scripts, files, or procedures. For detailed
information, see the PROUTIL BUSY qualifier section on page 20-18.
3.
Make a note of the last entry in the log file. You will use this information later to verify
that the database is not used during the backup.
4.
5.
6.
2.
Verify that the database is not in use by entering the following command:
3.
proshut devel
4.
Run PROBKUP -estimate to determine how much media is necessary for the backup:
5.
These are the parameters for the commands:
devel

Identifies the name of the database you are backing up.

-vs 35

Indicates that the volume size in database blocks is 35. If you do not specify the
volume size, PROBKUP fills the entire tape before prompting you for a new tape.

-bf 20

Specifies that the blocking factor is 20.

-io 1

Specifies that you can lose one incremental backup and still be able to restore the
database. Specifies that all blocks that have changed since the backup before the last
backup should be archived.
-com
Indicates that the data should be compressed before it is written to the tape drive. If
you specify the -com parameter and do not use -scan, PROBKUP displays the
number of blocks and the amount of backup media required for an uncompressed
database.
-red 5
Specifies that PROBKUP creates one error-correction block for every five blocks
that are backed up.
-scan
Allows the backup utility to scan the database before backing it up to determine the
number of blocks to be backed up.
The number of backup blocks is the number of -bf units written to the tape. Backup blocks
contain data, BI, and error-correction blocks.
This example backs up a very small database. Using the -red parameter on a larger
database increases the amount of time and backup media required for the backup.
As the incremental online backup of devel.db executes, the following report appears:
6.
If you have after-imaging enabled, back up the AI files to a separate tape or disk using a
backup utility.
2.
Verify that the database is not in use by entering the following command:
3.
proshut devel
4.
Run PROBKUP -estimate to determine how much media is necessary for the backup,
since this is the first time you are making a backup of the database:
The following message tells you about the state of your system, and how much media is
necessary for backup:
5.
Note: You cannot use the -scan parameter for online backups.
These are the parameters for the commands:
devel

Identifies the name of the database you are backing up.

-com

Indicates that the data should be compressed before it is written to the disk drive. If
you specify the -com parameter and do not use -scan, PROBKUP displays the
number of blocks and the amount of backup media required for an uncompressed
database.
-red 5
Creates one error-correction block for every five blocks that are backed up.
-scan
Allows the backup utility to scan the database before backing it up to determine the
number of blocks to be backed up.
As the full offline backup of devel.db executes, the following report appears:
The number of backup blocks is the number of -bf units written to the tape. Backup blocks
contain data, primary recovery (BI), and error-correction blocks.
This example backs up a very small database. Using the -red parameter on a larger
database increases the amount of time and backup media required for the backup.
6.
If you have after-imaging enabled, back up the AI files onto a separate disk using a
separate operating system backup utility.
Verifying a backup
Immediately after backing up the database, verify that the backup does not contain any
corrupted blocks. Use the Restore (PROREST) utility to verify the integrity of a full or
incremental backup of a database as follows:
Run PROREST with the Partial Verify (-vp) parameter. With this parameter, PROREST
checks the backup for bad blocks and reports whether any exist.
Run PROREST with the Full Verify (-vf) parameter. With this parameter, PROREST
compares the backup to the database block-for-block.
These parameters do not actually restore the database. They only verify the status of the backup,
notify you if there are any bad blocks, and report whether the blocks are recoverable. You must
run the restore utility again (without the partial or full verify parameters) to restore the database.
When you use the -vp parameter, PROREST scans the backup and recalculates the CRC code
for each block. It then compares the newly calculated CRC code with the CRC code stored in
the block header. If the codes do not match, PROREST marks the block as bad and displays the
following message:
If the backup contains error-correction blocks and a redundancy set contains only one bad block,
PROREST uses the error-correction block (and the other blocks in the redundancy set) to
re-create the bad block. The error-correction block is the EXCLUSIVE OR of the backup blocks
in the redundancy set. When PROREST recovers the block, the following message appears:
If the redundancy set contains more than one bad block or if the backup does not include
error-correction blocks, PROREST cannot recover the bad block and displays the following
message:
PROREST also cannot recover a corrupted block if the error-correction block itself has a CRC
check failure. In this case, the following message appears:
If PROREST encounters 10 unrecoverable errors during the verify pass or during the database
restore, you can terminate the verify operation, as shown:
CRC codes to identify bad blocks. A CRC code is automatically calculated for each
database backup block whether or not you specify a redundancy factor.
Error-correction blocks to recover bad blocks. Error-correction blocks are included in the
backup only if you explicitly request them with the -red parameter of the backup utility.
CRC codes
When PROBKUP writes a block of data to the backup media, it calculates a CRC code based
on the contents of the block and stores it with the block. When restoring, PROREST
re-examines the contents of the block and verifies that they are consistent with the
accompanying CRC code. If the block contents are not consistent with the CRC code, the
backup block is corrupted.
If the backup includes error-correction blocks, PROREST automatically uses the information in
those blocks to recover the corrupted block. If the backup does not include error-correction
blocks, PROREST cannot recover the corrupted block when you restore the database.
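Conceptually, the per-block CRC check works like the following sketch. OpenEdge's actual CRC algorithm and block layout are internal; Python's `zlib.crc32` stands in here purely for illustration, and the helper names are invented.

```python
# Sketch of per-block CRC verification, analogous to what PROREST's -vp pass
# does conceptually. zlib.crc32 is a stand-in for OpenEdge's internal CRC.
import zlib

def write_block(data):
    """Back up a block: store its contents with a CRC computed from them."""
    return data, zlib.crc32(data)

def verify_block(data, stored_crc):
    """Recompute the CRC and compare it with the one stored in the header."""
    return zlib.crc32(data) == stored_crc

block, crc = write_block(b"customer record 1")
assert verify_block(block, crc)           # an intact block passes the check

corrupted = b"customer recXrd 1"          # one byte damaged on the media
assert not verify_block(corrupted, crc)   # the mismatch marks the block bad
```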
Error-correction blocks
Error-correction blocks contain information about the preceding set of backup blocks and
allow PROREST to recover corrupted blocks in a backup. The error-correction block and the
blocks it is based on are called a redundancy set. You can provide error-correction blocks in the
backup by specifying the -red parameter in the backup command.
The -red parameter specifies a redundancy factor. The redundancy factor determines how many
backup blocks are in each redundancy set. For example, if you specify a redundancy factor of
2, PROBKUP creates an error-correction block for every two backup blocks. Therefore, every
redundancy set contains two backup blocks and an error-correction block.
PROREST can recover a bad backup block if it is the only corrupted block in the redundancy
set. If a redundancy set contains more than one bad backup block or a bad backup block and a
bad error-correction block, PROREST cannot recover any of the bad blocks in the redundancy
set.
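Because the error-correction block is the exclusive OR of the backup blocks in its redundancy set, a single lost block can be rebuilt by XORing the survivors. A minimal sketch with illustrative data (not OpenEdge's block format):

```python
# Single-block recovery in a redundancy set, assuming (as described above)
# that the error-correction block is the EXCLUSIVE OR of the backup blocks
# in the set. Redundancy factor 2: two backup blocks plus one parity block.

def xor_blocks(a, b):
    """Bytewise exclusive OR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

block1 = b"\x01\x02\x03\x04"
block2 = b"\x10\x20\x30\x40"
parity = xor_blocks(block1, block2)        # the error-correction block

# block2 is unreadable: XOR of the surviving block and the parity block
# reproduces it exactly.
assert xor_blocks(block1, parity) == block2

# If both data blocks were bad, the single parity equation could not
# reconstruct either of them, matching PROREST's behavior.
```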
If you specify a very low redundancy factor (for example, 2), the chance of having two or more
bad database blocks in a redundancy set is low. If you specify a higher redundancy factor, the
chances are higher. However, lower redundancy values also produce larger backups that require
more time and media. If the backup media is highly reliable, you might use a high redundancy
factor; if the media is less reliable, you might want to specify a lower redundancy factor.
The size of each backup block (and therefore of each error-correction block) is determined by
the -bf parameter. The default blocking factor is 34. For example, if the database block is 1,024
bytes and the blocking factor is 40, each backup block is 40K; that is, the size of 40 database
blocks.
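The sizing arithmetic above can be checked directly (the helper name is hypothetical):

```python
# Arithmetic from the text: backup block size = database block size x the
# blocking factor given by -bf (default 34). Helper name is illustrative.

def backup_block_size(db_block_bytes, blocking_factor=34):
    """Size in bytes of one backup block (and one error-correction block)."""
    return db_block_bytes * blocking_factor

# The example above: 1,024-byte database blocks with -bf 40 give 40K blocks.
assert backup_block_size(1024, 40) == 40 * 1024
# With the default blocking factor of 34:
assert backup_block_size(1024) == 34 * 1024
```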
Restoring a database
In the event of database loss or corruption, you can restore the database from a backup. You
must restore a database with the same version of OpenEdge that you used to create the backup.
This section describes:
{ -list | -vp | -vf }
dbname
Specifies the name of the database where you want to restore the backups.
device-name
Identifies the directory pathname of the input device or standard file from which you are
restoring the data.
-list
Provides a description of all application data storage areas contained within a database
backup. Use the information to create a new structure description file and database so you
can restore the backup. For additional information, see the Obtaining storage area
descriptions using PROREST section on page 5-22.
-vp
Specifies that the restore utility reads the backup volumes and computes and compares the
backup block cyclical redundancy checks (CRCs) with those in the block headers.
To recover any data from a bad block, you must have specified a redundancy factor when
you performed the database backup. See the Error-correction blocks section on
page 5-19 for more information about error-correction blocks and data recovery.
-vf
Specifies that the restore utility compares the backup to the database block-for-block. Do
not compare the backup to a database that is in use.
Note: When you specify the -vp or -vf parameter, PROREST does not actually restore the
database. You must restore the database in a separate step.
The first time you start the database after restoring an online backup, normal crash recovery runs
and any transactions that were incomplete at the time of the backup are discarded.
When you restore a full database backup, consider restoring the backup to a new database. This
allows you access to the corrupted database, if necessary. You must restore an incremental
database backup to a restored database.
If PROREST encounters corrupted backup blocks that it is unable to recover, you lose the data
in the corrupted blocks. The amount of lost data is approximately equal to the number of bad
blocks multiplied by the blocking factor.
As you begin the restore procedure for a database, a report appears that indicates the date of the
backup and the number of blocks required to restore the database.
If you restore over an existing database, verify the tapes before doing the restore. If the
existing database is the only copy, back up the existing database before doing the restore.
Restore a backup with the same OpenEdge version that you used to perform the backup.
Create the void database before you restore the backup, or else use the existing structure,
overwriting it.
You must restore a database in the same order that you backed it up. You must first restore
the full backup, followed by the first incremental backup, followed by the second
incremental backup, etc. If you try to restore a database out of sequence, you get an error
message and the restore operation fails.
If you lose the second incremental and you used an overlap factor of 1, the third
incremental correctly restores the data lost in the second incremental.
After you restore a full backup, do not use the database if you want to restore successive
incremental backups. If you make any database changes before completely restoring all
backups, any successive, incremental backups (that were not restored) are rejected unless
you restart the restore procedure beginning with the full backup.
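The ordering rule above can be expressed as a small validation sketch. The (kind, sequence) tuples are a hypothetical stand-in for the header information a restore utility would actually examine; the point is the full-backup-first, sequential-incrementals requirement.

```python
# Sketch of the restore-order rule: a full backup first, then incrementals
# in the exact sequence they were taken. Illustrative model only.

def check_restore_order(backups):
    """Return True if the proposed restore sequence is valid."""
    if not backups or backups[0] != ("full", 0):
        return False                      # must start from the full backup
    expected = 1
    for kind, seq in backups[1:]:
        if kind != "incremental" or seq != expected:
            return False                  # out of sequence: the restore fails
        expected += 1
    return True

assert check_restore_order([("full", 0), ("incremental", 1), ("incremental", 2)])
assert not check_restore_order([("full", 0), ("incremental", 2)])  # skipped one
assert not check_restore_order([("incremental", 1)])               # no full first
```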
If a system failure occurs while you are restoring the database, restart the restore operation
beginning with the backup volume that you were restoring at the time of the system failure.
If a target database exists, it must have the same block size and storage area configuration
as the source database. The PROREST utility attempts to expand storage areas to allow a
complete restore, but if the storage areas cannot expand, the restore fails.
The following example shows the output from the prorest -list command:
Use this information to create a new structure description file and database so you can restore
the backup.
The newdev.db database is an empty database. The 9-track tape drive (/dev/rrm/0m)
specifies the device from which the full backup is being restored.
As the restore begins, the following report appears:
This command restores the database devel.db from a tape to newdev.db. The report
indicates that volume 1 is being processed.
2.
Enter the following command to run an incremental restore of the database from a tape
once the full restore is done:
2.
6
Recovering a Database
This chapter explains the different ways to recover your OpenEdge database and transactions if
your system or disks fail, as described in the following sections:
Crash recovery: Uses primary recovery (BI) data to recover from system failures
Roll-forward recovery: Uses backups and after-image data to recover from media
failures
Two-phase commit: Ensures that distributed transactions occur consistently across all
databases involved
Depending on your site requirements, you might choose not to implement all three of these
recovery mechanisms. Figure 61 shows the order of precedence of these mechanisms. Crash
recovery requires use of a recovery (BI) log and occurs without any interaction. Roll-forward
recovery requires use of an after-image (AI) log. Two-phase commit requires use of a
transaction log (TL). If you use two-phase commit, be sure to also use after-imaging.
Figure 6-1: Recovery mechanisms, in order of precedence: two-phase commit (optional);
roll-forward recovery (optional; also requires backups and after-imaging); and crash
recovery (automatically performed).
Each mechanism relies on notes that are written to a file to record database changes. A note is
a record of the smallest unit of change in a database. For example, a record of one change made
to one block in the database. The database engine automatically records database changes as
notes in the primary recovery (BI) log. If after-imaging is enabled, it also records notes to the
after-image (AI) log. If two-phase commit is enabled, it also records transactions and notes to
the transaction log (TL).
Crash recovery
Crash recovery occurs automatically. With this feature, the database engine uses information
from the primary recovery (BI) log to recover from system failures.
The BI files are a vital part of the database. You should treat the files as an integral part of the
database unit. When you back up and restore the database, back up and restore the DB and BI
files together. Never manually delete the BI files.
When database records are modified, the changes occur first in memory. When a transaction is
committed, the change is recorded to the BI file. Over time, the database engine makes the
change to the database file on disk. If the system fails, the information stored in the buffer pool
is lost. The database engine performs crash recovery using the information logged to the BI file
to re-create lost transactions and undo transactions that were not committed.
Before updating the database, the database engine makes a copy of the current information and
writes it to the BI file. This activity begins at the end of an update operation. If the system fails
during the transaction, the engine uses the information in the BI file to restore the database to
its pretransaction state. The engine also uses the information in the BI files during normal
processing to undo transactions.
For example, suppose you execute the following ABL (Advanced Business Language)
procedure:
You update customers 1 and 2, and while you are updating customer 3, the system fails. When
you restart the database, messages appear in the database .lg file similar to the following:
The messages indicate the necessary phases of crash recovery performed by the database engine
to bring the database to the consistent state that existed prior to the system failure. Since the
engine performs crash recovery every time you open the database, not all of the recovery phases
are logged in the database .lg file. For example, the engine performs and logs the Physical Redo
phase unconditionally, but the Physical Undo and Logical Undo phases are only performed and
logged when outstanding transactions are found.
When you rerun the same procedure, customers 1 and 2 remain updated, but the database engine
has used the BI file to restore customer 3 to the state it was in before you began the update.
Crash recovery protects you from system failures, but it does not protect you from loss of media.
In the event of media loss, you must restore from a backup, and either manually re-enter the lost
transactions or use the roll-forward recovery mechanism to re-create the transaction.
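The undo side of crash recovery can be sketched as replaying before-image notes for transactions that never committed. Everything here (structures and names) is an illustrative toy, not OpenEdge's BI format.

```python
# Toy sketch of the undo phase of crash recovery: before-image (BI) notes
# let the engine roll back transactions that had not committed at the time
# of the failure. Illustrative structures only.

def crash_recover(db, bi_notes, committed):
    """Restore pre-transaction values for every uncommitted transaction.

    db        -- dict: record id -> value on disk after the crash
    bi_notes  -- list of (txn id, record id, before-image value), oldest first
    committed -- set of transaction ids that committed before the crash
    """
    for txn, rec, before in reversed(bi_notes):   # undo newest-first
        if txn not in committed:
            db[rec] = before
    return db

# Customers 1 and 2 committed; the system failed while updating customer 3.
db = {1: "updated", 2: "updated", 3: "half-written"}
notes = [("t1", 1, "old"), ("t2", 2, "old"), ("t3", 3, "old")]
assert crash_recover(db, notes, committed={"t1", "t2"}) == {
    1: "updated", 2: "updated", 3: "old"}
```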
Roll-forward recovery
Roll-forward recovery, used together with after-imaging and backup, lets you recover from
media failures. When a database disk fails, you can restore the most recent backup, then use
roll-forward recovery to restore the database to the condition it was in before you lost the disk.
With roll-forward recovery, the database engine uses data in the AI files to automatically
reprocess all the transactions that have been executed since the last backup was made.
To use roll-forward recovery, you must:
Perform regularly scheduled backups of the database. Regular backups are a fundamental
part of recovery.
Enable after-imaging immediately after you complete the backup. See Chapter 7,
After-imaging, for information about enabling after-imaging.
Store the AI files on different disks than those containing the database and BI files.
When you enable after-imaging, the database engine writes database changes to the AI
files. If you store the AI files on the same disks as the database or BI files and a disk is
corrupted, you cannot use the AI files to recover that database.
Archive the AI files to tape or other durable media as they become full.
This example shows how the database engine uses the AI files to restore the database. Suppose
you execute the following ABL procedure:
You update customers 1 and 2, and while you are updating customer 3, the disk where the
database file is stored is damaged. You cannot use the BI file to restore the transactions because
the original database is no longer valid.
However, because you enabled after-imaging, you can use roll-forward recovery to recover the
database. If you do not enable after-imaging, you lose all the updates since the last database
backup.
Before updating the database, the database engine makes a copy of the current information and
writes it to the BI file and the AI file.
After updating customers 1 and 2, the database disk fails while updating customer 3. The AI
files have a copy of all transactions completed since the last backup. Restore the last backup of
the database and then roll forward the AI files to produce a restored database that contains all
completed transactions.
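The restore-then-roll-forward sequence can be sketched as a toy replay of after-image notes over a restored backup (illustrative structures only, not OpenEdge's AI format):

```python
# Toy sketch of roll-forward recovery: restore the last backup, then reapply
# the after-image (AI) notes recorded since that backup, oldest first.

def roll_forward(backup, ai_notes):
    """Reapply committed changes from the AI log to a restored backup."""
    db = dict(backup)                      # start from the restored backup
    for record_id, new_value in ai_notes:  # replay notes oldest-first
        db[record_id] = new_value
    return db

backup = {1: "old", 2: "old", 3: "old"}        # state at backup time
ai_notes = [(1, "updated"), (2, "updated")]    # committed since the backup
assert roll_forward(backup, ai_notes) == {1: "updated", 2: "updated", 3: "old"}
```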
Two-phase commit
Two-phase commit ensures that distributed transactions (that is, transactions involving multiple
databases) occur consistently across all databases. Two-phase commit is not necessary for
transactions involving a single database.
For detailed information on two-phase commit, see Chapter 12, Distributed Transaction
Processing.
Note:
If you use two-phase commit, you should also use after-imaging to ensure database
integrity and avoid backup synchronization problems. Although two-phase commit
ensures that two databases remain synchronized, it does not protect you from a lost
database or BI file. If you lose an entire file or disk, you must use the AI file to roll
forward to the point of the crash.
Figure 6-2: Database files distributed across disks on one machine: the BI file on disk
volume 1; the DB, LG, and TL files on disk volume 2; and the AI file on disk volume 3.
How long can the application be offline while you perform scheduled maintenance, such
as backups?
If the system or database becomes unavailable, how much time can you spend recovering?
If you use transactions that affect more than one database, can you allow transactions to
occur inconsistently in those databases?
Use the tables in the following sections to develop a recovery plan. These tables provide a range
of answers to these questions and backup and recovery suggestions for each.
Recovery guidelines
These are the guidelines to follow to ensure a safe recovery:
Always:
Back up the AI files on different media from the database and BI files.
Label, test, and keep the backups in a separate and secure location.
Do Not:
Erase a DB, BI, or AI file unless you are certain you no longer want the database, or unless
you have a secure, complete backup.
Copy a database file without also copying the BI files associated with that database.
Copy a database with an operating system utility while the database is in use without
running PROQUIET.
Caution: If you run your OpenEdge database with the Unreliable Buffered I/O (-r) parameter
and the system fails because of a system crash or power failure, you cannot recover
the database. If you run with the No Crash Protection (-i) parameter and the database
fails for any reason, you cannot recover the database.
Availability answers for this site: 1 week of transactions; 24 hours; no.
Given these requirements, the database administrators perform a full offline backup every
Monday at 5 PM. Incrementals are not required because the site can afford to lose one week of
transactions, and full backups are performed weekly. The backup is tested Tuesday at 5 PM
using disk space reserved for that purpose.
Availability answers for this site: 1 day of transactions; 8 hours per week; 8 hours; no.
Given these requirements, the database administrators perform a full offline backup every
Saturday night. Because they can lose only one day of transactions, they supplement weekly
backups with daily online incremental backups. Because they can only allow time to restore a
single incremental, they perform incremental backups with an overlap factor of six. This means
that each incremental backup saves changes that were saved in the last six incremental backups.
The full backup is tested Saturday night, and after the incremental backups finish, they are tested
using disk space reserved for that purpose.
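The effect of the overlap factor can be sketched as follows (a hypothetical block-change model, not PROBKUP's actual on-media behavior): because each incremental re-saves the changes covered by the previous six incrementals, the latest incremental alone brings the full backup up to date.

```python
# Toy sketch of incremental backups with an overlap factor.

def incremental(changed_by_day, day, overlap):
    """Blocks saved by the incremental taken on `day`: everything
    changed in the last `overlap + 1` days, i.e. this incremental also
    re-saves the changes covered by the previous `overlap` incrementals."""
    start = max(0, day - overlap)
    blocks = set()
    for d in range(start, day + 1):
        blocks |= changed_by_day[d]
    return blocks

# Hypothetical blocks changed each day since the full backup (day 0).
changed_by_day = [set(), {1, 2}, {3}, {2, 4}, {5}, {6}, {7}]

# With an overlap factor of 6, day 6's incremental alone contains every
# change since the full backup, so only one incremental must be restored:
latest = incremental(changed_by_day, 6, overlap=6)
assert latest == set().union(*changed_by_day)
```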
Availability answers for this site: a few minutes of transactions (recovery technique: enable after-imaging); never; 24 hours; yes.
Given these requirements, the database administrators perform a full online backup every
Saturday night. Because they can afford to lose only a few minutes of transactions, the site also
enables after-imaging. They switch AI extents every day at 7 PM and archive the resulting full
after-image extent. (Because they switch AI extents at scheduled intervals rather than waiting
for an extent to fill, they must be sure to monitor AI extent usage in case an extent fills before
the scheduled switch.) Because applications at this site perform distributed transactions that
must be consistent across databases, this site implements two-phase commit. The full backup is
tested Saturday night. After they are archived, the AI files are applied to the tested backup,
because the site can afford to lose so few transactions.
Availability answers for this site: none (recovery technique: enable after-imaging); never; 4 hours; no.
Given these high availability requirements, the database administrators keep a duplicate
database on warm standby, and they follow these steps:
1.
2. On the production database, enable after-imaging using AI files with fixed-length extents.
3. On the production database, whenever a fixed-length AI extent becomes full, copy it to the standby system and roll it forward on the standby database.
4. After bringing the standby database up to date, mark the full after-image extent on the production database as empty to make it available for reuse.
In addition, backups and AI files are constantly tested and verified on the standby database. The
standby database is put through crash recovery, verified, and if possible, backed up before
restoring online backups of the production database to the standby database.
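The warm-standby loop in steps 3 and 4 can be sketched as follows (hypothetical in-memory stand-ins for the databases and AI notes; the real procedure copies extent files and runs RFUTIL ROLL FORWARD):

```python
# Toy model of the warm-standby loop: each full AI extent from
# production is rolled forward on the standby, then marked empty.

def apply_full_extents(production_extents, standby):
    for ext in production_extents:
        if ext["status"] == "FULL":
            # Copy to the standby system and roll it forward there.
            for key, value in ext["notes"]:
                standby[key] = value
            # Mark the production extent empty for reuse (step 4).
            ext["status"], ext["notes"] = "EMPTY", []

standby = {"cust1": "v0"}
extents = [
    {"status": "FULL", "notes": [("cust1", "v1"), ("cust2", "v1")]},
    {"status": "BUSY", "notes": [("cust3", "v1")]},   # not yet rolled
]
apply_full_extents(extents, standby)
assert standby == {"cust1": "v1", "cust2": "v1"}   # standby caught up
assert extents[0]["status"] == "EMPTY"             # available for reuse
```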
Figure 6-3 (timeline diagram), Recovery scenario, day 1: Friday PM, start implementing after-imaging; Monday AM; Monday PM, perform a daily full backup.
Figure 6-4 (timeline diagram), Recovery scenario, day 2: Tuesday AM; Tuesday noon, system failure; Tuesday PM, perform a daily full backup.
Figure 6-5 (timeline diagram), Recovery scenario, day 3: Wednesday AM; Wednesday noon, database disk lost; Wednesday PM, perform a daily full backup.
Figure 6-6 (timeline diagram), Recovery scenario, day 4: Thursday AM, disable after-imaging; Thursday PM, perform a full backup and start a large batch job to run overnight.
Figure 6-7 (timeline diagram), Recovery scenario, day 5: Friday AM, perform a full backup; Friday noon, AI file is on a full disk, or DB or BI file is on a full disk; disable after-imaging; Friday PM, perform a daily full backup.
Function: Disables after-imaging. Before you run a batch job, you might want to disable after-imaging to improve performance.
Use of this qualifier is limited to RFUTIL roll-forward operations that fail because of power
outages or system failures. The ROLL FORWARD RETRY qualifier restarts the roll-forward
operation on the after-image extent that was in the process of rolling forward. The retry
operation finds the transaction in process at the time of failure and resumes rolling forward.
BUSY
HOLDER
TRUNCATE BI
If you run the database with the -r parameter and your system fails because of a system
crash or power failure, you cannot recover the database. If you run the database with
the -i parameter and it fails for any reason, you cannot recover the database.
1. Back up the current AI file and any full AI files not already archived before rolling forward. This step is important to protect yourself in case you lose the AI files if anything goes wrong, such as inadvertently overwriting the AI file. Be sure to back up the AI files on media different from the media where the database files are backed up. The AI files have the information you require to bring the database backup files up to date.
2. Create the structure for the database. You will restore the database to this structure.
3. Restore the most recent full database backup. If you are using incremental backups, you must restore the most recent full backup and then apply all the incremental backups.
4. If you use after-imaging, roll forward the AI files starting after the last backup. Use the RFUTIL ROLL FORWARD utility. See Chapter 7, After-imaging, for more information.
5. Perform a full backup of the restored, recovered database. Follow the standard backup procedure described in Chapter 5, Backing Up a Database.
6.
7.
8. If two-phase commit is enabled for the database, end two-phase commit using the PROUTIL 2PHASE END utility. You must do so before you can perform the next step.

Note: If you end two-phase commit and a coordinator recorded any distributed transaction commits in this database, the user cannot resolve any limbo transactions where this database was the coordinator.
2.
3. If you have a database, re-create the lost extent using PROSTRCT ADD.
4. Back up the database using the PROBKUP utility or an operating system backup utility.
5. If you used an operating system backup utility to back up the database, mark the database as backed up using the RFUTIL MARK BACKEDUP utility.
6. Enable after-imaging.
7. Archive the current AI file to ensure that a second copy is available in case the original extents are damaged (for example, if you inadvertently overwrite the current busy AI file when restoring archived AI files in Step 3). The AI files have the information you need to bring the database backup files up to date. Be sure to archive the AI files on media different from the database files backup.
2. Restore the backup you made immediately before the lost backup. If you are using incremental backups, you must restore the most recent full backup and then apply all the incremental backups to the full backup.
3. Restore the AI files you archived at the same time you made the backup that was lost or corrupted.
4. Use the RFUTIL ROLL FORWARD utility to roll forward the AI files you restored in Step 3. See Chapter 7, After-imaging, for more information.
5. Use the RFUTIL ROLL FORWARD utility to roll forward the current AI files (that is, the AI files in use since the lost backup).
6. Back up the database following the standard backup procedure described in Chapter 5, Backing Up a Database.
7.
8.
9. If two-phase commit is enabled for the database, end two-phase commit using the PROUTIL 2PHASE END utility.

Note: If you end two-phase commit and a coordinator recorded any distributed transaction commits in this database, the user cannot resolve any limbo transactions for that distributed transaction unless after-imaging is also enabled for the database.
If you cannot make enough room on your disk to allow the database engine to perform crash
recovery, you must take further steps. The steps vary, depending on the contents of the full disk,
as explained in the following sections.
1. Back up the least recently used full after-image extent. Use the RFUTIL AIMAGE EXTENT FULL utility to find out which extent this is.
2. Use the RFUTIL AIMAGE EXTENT EMPTY utility to mark the extent as available for reuse.
3. Truncate the BI file using the PROUTIL TRUNCATE BI utility. This rolls back any transactions active at the time the database shut down. You can also start a single-user OpenEdge session to accomplish this.
2. Add one or more data extents to the database using the PROSTRCT ADD utility. This step is not necessary to bring the database back online. However, if you omit this step, the database will shut down when users begin adding data to it. If you have no additional disk space, you must install a new disk or delete some data from the database before you can bring the database online.
3.
1. Use the PROSTRCT ADD utility to add a BI extent on a disk that contains some free space. The maximum amount of additional BI space you require to recover the database is variable, and may exceed the current size of the database. If you have a disk with this much free space, assign the new BI extent to that volume. Note that this is a worst-case space requirement; the BI file normally grows much less than this. Adding the BI extent makes the variable-length extent a fixed-length extent.
2. Start a single-user OpenEdge session against the database. This begins database crash recovery. Exit the single-user session.
3.
proutil -C DBIPCS
ID   ShMemVer   Seg#   InUse   Database
Specifies the existing structure description file. If a structure description file is not
specified, PROSTRCT BUILDDB assumes the name is db-name.st.
PROSTRCT BUILDDB does minimal validation of the resulting control area.
Caution: If you are missing any database extents when you run PROSTRCT UNLOCK,
PROSTRCT replaces any missing extents with empty formatted extents and
displays a message. You can determine whether this has occurred the next time
any utility or program tries to open the database.
2. Use the Data Administration tool if you are using a graphical interface, or the Data Dictionary if you are using a character interface, to dump the data definitions and the table contents.
3.
4.
5. Restart the database and reload the dumped data definitions and data into the new database.
In this procedure, data from the item table is dumped into the item.d file. Use the item.d file
with the reload procedure. For more information on loading, see Chapter 15, Dumping and
Loading.
In the p-dump.p procedure, you must set the end value for the DO block high enough (10,000 in
the previous example procedure) so that every record in the database is examined. Calculate a
safe value using the following formula:
The database block size varies among systems. Use the PROSTRCT STATISTICS utility to
determine the database block size for your system.
2.
3.
4. Use any of the database administration tools or utilities to dump and then load the database. See Chapter 15, Dumping and Loading, for more information.
Caution: Forcing access to a damaged database is an action of last resort and might not result in successfully opening the database. If the database cannot be opened, none of the OpenEdge recovery tools can be used to dump its contents.
7
After-imaging
The after-imaging feature lets you recover a database that was damaged when a failure caused
the loss of the database or primary recovery (before image) area. When you enable
after-imaging, the database engine writes notes containing a description of all database changes
to the after-image (AI) files. You can use the AI files with the roll-forward recovery process to
restore the database to the condition it was in before you lost the database, without losing
completed transactions that occurred since the last backup. This chapter describes how to use
after-imaging in the following sections:
After-image sequences
Disabling after-imaging
The database engine fills the AI areas in the order that you define them in the structure
description file. When you are defining areas, you can store more than one AI area on a
disk. However, you should store all the AI areas on disks other than the one that contains
the database (DB) files or primary recovery area (BI) files.
For both fixed-length and variable-length extents, the database engine automatically
switches extents when the current extent becomes full, as long as the next extent is empty.
If you define three large fixed-length extents, you can use extent 1 for a full day's worth
of transactions, and have extent 2 empty and ready to use when you need to switch over to
the next extent. This also leaves extent 3 available if you perform an unusually high
number of transactions and use both extents 1 and 2.
The database engine uses AI areas sequentially, in the order defined in the structure description
file. AI area filenames have a .an extension, where n indicates the numerical order in which you
defined the area. After it uses the last area, the database engine reuses the first area if it is empty.
Figure 7-1 illustrates this behavior. An extent switch is the operation of switching from one AI
area extent to another.
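The extent-switch rule described above can be sketched as follows (a simplified in-memory model; the engine, of course, tracks real extent files on disk):

```python
# Sketch of sequential AI extent use with wraparound.

def next_extent(extents, busy):
    """Return the index of the next extent after an extent switch, or
    None if the next extent is not EMPTY (the database stalls or shuts
    down until an extent is archived and emptied)."""
    nxt = (busy + 1) % len(extents)           # wrap from .a3 back to .a1
    return nxt if extents[nxt] == "EMPTY" else None

extents = ["FULL", "BUSY", "EMPTY"]           # .a1, .a2, .a3
assert next_extent(extents, 1) == 2           # switch to the empty .a3

extents = ["FULL", "FULL", "BUSY"]            # .a1 not yet archived
assert next_extent(extents, 2) is None        # cannot reuse a FULL extent
```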
Figure 7-1 (diagram): the AI area 1, AI area 2, and AI area 3 extents are used in a circular sequence.
You must monitor the status of the extents to ensure that you do not try to reuse an unavailable file. For information on monitoring the status of your AI extents, see the Monitoring AI file status section on page 7-9.
Fixed-length extents
Variable-length extents
Fixed-length extents
Fixed-length extents are extents that are preallocated and preformatted. With fixed-length
extents, you control how much disk space each extent uses by specifying the size in the structure
description file.
Variable-length extents
Variable-length AI extents do not have a predefined length. They continue to fill until they use
the entire disk, you back up the database, or you issue the RFUTIL AIMAGE NEW command.
The initial length of a variable-length extent is 128K. You can define more than one
variable-length AI area for a single database.
1. Create the structure description file for the database. See Chapter 1, Creating and Deleting Databases, for a complete description of how to create a structure description file.
a.
b.
c. Define any fixed-length AI extents. You must define four tokens in the file description line. Table 7-1 describes each token and value you enter for fixed-length files. For example:

   db/mary/apdir/test.a1 f 2048

   Table 7-1: File tokens (tokens: storage area type; extent path and file name; extent type; extent size)

d. Define any variable-length after-image extents. Only specify the first two tokens: the extent type and pathname. Unlike data or BI extents, you can define more than one variable-length after-image extent for a database.
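Putting the tokens together, the AI section of a structure description file might look like this (hypothetical paths and sizes; the leading `a` denotes an after-image area, `f 2048` marks a fixed-length extent with size token 2048, and the last line is a variable-length extent with only the first two tokens):

```
a db/mary/apdir/test.a1 f 2048
a db/mary/apdir/test.a2 f 2048
a db/mary/apdir/test.a3
```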
2. Create the empty database structure using the PROSTRCT utility. For more information about the structure description file and PROSTRCT, see Chapter 14, Maintaining Database Structure.
3. Create the initial database contents using the PROCOPY or PROREST utility. For more information about the PROCOPY and PROREST utilities, see Chapter 23, Other Database Administration Utilities.
1. Create a structure definition file containing the descriptions of the files you are adding to your database. See Table 7-1 for the tokens used to describe the files.
2. Use PROSTRCT ADD to add the new files to your database. For more information about the structure description file and PROSTRCT, see Chapter 14, Maintaining Database Structure.
3. Use PROSTRCT LIST to save your updated database definitions in a structure file.
1. Create the AI areas in your database. See the Creating after-image areas section on page 7-5 for detailed instructions.
2.
3. After you back up your database, use the following syntax to enable after-imaging:

Syntax
rfutil db-name -C aimage begin

If the database has been modified since it was last backed up, RFUTIL displays an error message and does not enable after-imaging.
1. Create the AI areas in your database. See the Creating after-image areas section on page 7-5 for detailed instructions.
2. Simultaneously back up your database online and enable after-imaging with PROBKUP. The syntax for the command is:

Syntax
probkup online dbname output-device enableai

For complete PROBKUP syntax information, see the PROBKUP utility section on page 23-13.
Archiving an AI file
Many of these steps can be automated using the AI File Management utility; however, it is important to understand the manual steps involved in the process. For details on the utility, see the AI File Management utility section on page 7-16. The following sections detail each task in the manual process.
Full: When a busy AI extent fills up, an extent switch occurs; the state of the AI extent changes to full and the next AI extent becomes busy.

Locked: When running OpenEdge Replication, a full extent is locked until the contents of the extent have been replicated to the target database. When the data is replicated, the extent is unlocked and marked full.
PROMON
Use PROMON R&D -> Status Displays -> AI Extents to display information about each extent, including status:
09/13/05        Status: AI Extents
15:28:44

Area  Status  Type  File Number  Size (KBytes)  Extent Name
13    BUSY    Fix   1            505            /usr1/101A/docsample.a1
14    EMPTY   Fix   0            505            /usr1/101A/docsample.a2
15    EMPTY   Var   0            121            /usr1/101A/docsample.a3
The query-option specifies the information you want to gather about the AI extent. The
search-option specifies how you identify the AI extent to query. The
search-value specifies the match criteria for the search-option. For example, if you
want to know when after-imaging started writing to a particular extent, identified by its
sequence number, use the following command:
If you want all the information about an extent, specified by its name, use the following
command:
Extent:  1
Status:  Busy
Type:    Fixed Length
Path:    /dbshare/databases/db1.a1
Size:    120
Used:    1
Start:   Wed May 26 15:06:49 2004
Seqno:   1
For more complete syntax details, including the query-option and search-option
values, see the RFUTIL AIMAGE QUERY qualifier section on page 22-18.
RFUTIL AIMAGE EXTENT FULL displays the filename of the oldest full file. You can then
use this information to archive extents in the order in which they were filled. Although there
might be multiple full files, this command displays the pathname of the oldest full file:
For more information, see the RFUTIL AIMAGE EXTENT FULL qualifier section on page 22-14.
When the current fixed-length AI extent is full, or when the disk holding the current
variable-length AI extent is full
Except when you switch to a new extent because the current extent is full, switching to a new
AI extent establishes a starting point for backup; after you restore the backup, you roll forward
starting from that extent.
Note:
When you perform an online backup, PROBKUP automatically switches over to a new
extent as long as the next extent is empty. Before you perform the online backup, make
sure that the next extent is empty.
A fixed-length extent has a predefined size, so the database engine can determine when the
extent becomes full.
In contrast to a fixed-length extent, a variable-length extent does not have a predefined
maximum size. Therefore, the database engine cannot anticipate when the extent is about to
become full. Unless you force a switch using RFUTIL AIMAGE NEW, the database engine
continues writing to the extent until an operating system limitation is reached, you reach the
2GB addressable AI file limit without large files enabled, or there is no more room left on the
disk. When the extent becomes full, the database engine automatically switches to the next
extent, provided that the next extent is empty. For more information on large files, see the
PROUTIL ENABLELARGEFILES qualifier section on page 20-47.
If the next extent is full, the database engine shuts down the database. However, you can use the
After-image Stall (-aistall) parameter to suspend database activity and send a message to the
log file, or you can use the RFUTIL qualifier AIMAGE AIOFF to disable after-imaging. If you
use -aistall, you can archive the oldest full extent and mark it as empty; the system then
automatically switches to that extent and resumes database activity. For more
information on the -aistall parameter, see Chapter 18, Database Startup Parameters. If you
use RFUTIL AIMAGE AIOFF, after-imaging is disabled and can no longer write notes.
Note:
You can only use the -aistall parameter and RFUTIL AIMAGE AIOFF in multi-user
mode.
When the database engine suspends database activity or shuts down the database, it sends the
following message to the log file:
The database engine cannot resume database activity until the next extent is backed up and
marked as empty.
You can manually perform an online AI extent switch if you want to archive the AI file at
regularly scheduled times instead of waiting until the extent becomes full.
To switch to the next extent in the sequence:
1.
Make sure the next extent in the sequence is archived. If not, archive it. See the Archiving
an AI file section on page 7-13 for details.
2.
Use the RFUTIL AIMAGE NEW utility with the following syntax:
Syntax
rfutil db-name -C aimage new
When you issue the RFUTIL AIMAGE NEW command, RFUTIL changes the status of the
current extent to full and changes the status of the next file to busy. For more information on
this command, see Chapter 22, RFUTIL Utility.
Archiving an AI file
Backing up the AI file involves:
Scheduling backups
Scheduling backups
Depending on your database backup schedule and needs, you might want to schedule AI file
backups:
On a daily basis.
You should consider backing up the AI files on a daily basis if:
You are on a weekly full backup schedule and the AI files will grow very large during
the week.
You want to perform daily backups of the AI files instead of performing incremental
backups. However, it is quicker to restore your database from incremental backups
than by rolling forward AI files.
If you are using a single AI file, it is important to back up the AI file before you fill the disk that
contains it. If you do not back it up in time, the database engine shuts down the database. For a
complete description of how to recover from a full AI disk, see Chapter 6, Recovering a
Database. Also, if you are using a single AI file, you must shut down the database to switch to
a new AI file.
If you are using multiple AI extents, you must back up the extents regularly to ensure that the
system does not run out of AI space. Before deciding to back up the AI files every day, consider
that recovering the database from small AI files is more intricate than recovering from a single,
large AI file.
Performing the backup
You must use an operating system utility to back up the AI files regardless of whether you are
using a single AI file or multiple AI files. Ensure that the backup technique backs up the entire
file. On many UNIX systems, certain utilities (for example, cpio) will back up only the first
part of files over a certain size (controlled by the ULIMIT parameter). Backups of partial AI files
are invalid and unusable. If you use ftp to transfer the AI files to a different machine, you must
use binary mode; failing to do so will leave your AI files in an unusable state.
Protecting the backup
After you back up the AI file, make sure you:
Label the backup. Properly labeling backups helps you ensure database integrity. Include
the following on the label:
Volume number and total number of volumes of the media, even if there is only one
volume
Keep backups in an area other than where the computer is located, preferably in another
building.
In the event of building damage, you are less likely to lose both the online and backup
versions of the files.
The output file of extracted blocks is equivalent to the source AI extent. Use the file of extracted
blocks with the RFUTIL utilities to roll forward a target database. For complete syntax
information, see the RFUTIL AIMAGE EXTRACT qualifier section on page 22-16.
Note:
Extracting blocks from an AI extent is only beneficial for fixed-length extents that are
not filled to capacity. There are minimal savings of disk space when extracting
blocks from a variable-length extent.
extent-path ]
Use the RFUTIL AIMAGE EXTENT LIST or RFUTIL AIMAGE EXTENT FULL
utility to determine the extent-number or extent-path.
AI File Management has two modes: automatic and manual. The automatic mode allows users
with little or no experience with after-imaging to quickly get started. In automatic mode, the
utility handles AI extent archival for you. In manual mode, the knowledgeable user has
greater control over the process of archiving AI extents.
1. Awake from the five-second sleep and archive all FULL AI extents.
2. Check to see if the time interval has expired. If the interval has expired, switch the current extent. Switching causes the current BUSY extent to be marked FULL, and the next EMPTY extent to be marked BUSY.
3.
4.

It is possible that there will be no FULL extents to archive on many iterations of this loop. After
the timer expires, there will be at least one FULL extent to archive, the one marked FULL in
Step 2. On a busy system, it is possible that additional extents fill during Step 3 and Step 4 of
the archiving process. They are archived the next time the daemon awakes.
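One pass of the timed daemon loop above can be simulated as follows (a toy in-memory model; the real daemon operates on the database's extent files):

```python
# Toy simulation of one pass of the AI File Management daemon's loop.

def daemon_pass(extents, interval_expired):
    archived = []
    # Step 1: archive every FULL extent and mark it EMPTY.
    for i, st in enumerate(extents):
        if st == "FULL":
            archived.append(i)
            extents[i] = "EMPTY"
    # Step 2: if the archive interval has expired, switch the current
    # extent: BUSY -> FULL, next EMPTY -> BUSY.
    if interval_expired:
        busy = extents.index("BUSY")
        nxt = (busy + 1) % len(extents)
        if extents[nxt] == "EMPTY":
            extents[busy] = "FULL"
            extents[nxt] = "BUSY"
    return archived

extents = ["FULL", "BUSY", "EMPTY"]            # .a1, .a2, .a3
assert daemon_pass(extents, interval_expired=True) == [0]
# .a1 archived and emptied; .a2 switched to FULL (it is archived on the
# next pass); .a3 is now BUSY:
assert extents == ["EMPTY", "FULL", "BUSY"]
```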
1. Awake from the five-second sleep and archive all FULL and LOCKED AI extents.
2. Check to see if the time interval has expired. If the interval has expired, switch the current extent. Switching causes the current BUSY extent to be marked LOCKED, and the next EMPTY extent to be marked BUSY.
3.
4.
The difference in the archiving process when OpenEdge Replication is enabled is that extents
cannot be emptied until they have been fully replicated. Extents transition from the BUSY state
to the LOCKED state. If a LOCKED extent is replicated before it is archived, it transitions to
the FULL state and the AI File Management daemon archives it. If a LOCKED extent is
archived before it is replicated, it transitions to an ARCHIVED state, and it becomes the
responsibility of OpenEdge Replication to transition it to an EMPTY state when replicated.
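The state transitions just described can be sketched as a small transition table (hypothetical helper names; the states are those named in the text):

```python
# Sketch of AI extent state transitions when OpenEdge Replication is
# enabled, as described above.

def on_extent_full(replication):
    # A filling BUSY extent becomes LOCKED under replication, else FULL.
    return "LOCKED" if replication else "FULL"

def on_replicated(state):
    # Replication completing: LOCKED -> FULL, ARCHIVED -> EMPTY.
    return {"LOCKED": "FULL", "ARCHIVED": "EMPTY"}[state]

def on_archived(state):
    # The daemon archiving: FULL -> EMPTY, LOCKED -> ARCHIVED.
    return {"FULL": "EMPTY", "LOCKED": "ARCHIVED"}[state]

# Either ordering of replication and archiving ends with an EMPTY extent:
assert on_archived(on_replicated(on_extent_full(True))) == "EMPTY"
assert on_replicated(on_archived(on_extent_full(True))) == "EMPTY"
```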
Figure 7-2 shows the state transitions graphically.

Figure 7-2 (diagram): AI extent state transitions among Empty, Busy, Locked, Full, and Archived.
1. Awake from the five-second sleep. Archive all FULL AI extents, and mark them as EMPTY.
2. Normal database activity causes the current BUSY extent to fill. The extent is marked FULL, and the next EMPTY extent is marked as BUSY.

1. Awake from the five-second sleep. Archive all FULL and LOCKED AI extents. LOCKED extents are marked as ARCHIVED; FULL extents are marked EMPTY.
2. Normal database activity causes the current BUSY extent to fill. The extent is marked LOCKED, and the next EMPTY extent is marked as BUSY.
3. When OpenEdge Replication has replicated the LOCKED extent, it will mark the extent EMPTY.
4.
1. Verify that your database has after-imaging areas and that after-imaging is enabled. If not, see the Enabling after-imaging offline section on page 7-7 for instructions.
2.
3.
4.
1. Verify that your database has after-imaging areas and that after-imaging is enabled. If not, see the Enabling after-imaging online section on page 7-8 for instructions.
2.
3. Enable your database for AI file management with an online backup. The backup provides the baseline for rolling forward extents:
In both examples, two directories are specified as output destinations for the archived AI files,
/usr1/aiarchives/mydb and /usr2/aiarchives/mydb. If you do not create the directories,
you can add an additional parameter, -aiarcdircreate, to instruct the AI file management
utility to create the archive directories. In Windows, if you specify more than one directory,
place the directory list in quotes. AI File Management switches to the second directory when
there is insufficient space to write to the current directory (the disk is full). If you specify
multiple archive directories, they should be on separate partitions or disks. The -aistall
parameter is implicitly included.
Setting -aiarchinterval to 120 directs the daemon to wake and archive extents every 2
minutes. You can specify an interval as small as 1 minute, and as large as 24 hours. If you omit
the -aiarchinterval parameter, the daemon wakes each time an extent fills, regardless
of the amount of elapsed time.
If you are unsure of a time interval, you can initially omit -aiarchinterval, and then switch to
a timed interval based on the frequency of archives as recorded in the archive log file.
To modify the directory list where your AI extents are archived, use the following command:
rfutil mydb -C aiarchiver setdir
/usr3/aiarchives/mydb/,/usr4/aiarchives/mydb/
The directories specified with this command replace any previously specified archive
directories. If the directories do not exist, you can add an additional parameter,
-aiarcdircreate, to instruct the AI file management utility to create the archive directories.
To modify the time interval for your AI file management utility daemon, use the following
command:
rfutil mydb -C aiarchiver setinterval 600
This command increases the timer of the file management daemon to ten minutes.
To force the next FULL AI extent to be archived immediately, use the following command:
rfutil mydb -C aiarchive nextextent
This command archives the next FULL extent to the previously specified directory list. If the
daemon is operating in timed mode, the timer is reset. If your database stalls because all your
AI extents are full, this command will free an extent. After clearing the stall, set the archive
interval to a smaller number or to 0 to prevent another stall.
For complete syntax of the AIARCHIVE and AIARCHIVER qualifiers to RFUTIL, see
Chapter 22, RFUTIL Utility.
Archived extents
Extents archived automatically by the AI File Management utility daemon are assigned a
unique file name by the archive process. The archived file name is composed of file names
and sequence numbers that identify the source of the archive file and the state of the source
database at the time of the archive. The elements are:
The full file specification of the source database, with a ~ replacing the directory
separator.
In Windows, the colon (:) following a drive letter is replaced by an exclamation point (!).
The backup sequence number in the database master block at the time the extent became
BUSY.
directory~spec~database.backup-sequence.ai-sequence.extent-name
For example, if the name of your archive file is usr1~dbs~sales.01.03.sales.a3, you can
reconstruct the following information:
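The name format above can be unpacked mechanically. This sketch (a hypothetical helper, assuming a Unix-style absolute path and no periods in the directory part) recovers the components of the example file name:

```python
# Sketch: unpack an archived AI file name of the form
#   directory~spec~database.backup-sequence.ai-sequence.extent-name
# (Unix-style path assumed; in Windows a drive letter's ':' appears
# as '!' and would need extra handling.)

def parse_archive_name(name):
    head, buseq, aiseq, ext_base, ext_suffix = name.split(".")
    return {
        "database": "/" + head.replace("~", "/"),
        "backup-sequence": int(buseq),
        "ai-sequence": int(aiseq),
        "extent-name": ext_base + "." + ext_suffix,
    }

info = parse_archive_name("usr1~dbs~sales.01.03.sales.a3")
assert info["database"] == "/usr1/dbs/sales"
assert info["backup-sequence"] == 1
assert info["ai-sequence"] == 3
assert info["extent-name"] == "sales.a3"
```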
The second line of the example above provides version and startup information and is formatted
as described in Table 7-2, whose values include the date and time.
After the header lines, each line of the file describes an archive or backup event with a
comma-separated list of details.
Archive line
An archive line is formatted as follows:
indicator,database,date,time,buseq,aibegin-date,aibegin-time,aiseq,extent-name,target-directory,target-extent-name

The values are: the AI extent archive indicator, the database, the date and time the archive
line was written, the backup sequence (buseq), the AI begin date and time, the AI sequence
(aiseq), the archived extent name, the target directory, and the target extent name.
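As a sketch of how such a line could be consumed, the following splits one archive line into named shell variables. The sample line and all of its field values are invented for illustration:

```shell
# Split a comma-separated archive-log line into named fields.
# The field order follows the archive-line layout described above;
# the sample line itself is fabricated.
line='1,/usr1/dbs/sales.db,20090321,153000,1,20090321,120000,3,sales.a3,/usr1/archives,usr1~dbs~sales.01.03.sales.a3'

IFS=, read -r indicator database date time buseq aibegin_date aibegin_time \
    aiseq extent_name target_dir target_name <<EOF
$line
EOF

echo "extent $extent_name (AI sequence $aiseq) archived to $target_dir/$target_name"
```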
Backup line
A backup line is formatted as follows:

indicator,database,date,time,online-indicator,buseq,backup-date,backup-time,incr-seq,aigennbr,target-count,backup-targets,backupset-name

The values (Table 7–4) are: the backup indicator, the database, the date the backup line was
written to the archive log file (in YYYYMMDD format), the time the backup line was written
to the archive log file (in HHMMSS format), the online indicator (0 for offline, 1 for online),
the backup sequence (buseq), the backup date and time, the incremental sequence (incr-seq),
the AI generation number (aigennbr), the target count, the backup targets, and the backup
set name.
Caution: PROSTRCT REORDER AI will rename AI extent files, changing the .an extension.
Area numbers for the reordered extents will also change. It is critical that you use
PROSTRCT LIST to generate an accurate .st file for the database after running this
utility. For more information on generating a new .st file, see the PROSTRCT
LIST qualifier section on page 21–10.
The following example illustrates the effect of PROSTRCT REORDER AI on your AI extents.

Figure 7–3 shows the original AI extent configuration: A1 (Status: Busy, Size: Fixed),
A2 (Status: Full/Locked, Size: Variable), and A3 (Status: Full/Locked, Size: Variable).
The database will shut down when A1 fills because it cannot switch to the full A2 extent.

Figure 7–4 shows the configuration after two empty fixed-size extents are added:
A1 (Busy, Fixed), A2 (Full/Locked, Variable), A3 (Full/Locked, Variable),
A4 (Empty, Fixed), and A5 (Empty, Fixed).

Adding the new extents is not sufficient to prevent a database shutdown. The database will
still shut down when A1 is filled because after-imaging still cannot switch to the full A2
extent. Use PROSTRCT REORDER AI to rearrange the extents. Figure 7–5 shows the
result of the PROSTRCT REORDER AI: A1 (Busy, Fixed), A2 (Empty, Fixed),
A3 (Empty, Fixed), A4 (Full/Locked, Variable), and A5 (Full/Locked, Variable).
After the reorder is complete, after-imaging can successfully switch extents. The result of
the reordering is that the empty extents are moved to immediately follow the busy extent.
Their area numbers and file name extensions have been altered to reflect the move. The
physical location and size of the files on disk have not been altered.
For complete syntax information, see the RFUTIL ROLL FORWARD qualifier section on
page 22–25.
The ROLL FORWARD qualifier fails if:
If the system fails while you are running the ROLL FORWARD operation, restore the database
files again and rerun the ROLL FORWARD operation.
The ROLL FORWARD qualifier always disables after-imaging for the database before
beginning the roll-forward operation. After the roll-forward has completed, you must re-enable
after-imaging with the AIMAGE BEGIN qualifier if you want continued AI protection.
To perform a partial roll-forward recovery, use the endtime or endtrans options. The endtime
option lets you roll forward an AI file to a particular point. The endtrans option lets you roll
forward an AI file to a particular transaction. For information on using these options, see the
RFUTIL ROLL FORWARD qualifier section on page 22–25.
For more information about the ROLL FORWARD qualifier, see Chapter 6, Recovering a
Database and Chapter 22, RFUTIL Utility.
After-image sequences
The SEQUENCE qualifier to the RFUTIL utility provides database administrators the ability to
update after-image sequence numbers and improves your ability to maintain a hot standby
database recovery strategy.
RFUTIL SEQUENCE updates the sequence number of your hot standby target database to
match the sequence number of the AI extent that was BUSY at the time the source database was
copied to create the hot standby.
After executing RFUTIL SEQUENCE on your standby target database, you can keep the target
in sync with the source by rolling forward AI extents, starting with the extent that was BUSY
when the copy occurred.
See the RFUTIL SEQUENCE qualifier section on page 22–30 for the command syntax.
A hot standby is typically maintained by first making a copy of your source database, and then
regularly updating the database by rolling forward the AI extents from the source to the target.
The preferred methods of making a copy of your database use the OpenEdge utilities
PROCOPY or PROBKUP. However, if you use your operating system commands to make a
copy, the SEQUENCE qualifier to RFUTIL is available to update your target database to expect
the correct after-image extent in the roll forward sequence. Correcting the sequence is only
necessary if there has been an AI extent switch on your source database after the database was
last marked as backed up and before the operating system copy was made.
Sequence required
The following code example demonstrates the steps for making a hot standby database using an
operating system copy command, but in this example, updates to the database occur at an earlier
point in the process. These updates require that the sequence of the target database be corrected
prior to rolling forward an AI extent:
The target database must have its sequence corrected prior to attempting to apply an AI extent
with roll forward. Roll forward disables after imaging, and it is impossible to correct the
sequence once after imaging has been disabled. If the roll forward on the target database fails,
you must recopy from the source, use RFUTIL SEQUENCE to correct the sequence number,
and then roll forward.
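The flow described above can be sketched as follows. The database names, paths, extent file name, and the cp command are illustrative, and the exact rfutil invocations should be checked against the qualifier reference:

```
# 1. Mark the source database as backed up.
rfutil src -C mark backedup

# 2. Updates occur on the source, including an AI extent switch.
rfutil src -C aimage new

# 3. Copy the database files to the standby location with OS commands.
cp src* /standby/

# 4. An extent switch occurred after the backup mark, so correct the
#    target's expected AI sequence before any roll forward.
rfutil /standby/src -C sequence

# 5. Roll forward, starting with the extent that was BUSY at copy time.
rfutil /standby/src -C roll forward -a src.a2
```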
Disabling after-imaging
You use the RFUTIL utility to disable after-imaging.
To disable after-imaging:
1.
2.
3.
Disable after-imaging. Use the AIMAGE END qualifier with the RFUTIL utility:
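For example, assuming a database named mydb:

```
rfutil mydb -C aimage end
```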
You can also use the AIMAGE AIOFF qualifier with RFUTIL. Use AIMAGE AIOFF when you
need to temporarily disable after-imaging, such as during scheduled maintenance.
8
Maintaining Security
As an OpenEdge database administrator, you want to ensure that only authorized users connect
to a database, as well as prevent unauthorized users from modifying or removing database files
and objects. This chapter explains how to maintain the security of your OpenEdge database, as
described in the following sections:
Connection security
Schema security
For more information on the security features of OpenEdge, see OpenEdge Getting Started:
Core Business Services.
Caution: Before changing your database's security implementation, make a backup of the
original database. This way you can restore the original database if you cannot undo
some of the security measures you set. For example, if you are the sole security
administrator and you forget your user ID or password, you might be denied access
to the database and the data contained in it. However, if you have a backup of the
original database, you can restore it. To prevent such a mishap, designate at least two
security administrators for each database.
Maintaining Security
OpenEdge user ID
A user ID is a string of up to 12 characters associated with a particular OpenEdge database
connection. User IDs can consist of any printable character or digit except the following: #, *,
!, and @. User IDs are not case sensitive; they can be uppercase, lowercase, or any combination
of these. A user ID can be blank, written as the string "", but you cannot define it as such
through the Data Dictionary.
OpenEdge password
A password is a string of up to 16 characters that is associated with a user ID. When you add
the password to the user list, the password is encoded with the ENCODE function. Because
ENCODE returns different values for uppercase and lowercase input, all OpenEdge passwords
are case sensitive.
Use the Data Dictionary to designate an ABL DBA user ID and use it to grant access
privileges.
Log into the database and use the GRANT statement to designate additional SQL DBAs, and
use the CREATE USER and DROP USER statements to add and delete user IDs.
Connection security
Connection security ensures that only authorized users connect to a database. To establish
connection security, you must create a list of valid users, and appoint one or more security
administrators. Once connection security is established, users who are not security
administrators have limited access to the Security menu; they can only modify their own
passwords and run a quick user report.
To create a list of valid users:
1.
Access the graphical Data Administration tool, or the character Data Dictionary, and
choose Admin→ Security→ Edit User List. The Edit User List dialog box appears.
2.
3.
4.
Enter the user name. The User Name field allows you to keep track of the user assigned
to that user ID. Since the value in this field is not used for security purposes, you can enter
any text in this field.
5.
Enter the password and click OK. You are prompted to verify the password.
6.
Enter the password again. If you successfully enter the same password, you will see the
user record added to the user list. If you enter a different password, no user record is
created.
7.
To add another user record, click Add again. A new set of fields appears on the screen.
Note: You cannot change a User ID field or Password field from the Edit User List
dialog box once you have added the user. However, you can change a user ID or
password by deleting and then re-creating the user record. Modify only allows you
to make changes to the User Name field.
After a user record is added to the user list, the user can connect to that database using the
password you assigned. Users can then change the assigned passwords to their own private
passwords. See the Changing a password section on page 8–8 for instructions.
If users forget their passwords, you can delete their records from the user list and re-create new
user records for them.
Security Administrators
All users can access the other two options in the Security menu: Change Your Password and
Quick User Reports. Designating security administrators does not limit other users' ability to
create new tables or fields in the database.
You designate security administrators with the Security Administrators dialog box. Access
the dialog box by choosing Admin→ Security→ Security Administrators, or by choosing
CallAdmin from the Edit User List or Edit Data Security dialog boxes in the character Data
Dictionary.
2.
Enter valid user IDs. You can enter many user IDs here, but you must include your own
user ID. Otherwise, the following error message appears:
Use commas, not spaces, to separate user IDs. If you use spaces in the string, they will be
accepted as part of the User ID.
3.
When you are done entering user IDs, choose OK. You are prompted to verify the security
administrator entries.
4.
Click Yes to save the entries or No to return to the Security Administrators dialog box. If
you click Yes, you are returned to the main menu, and the specified user IDs are stored as
security administrators for the database.
Deleting a user
Only security administrators can delete users from the user list.
To delete a user from the user list:
1.
Choose Admin→ Security→ Edit User List from the graphical Data Administration tool
or the character Data Dictionary. The Edit User List dialog box appears.
2.
Select the user you want to delete, then click Delete. You are prompted to verify that you
want to remove that user record. You cannot delete your own record until all other existing
user records are deleted.
If you delete all user records from the user list, you are prompted to confirm that you want to
remove all security restrictions for the database. If you verify the deletions, all users have
security administrator privileges when they connect to the database. If you choose No, you must
add one or more users to the user list before you can exit to the main window.
Changing a password
Users can change their own passwords; they do not need security administrator privileges.
To change your password:
1.
Choose Admin→ Security→ Change Your Password from the graphical Data
Administration tool or the character Data Dictionary. You are prompted to enter your new
password.
2.
Enter your new password. Remember that passwords are case sensitive. You are prompted
to verify the new password.
As security administrator, you can change a user's user ID or password, but only by deleting
and then re-creating the user record. This way, users cannot be locked out of a database if they
forget their user IDs or passwords.
Caution: Do not try to bypass the Data Dictionary or Data Administration tool to modify
passwords. You might lock yourself out of the database.
To run a quick user report:
1.
Choose Admin→ Security→ User Report from the graphical Data Administration tool
or the character Data Dictionary. A report similar to the following is generated:
2.
Click Print to output the report to a printer or file. The Print Options dialog box appears.
3.
Specify the report destination. You can print the report, or save it to a file. If you want to
save the report to a file, specify the filename and whether to append the report to an
existing file.
4.
Specify the page length, then click OK. The report is directed to the specified output
destination.
Schema security
Schema security ensures that only authorized users can modify table, field, and index
definitions. To establish schema security, use the Data Dictionary Freeze/Unfreeze utility to
lock or freeze table, field, and index definitions in the database so that unauthorized users cannot
make any modifications to them. Schema modifications change file time stamps, or cyclic
redundancy check (CRC) values, making all procedures that reference the database invalid.
After you freeze a database, no user can make any changes to it.
You can unfreeze frozen tables to make changes, such as adding or deleting fields or indexes or
modifying field or index definitions.
To freeze or unfreeze a table:
1.
Choose Utilities→ Freeze/Unfreeze from the graphical Data Administration tool or the
character Data Dictionary. The tables defined for the working database are listed
alphabetically.
2.
Choose the table you want to freeze or unfreeze. The Freeze/Unfreeze Table dialog box
appears.
As security administrator, you can control which users have authority to freeze or unfreeze
database tables. To freeze or unfreeze tables, the user must have can-write access to the _File
table and can-write access to the _File_Frozen field.
This permission ensures that only the owner of the database file can access the file directly.
9
Auditing
As part of the OpenEdge core business services strategy, auditing provides an efficient and
scalable mechanism to produce an audit trail of access to an application's operations and data.
The following sections are discussed:
Auditable events
Auditing states
Auditing tables
Auditable events
Auditing in OpenEdge is highly customizable. You can coarsely track events in your application
at a very high level, and you can finely track changes to an individual field at a low level. A
defined set of tracked events is called a policy. Policies are created using the Audit Policy
Maintenance. For details on creating and maintaining policies, see the Audit Policy
Maintenance online Help.
There are many types of activity that can be audited. The following sections describe the
categories of auditable events:
Audit events
Security events
Schema events
Data events
Administration events
Utility events
User events
Application events
Audit events
Audit events record changes to the auditing process and policy definitions, and the manipulation
of the audit trail data. Specific events include:
Removing audit administrator permissions from the last authorized audit administrator
Security events
Security events record changes to users, roles, authentication systems, and domains. Security
events also record changes to SQL privileges. Specific events include creating, updating and
deleting any of the following:
A user account
A SQL DBA
An authentication system
An authentication domain
A role definition
A role assignment
Schema events
Schema events record changes to your database schema. Specific events include creating,
updating and deleting any of the following:
A table
A table trigger
A field in a table
A field trigger
An index on a table
A field in an index
A sequence
Data events
Data events record changes to the data in your database. The three specific auditable events
record the create, update, and delete of a record. You can track database events by table or by
field. When auditing events at the field level, table level events must also be defined. Field
events will take precedence over table events.
Administration events
Administration events record the start and stop of your database.
Utility events
Utility events record when database utilities of interest are run against your database. Specific
events include:
Table move
Index move
Index check
Index rebuild
Truncate area
Index fix
User events
User events record user identities, login activity, and database connections. Specific events
include:
Application events
Pre-defined application events track application context and audit event group. Most application
events are user defined and added to your application programmatically at critical locations in the
flow of execution where database events are not occurring. Event groups are used to
programmatically join a series of events into a logical unit. For more information on application
events and event groups, see OpenEdge Getting Started: Core Business Services.
Auditing states
There are three possible states for your database with respect to auditing:

Enabled: Clients and servers from earlier releases (10.0B and prior) cannot start or connect to
the database. Audit privileges of current (10.1A and forward) clients are validated before
granting access to the audit data, protecting your data from unauthorized access.

Disabled: Clients and servers from earlier releases (10.0B and prior) can start and connect to
the database. Access to the audit data by current (10.1A and forward) clients is disallowed.
Audit privileges of current (10.1A and forward) clients are not validated.

Deactivated: Clients and servers from earlier releases (10.0B and prior) cannot start or connect
to the database. Access to the audit data by current (10.1A and forward) clients is disallowed.
Audit privileges of current (10.1A and forward) clients are not validated.
Enabling auditing
The following steps detail the process for enabling a database for auditing. Ensuring the security
of your audit data requires that the audit data tables be empty when you enable auditing. When
you enable your database for auditing, your database can be in one of two possible states:
auditing is disabled (or has never been previously enabled) or auditing is deactivated.
To enable a database for auditing that was not previously enabled:
1.
Create a structure file (.st) defining an area for your audit tables, and optionally an area
for your audit indexes.
2.
Add the areas to your database using PROSTRCT ADD. For more information on creating
a structure file, and using PROSTRCT, see Chapter 14, Maintaining Database Structure.
3.
proutil sample-db -C enableauditing area "Audit Area" indexarea "Audit Indexes" deactivateidx
For a complete discussion of the PROUTIL ENABLEAUDITING syntax, see the PROUTIL
ENABLEAUDITING qualifier section on page 20–43.
Once you have enabled a database for auditing, you must define and activate an auditing policy
to begin generating an audit trail. For information on auditing policies, see the Audit Policy
Maintenance online Help.
Disabling auditing
Use the command PROUTIL DISABLEAUDITING to disable auditing for a database. If a
database has data in the audit data tables, _aud-audit-data and _aud-audit-data-value,
when the command is issued, the database is deactivated rather than disabled. The syntax of
the DISABLEAUDITING command is:
Syntax
proutil db-name -C disableauditing
If the audit data tables are not empty, the following messages report that auditing was
deactivated rather than disabled:

Auditing was not fully disabled because auditing data tables are not empty.
(13647)

Auditing has been deactivated, no additional auditing records will be
recorded. (13649)
3.
Disable auditing with PROUTIL DISABLEAUDITING. Because the audit data tables are
empty, auditing is completely disabled.
Auditing tables
Enabling your database for auditing creates seven new tables in your database meta-schema.
Table 9–1 briefly describes each of the tables. The details on the fields and indexes of each of
these tables are discussed in OpenEdge Getting Started: Core Business Services.
Table 9–1: Auditing tables

Table name               Archived
_aud-audit-data          Yes
_aud-audit-data-value    Yes
_aud-audit-policy        No
_aud-event               Yes
_aud-event-policy        No
_aud-field-policy        No
_aud-file-policy         No
Table 9–2: Auditing table indexes

Table                    Index name            Index type
_aud-audit-data          _AppContext-Id
                         _Audit-time
                         _Connection-id
                         _Data-guid            Primary, Unique
                         _Event-context
                         _Event-group
                         _EventId
                         _Userid
_aud-audit-data-value    _Continuation-seq
                         _Field-name           Primary, Unique
_aud-audit-policy        _Policy-active
                         _Policy-desc
                         _Policy-guid          Primary, Unique
                         _Policy-name          Unique
_aud-event               _Event-desc
                         _Event-guid
                         _Event-id             Primary, Unique
                         _Event-name
_aud-event-policy        _Event-id             Primary, Unique
_aud-field-policy        _Field-name
                         _File-field-owner     Primary, Unique
_aud-file-policy         _Create-event-id
                         _Delete-event-id
                         _File-owner
                         _Guid-file-owner      Primary, Unique
                         _Read-event-id
                         _Update-event-id
_client-session          _Auth-time
                         _Db-guid
                         _Session-uuid         Primary, Unique
_db-detail               _Db-guid              Primary, Unique
Figure 9–1 illustrates the archive process: PROUTIL AUDITARCHIVE writes audit data
from the production database to an archive file, and PROUTIL AUDITLOAD loads that file
into a long-term storage database.
The user performing the audit archive must have the Audit Archive privilege for the source
database; the user performing the audit load must have the Audit Data Archiver privilege for the
target database. For complete utility details, see the PROUTIL AUDITARCHIVE qualifier
section on page 20–11 and the PROUTIL AUDITLOAD qualifier section on page 20–14.
Basic archive process
The most basic archive process is to archive and delete all the audit data from your production
database and load the data into your archive database. If your production database is called
prod-db, and your archive is arch-db, these are the basic steps:
1.
This command archives all the audit data currently in prod-db to your current working
directory, and deletes all the audit data from prod-db. The audit archive file is named
prod-db.abd.
2.
This command loads all the audit data in the archive file
/usr1/prod-db-dir/prod-db.abd into the arch-db database.
This command archives all the audit data currently in prod-db to the file
/usr1/audit_archives/prod-db.abd, but does not delete any of the audit data from
prod-db because of the -nodelete qualifier. The -checkseal qualifier directs the archive
process to verify the data seal of each audit data record prior to archiving it.
2.
This command loads all the audit data currently stored in the
/usr1/audit_archives/_aud-audit-data.abd file into the arch-db database, checking
the seal of each audit data record prior to loading it into the archive database.
3.
This command deletes the audit data archived in Step 1 from the database, and does not
produce a new archive file.
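A sketch of the kind of commands these steps describe follows. The paths are illustrative, and the qualifier spellings should be checked against the PROUTIL reference:

```
# Basic process: archive (and delete) all audit data, then load it.
proutil prod-db -C auditarchive
proutil arch-db -C auditload /usr1/prod-db-dir/prod-db.abd

# Variant: archive to a named directory without deleting,
# verifying each record's data seal on both sides.
proutil prod-db -C auditarchive -directory /usr1/audit_archives -nodelete -checkseal
proutil arch-db -C auditload /usr1/audit_archives/prod-db.abd -checkseal
```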
The utility can alter the database areas defined for audit data and indexes.
Access to the utilities is acquired when the user is granted a privileged auditing role. The user
is identified by the database _User table, or the operating system login (effective process
user-id). Depending on the utility, the user must be granted the Audit Administrator role or the
Audit Data Archiver role. For information on the definitions of these roles and how to grant
auditing roles, see OpenEdge Getting Started: Core Business Services or the Data
Administration online Help.
At the password prompt, the password (guru) must be typed before the AUDITARCHIVE
executes.
Encrypted password:
First, you must encrypt the password using genpassword. Then, when you execute the
AUDITARCHIVE utility (presumably at a later time), specify the encrypted password in
the command.
Local operating system login
If the _User table is not used in the database, the local operating system login (effective process
user-id) identifies the privileged user. This user must be granted at least the Audit Administrator
role. Once the appropriate roles are granted to the user, no further action is required. The utilities
trust the operating system user verification, and the user can execute the utilities without
specifying any additional command-line parameters.
Optionally, a local operating system user-id can be specified on the utility command line by
adding the -userid username qualifier. The consequence of adding -userid is that it requires
a password. The password can be specified with the -password qualifier. If the -password
qualifier is not specified, the utility will prompt for the password to be entered. For the local
operating system user, the password for the enhanced utilities is not the operating system login
password. The utilities require the encrypted database MAC key (DB Pass key) for the
password. The database MAC key is stored in the _db-detail table of the database in the
_db-mac-key field, and is set through the Data Administration tool. For details on setting the
DB Pass Key, see OpenEdge Getting Started: Core Business Services or the Data
Administration online Help. For details on specifying encrypted passwords, see the Specifying
encrypted passwords section on page 9–18.
If your operating system login is sysdba, and you have not established the _User table, and
you have assigned sysdba the Audit Data Archiver role for the database auditexampledb,
then executing the protected PROUTIL AUDITARCHIVE utility for the database would use
one of the following formats:
For this example, assume that the DB Pass Key is ultra_secret_password. First, you
must encrypt the DB Pass Key using genpassword. Then, when you execute the
AUDITARCHIVE utility (presumably at a later time), specify the encrypted DB Pass Key
in the command.
At the password prompt, the DB Pass Key must be typed before the AUDITARCHIVE
executes. The password value is obfuscated as it is typed, and can be either the clear text
value, or the encrypted value, provided it has the proper encryption prefix.
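The two-step flow can be sketched as follows. The encrypted value shown is invented, and the genpassword parameter syntax should be checked against the installation guide:

```
# 1. Encrypt the DB Pass Key (the output value shown is illustrative).
genpassword -password ultra_secret_password

# 2. Later, supply the encrypted value on the utility command line.
proutil auditexampledb -C auditarchive -userid sysdba -password oech1::d3c4b5a6
```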
Specifying encrypted passwords
Encrypted passwords enhance security, particularly when generating maintenance scripts that
contain calls to the utilities with passwords.
An encrypted password is composed of the following components, in order: oec, h1, the
separator ::, and the encrypted password value.
Encrypt clear text passwords with the utility genpassword. See OpenEdge Getting Started:
Installation and Configuration for a detailed description of genpassword.
Utility modifications
The utilities enhanced with user security either access protected auditing data or alter the state
of auditing. This section defines the protected data and enumerates the impacted utilities in the
following sections:
Protected tables
Protected areas
Enhanced utilities
Protected tables
Protected tables are those database tables used in the auditing process and whose manipulation
is restricted to privileged users. There are two types of protected tables: audit data tables and
audit policy tables. Table 9–4 lists the protected tables.
Table 9–4: Protected tables

Table name               Table type
_aud-audit-data          Data
_aud-audit-data-value    Data
_client-session          Data
_aud-event               Policy
_aud-event-policy        Policy
_aud-audit-policy        Policy
_aud-file-policy         Policy
_aud-field-policy        Policy
Protected areas
Protected areas contain audit data and indexes. These areas are specified when auditing is
enabled.
Enhanced utilities
The enhanced utilities operate on the protected tables or protected areas, or alter the status of
auditing. For some utilities, the operation is unconditionally denied. For others, the operation
can execute if the user has the required privilege. Table 9–5 lists the protected utilities. The table
also details the impacted objects and required privilege if applicable, and the access restrictions.
Table 9–5: Protected utilities

Operation                   PROUTIL qualifier     Impacted protected object   Required privilege    Restriction
Bulk load                   bulkload              Protected tables            None                  Denied to all users
Binary load                 load                  Protected tables            None                  Denied to all users
Binary dump                 dump, dumpspecified   Protected tables            None                  Denied to all users
Index fix (delete record)   idxfix                Protected tables            Audit Archiver        Denied to all except Audit Archiver
Truncate area               truncate area         Protected areas             Audit Archiver        Denied to all except Audit Archiver
Archive audit data          auditarchive          Protected tables            Audit Archiver        Denied to all except Audit Archiver
Load audit data             auditload             Protected tables            Audit Archiver        Denied to all except Audit Archiver
Disable auditing            disableauditing       None                        Audit Administrator   Denied to all except Audit Administrator

The privileged utilities accept the following user identification parameters:

-userid username [ qualifier-parameters ] [ -password passwd ]
10
Replicating Data
Data replication is the distribution of copies of information to one or more sites. In a single
enterprise, sites spanning organizational and regional boundaries often require the timely
sharing of transaction data across databases in a consistent manner. Developing and deploying
a successful replication process involves careful planning and input from business experts,
application developers, and administrators.
This chapter contains the following sections:
Replication schemes
Replication models
Replication schemes
A replication scheme is a system of definable tasks and operations that are used to perform data
replication. An enterprise's replication scheme addresses its specific business requirements.
This section summarizes different replication models, data ownership models, and
implementation strategies that are available. A replication scheme can be implemented through
event triggers or through log-based capture.
Trigger-based replication
To implement trigger-based replication, use event triggers stored in the database. When an event
to be replicated occurs (that is, a record is created, modified, or deleted), the database uses the
event to record the change in a replication change log. ABL (Advanced Business Language)
provides full support for trigger-based replication. See OpenEdge Development: ABL
Handbook for more information about trigger-based replication.
Replication models
At the highest level, there are two major models of replication: synchronous and asynchronous.
In a synchronous replication model, data replication occurs within the scope of the original
transaction. In other words, replication occurs transaction by transaction. Typically, this model
is implemented using a two-phase commit protocol. Two-phase commit ensures that distributed
transactions occur consistently across databases. For more information, see Chapter 12,
Distributed Transaction Processing.
Because the data modifications are replicated as part of the original transaction, synchronous
replication ensures high data availability and consistency. The entire transaction is either
committed to both systems or backed out completely.
Asynchronous replication (also known as store and forward replication) allows the replication
to occur outside the scope of the original transaction. The replication might take place seconds,
minutes, hours, or days from the time of the transaction, depending on your business
requirements. Although the replication executes record by record, replication can occur by
transaction. That is, if an order is placed in the system with order lines containing multiple data
changes and these changes are made within the scope of a single transaction, the changes can
be replicated as a single transaction.
Distribution model
In the distribution ownership model, a single master database owns the data. The master
database is the read/write area, and all changes are made to this database only. All changes are
then propagated to the remote sites in a read-only state. The remote sites cannot change the data,
only view it. In terms of replication, the chief advantage to this model is that it greatly reduces
data collision (conflicts between updates to the same record). This is because data changes are
made at one site only.
Figure 10-1 illustrates the data distribution model: changes are made at the central read/write
database and propagated to the remote sites (Site 1, Site 2, and Site 3), which are read-only.

Figure 10-1: Data distribution model
Consolidation model
In the consolidation model, data changes are made at the remote sites and then propagated to
the central database. The central database is read-only and is used for reporting purposes. For
replication, this model increases the frequency of data collision over the distribution model. If
there is a collision of changes by two or more users, the changes are applied on a
first-come-first-served basis.
To avoid data collision, the consolidation model often uses table partitioning. Table partitioning
(also called data ownership) requires that each site own its data exclusively. Changes to data at
each remote site are made exclusively by that remote site's users. A data ownership model might
not be appropriate for your business organization. Although data collisions are avoided, the
ability to update the same record from any site is lost.
Replicating Data
Figure 10-2 illustrates two data consolidation models, one with no data ownership, and the
other with table partitioning. In both models, the remote sites (Site 1, Site 2, and Site 3) are
read/write and propagate their changes to the central database; with table partitioning, each site
updates only the data it owns, and all sites can read the consolidated data.

Figure 10-2: Data consolidation models
In the peer-to-peer model, shown in Figure 10-3, the central database and all remote sites are
read/write, and changes made at any site are propagated to the others.

Figure 10-3: Peer-to-peer model
Log-based site replication supports the creation of a hot standby site that can take over if the
primary site fails. To implement a hot standby:

1. Add after-imaging extents and then enable after-imaging in the primary database. For
information about after-imaging extents and enabling after-imaging, see Chapter 7,
After-imaging.

2. Use the PROSTRCT utility with the LIST option to create a structure description file
containing the central database's data structure. For information about the structure
description file and the PROSTRCT utility, see Chapter 14, Maintaining Database
Structure.

3. With the structure description file produced from the central database, use PROSTRCT
with the CREATE option to create an additional database on the remote system.

4. Perform a backup of the primary database to initialize the secondary database. This step
creates a basis for subsequent roll-forward operations. For information about performing
a backup, see Chapter 5, Backing Up a Database.

5. Restore the backup copy of the primary database to the secondary database.

6. Use the RFUTIL command with the EXTENT FULL option to monitor the after-image
extents. This option determines which after-image extent is ready for replication (or
transfer) to the secondary site. You can transfer the after-image extent file to the secondary
site using an operating system remote-copy command. For more information about
RFUTIL, see Chapter 22, RFUTIL Utility.

7. Once the after-image extent has been transferred to the secondary site, use RFUTIL with
the EMPTY option to mark the extent empty and ready for reuse on the primary database.

8. Implement a process to monitor and transfer full after-image extents (AI extents). You can
copy AI extents to an AI log, then transfer the log contents to the secondary site on a
continuous basis.

9. If it becomes necessary to shift operations to the secondary site, transfer the last full and
busy after-image extents and roll forward to completion. Starting the secondary site
database causes the database to undergo crash recovery, completing the shift to the
secondary site.
For more information about performing roll-forward recovery, see Chapter 6,
Recovering a Database.
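The monitor-and-transfer loop in Steps 6 through 8 can be sketched as a shell script. This is a
minimal sketch, not a supported utility: the database path, extent names, standby host, and poll
interval are hypothetical, and the exact RFUTIL qualifier spellings should be confirmed against
Chapter 22, RFUTIL Utility.

```shell
# Sketch: ship full AI extents from the primary site to the standby site.
# Assumes database /db/central with AI extents central.a1..a3, and a
# standby host named "standby" reachable with scp.
while true; do
    # Ask RFUTIL which AI extent, if any, is full and ready to ship.
    FULL=$(rfutil /db/central -C aimage extent full)
    if [ -n "$FULL" ]; then
        scp "$FULL" standby:/db/ailog/                      # transfer to secondary site
        rfutil /db/central -C aimage extent empty "$FULL"   # mark reusable on primary
    fi
    sleep 60                                                # poll interval
done
```

At the standby site, a companion process would roll the received extents forward into the
secondary database on the same schedule.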
Figure 10-4 shows after-image extent files (AI extent 1, sales.a1; AI extent 2, sales.a2; AI
extent 3, sales.a3) replicated from the primary site to the secondary site through an AI log.

Figure 10-4: After-image extents replicated to a secondary site
11
Failover Clusters
Failover Clusters provide an operating system and hardware vendor-independent solution for
automated fail over of your OpenEdge database and related resources.
This chapter describes Failover Clusters, as detailed in the following sections:
Overview
Platform-specific considerations
Throughout the remainder of this chapter, Failover Clusters is also referred to as Clusters.
Overview
A system cluster comprises two or more machines, known as nodes, tightly integrated
through hardware and software to function together as one machine. In a cluster, redundant
hardware and software are primarily put in place to enable fail over. Fail over is the movement
of a cluster resource from one node in the cluster to another node. If something goes wrong on
one node, or the node needs to be taken offline for maintenance, cluster resources can fail over
to another node to provide continual access. Disk access is a key factor in proper fail over. If
you use a clustered environment and wish to use Failover Clusters, you must have your database
on a shared device. A shared device is a disk that is available to any node in the cluster. If a node
in the cluster has an outage, the shared device is still recognized and available to the remaining
nodes in the cluster, thus providing access to the database.
Failover Clusters provides a simple command-line interface, PROCLUSTER, to your operating
system's clustering software. The Clusters interface is easy to use, and is the same regardless of
the hardware or software platform. This simplifies administration of an OpenEdge database in
a clustered environment. Because Clusters integrates your database into your cluster manager
software, cluster resource administration and fail over mechanisms are enabled for your
database.
Clusters does not replace OS-specific cluster management software. In fact, Clusters requires
that the OS cluster management software and hardware be properly configured. See the
Related software and hardware section on page 11-2 for your specific OS software and
hardware requirements.
The operating system integrates specific components of its clustering technology to monitor the
state of system resources and fail over a resource to another node if the primary node is not
accessible. Clusters tightly integrates with the cluster management software so that OpenEdge
is properly defined as a cluster resource and fails over during planned or unplanned outages. A
planned outage might be a hardware or software upgrade. An unplanned outage might be a
system crash. With Clusters, you can decide ahead of time how clustered resources will behave
during fail over. Failover Clusters eliminates unnecessary downtime and provides continuity of
behavior in the cluster even when the database administrator managing the cluster is not the one
who set it up.
Related software and hardware

Failover Clusters is supported on the following platforms, each with its vendor's cluster
management software:

AIX 5L V5.2
HP-UX (32-bit and 64-bit)
HP-UX (Itanium 2)
HP Tru64
Windows (Microsoft software)

The cluster also requires an IP address and a cluster name.
Installation
Prior to enabling an OpenEdge database for use in a cluster, the OpenEdge Enterprise product
must be installed on every node in the cluster, and the installation path must be identical on every
node. When upgrading from earlier releases, all cluster-enabled databases must be disabled
before the update and re-enabled afterward.
Configuration
You must define and modify the environment variables described for your operating system in
the following sections for Failover Clusters to operate correctly.
AIX
$JREHOME/lib, $JREHOME/jre/bin,
HPUX
For HPUX 32-bit only, set JDKHOME in the default system profile.
Set the JDKHOME environment variable in /etc/profile so that the Java environment can
be configured to properly support SQL Java Stored Procedures and Java Triggers. JDKHOME
should be set to point to the installation for the JDK.
HP Tru64
Modify $DLC/bin/java_env to correctly set up SQL Java Stored Procedures and Java
Triggers.
To modify the script:

1. Open the script in the editor of your choice and locate the string OSF1.

2. Modify the JDKHOME and JREHOME environment variable values to point to the install
directory for each Java package.
Windows
DLC/bin/procluster.bak
Security
OpenEdge conforms to the security model defined by the OS vendor in terms of what users can
create and modify, access rights to the various directories and devices, and rights to start and
stop resources, such as databases.
Performance
Performance of the database should not be affected by the use of Clusters, beyond the additional
separate process required to probe and report on the database's viability to the cluster
management software. Fail over times depend on the cluster environment and will vary.
Logging
PROCLUSTER generates a log file, $PSC_CLUSTER_PATH/PSCluster.log on UNIX and
%PSC_CLUSTER_PATH%\PSCluster.log in Windows, that records the creation and management
of database resources. The log file tracks only the PROCLUSTER commands, so its rate of growth
is very small, and it can be deleted at the customer's discretion.
Physical model
Figure 11-1 shows the physical model of a two-node cluster: two nodes (DB-SERVER-1 and
DB-SERVER-2), each running its own operating system, cluster manager, agent(s), and
application, with private storage on each node, common storage shared between them, and a
cluster-only communication network.

Figure 11-1: Cluster physical model
If the database structure changes or the database is moved to a new common storage device, the
Clusters command-line interface, PROCLUSTER, makes it easy to ensure that the
database is still protected if a fail over event occurs. If the shared device in the cluster needs to
change, use PROCLUSTER to stop the database, and enable it on the new shared device once it
is moved. If extents are added, Clusters provides for registering this change; you need only
start the database once the changes are automatically applied.
Clusters integrates OpenEdge into the operating system cluster not only by making use of the
pre-existing cluster manager software, but also by augmenting OpenEdge feature functionality.
When you enable a database for failover, the master block is updated to reflect this fact. When
you have a cluster-enabled or cluster-protected database, commands are then funneled through
the underlying pscluster executable to the operating system cluster manager. Figure 11-2
shows this relationship. The cluster manager software must know about OpenEdge to handle it
properly in the event of a failover. See the Using the PROCLUSTER command-line interface
section on page 11-11 for information on using PROCLUSTER to cluster-enable a database.
Figure 11-2: How OpenEdge commands reach the cluster manager. The DB utilities
(_dbutil), startup (_mprosrv), shutdown (_mprshut), the AdminServer, and PROCLUSTER
are all funneled through the pscluster executable to the cluster manager: utility programs and
scripts on UNIX, and utility programs and APIs in Windows.
Network considerations
Clusters assumes a virtual server model. Connections to the database must be through the cluster
alias and cluster IP address, which are different from any single node's name and IP address.
Figure 11-1 shows the physical layout of a cluster where the cluster IP address is 192.168.0.01
(the virtual server IP address). For Clusters, clients connect to this IP address and not the address
of one of the individual nodes. In essence, the clients know only about the virtual server
as the node to connect to. Figure 11-3 shows the clients connecting over the network to the
virtual server with its IP address of 192.168.0.01. Within this cluster there might be several
nodes with separate IP addresses, but the clients need not know about each node and must not
connect to the individual nodes or Clusters will not work properly.
Figure 11-3: Clients connect to the virtual server (192.168.0.01), not to individual cluster
nodes.
When a database is cluster-enabled, the following resource attributes and default values apply:

CheckInterval: The amount of time to attempt to restart a database on its current node.
The CheckInterval period is 900 seconds.

Retries: The number of attempts to start the application before initiating failover.
Retries is three.

Delay: The number of seconds to wait prior to initiating fail over. Delay allows the
operating system time to perform cleanup or other transfer-related actions. Delay is
0 seconds.

FailBack: Indicates if the application is to be transferred back to the primary node upon
its return to operation. FailBack is False (Windows).

AutoStart: Indicates to the cluster software that the application should automatically
start without operator intervention. AutoStart is True (Windows).
Cluster-enabling a database
Once a database is created and resides on a shared disk, you can enable the database as a cluster
resource so that it will fail over properly. The database must not be in use when it is enabled as
a cluster resource. Enable the database as a cluster resource with the following command:
procluster db-name enable [-pf params-file] [AI] [BI] [APW=n] [WDOG]
You must specify the fully qualified path of the database you want to enable for fail over. The
database must be located on a shared disk among the cluster nodes, and the shared disk must be
online for the current node. The parameter file contains any parameters that the database
requires when started. The parameter file is required to:
Be named db-name.pf
If the parameter file does not meet these requirements, database startup will fail.
When you enable a database as a resource using PROCLUSTER enable, Clusters performs the
following:
A cluster-enabled database:

Cannot be started without following the prescribed protocol. See the Starting a
cluster-enabled database section on page 11-13.

Cannot be deleted without first being disabled. See the Disabling a cluster-enabled
database section on page 11-12.

Cannot have its physical structure altered without being re-registered. See the Changing
the structure of the database section on page 11-15.
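For example, a database on shared storage might be cluster-enabled as follows. This is a sketch:
the database path and parameter-file path are hypothetical, and the helper-process options shown
are optional.

```shell
# Enable a database on shared storage as a cluster resource, with
# after-imaging (AI), a before-image writer (BI), two asynchronous
# page writers (APW=2), and a watchdog (WDOG).
# /shared/db/sports.pf holds the startup parameters and must follow
# the db-name.pf naming requirement described above.
procluster /shared/db/sports enable -pf /shared/db/sports.pf AI BI APW=2 WDOG
```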
To verify the success of PROCLUSTER enable, examine the database log file (.lg) to see if the
following messages occur:
To verify that the database is created and available on the machine as a cluster resource in
Windows, you can use the Cluster Administrator to verify that the fully qualified path of the
database is visible within the Virtual Server Group area of the tool. On UNIX, you can use the
operating system-specific command to enumerate cluster objects, such as caa_stat on HP
Tru64. In the list of enumerated objects, the database with a UUID appended to the end will be
listed, with the target and state displayed as offline. The following example shows the caa_stat
result:
db_nameE3B35CEF-0000-0000-DC17-ABAD00000000

where the appended string is the UUID for the newly created database resource. For more information on
cluster administration tools and commands, see your operating system cluster documentation.
Note: PROCLUSTER ENABLE registers the database as a cluster resource even if
there are errors in the command for the helper processes. To correct the errors, you
must first use PROCLUSTER DISABLE to unregister the database, then use
PROCLUSTER ENABLE to re-register it without errors.
Disabling a cluster-enabled database

You must specify the fully qualified path of the database to disable. Specifying the database
name automatically disables any other optional dependencies specified when the database was
enabled.
When you remove a database resource using PROCLUSTER disable, Clusters does the
following:
Deletes the resource from the cluster manager software once it is in an offline state
Deletes the group from the cluster manager software if the resource is the last resource in
the resource group
Starting a cluster-enabled database

The database to be started is specified by db-name. The database must have been previously
enabled as a cluster resource. The database name must contain the fully qualified path.
Note: To verify that the database started correctly, use the isalive and looksalive
parameters to PROCLUSTER. See the Isalive and looksalive section on page 11-14
for more information.
Stopping a cluster-enabled database

The database to be stopped with Clusters is specified by db-name. The database must be a
member of the cluster. The database name must contain the fully qualified path. To verify that
the database has successfully stopped, use the isalive and looksalive parameters of
PROCLUSTER. For more information, see the Isalive and looksalive section on page 11-14.

When you stop the database with PROCLUSTER stop, Clusters does the following:

Notifies the cluster that the resource should be stopped without fail over
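Assuming the same hypothetical database path used earlier, starting, probing, and stopping the
cluster resource might look like this sketch:

```shell
procluster /shared/db/sports start       # start through the cluster manager
procluster /shared/db/sports looksalive  # quick status check via the cluster manager
procluster /shared/db/sports isalive     # deeper check: queries the database itself
procluster /shared/db/sports stop        # stop without triggering fail over
```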
The database to be forcefully shut down with Clusters is specified by db-name. The database
must be a member of the cluster. The database name must contain the fully qualified path.
Isalive and looksalive

The looksalive exit status returns the following text if the database looks alive:
The only database state that returns a value of Looks Alive is when the database is enabled and
started. All other states return the following, where db-name is the fully qualified path of the
database:
The isalive exit status returns the following text if the database returns a successful query:
Changing the structure of the database

PROSTRCT handles updating of the cluster manager's resource list
automatically. For more information on PROSTRCT, see Chapter 21, PROSTRCT Utility.
After the change is complete, you must manually start the database with PROCLUSTER.
To correct and register the new extents with the cluster manager:

1. Edit PSC_CLUSTER_REG.TMP and correct the paths for all the extents you added.

2. Execute the following command, which might take a few minutes to complete:

Note: Do not add extents to a volume group or file system already in use by another resource
group.
Platform-specific considerations
There are small differences in the implementation of Failover Clusters on the various operating
systems where it is supported. The following sections note important differences.
Use the cmapplyconf utility provided by MC/Service Guard to register the packages
manually.
Note: The package names can be found in the <dbname>.uuid file.
Disable the database and then re-enable with the desired packages. PROCLUSTER will
ensure automated registration.
The configuration file (.conf), which contains configuration details for the package

The control file (.scr), which contains the scripts for the start, stop, and monitor functions.
Progress Software Corporation recommends that you do not edit this file

The log file, which is created when the resource is brought online for the first time. Once
created, the file remains in place
1. Verify that the database resides on shared disk resources that are available to the currently
active node in the cluster.

2. Start Progress Explorer and connect to the AdminServer for the cluster.

3. Right-click to add a new database. Enter the database name and check the Start this
Database in Cluster mode checkbox.
5. Update the database parameter file dbname.pf that resides in the same directory as the
database .db file. The file should contain the following:

-cluster protected
-properties /usr/dlc/properties/conmgr.properties
-servergroup database.defaultconfiguration.defaultservergroup
-m5
-adminport adminserverport   # substitute the port name or number
The -db and -cluster parameters are the only parameters allowed; any others are ignored. The
effect of this command is that proserve directs the cluster manager to start the database using
the configuration provided when the database was enabled for failover. See the PROSERVE
command section on page 17-9 for a complete description of the PROSERVE command. See
the Cluster-enabling a database section on page 11-11 for information on enabling a
database.
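Assuming a hypothetical cluster-enabled database at /shared/db/sports, the command takes
roughly this form; the exact spelling of the -cluster qualifier value should be confirmed
against the PROSERVE reference:

```shell
# proserve hands the request to the cluster manager, which starts the
# database using the configuration recorded when it was cluster-enabled.
proserve -db /shared/db/sports -cluster startup
```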
To create the database resource with Cluster Administrator:

1. Open a connection to your cluster and select New Resource for the group where you are
adding the database.

2. Fill in Name with the name of the resource. The fully qualified name of the database
is the recommended value.

3. Select the disk(s) and cluster name (or IP address) as resources the database depends on.

4. Fill in the Parameters table for the database resource as follows:

a. Enter the fully qualified name of the database for Database File Spec.

b. Enter the fully qualified path to the working directory of your OpenEdge install for
Working Directory.

c. Enter the name of the command to start the database for Start Command. For
example, C:\Progress\OpenEdge\bin\_mprosrv. Enter only the name of the
command, not the complete command line to execute.

d. Supply the options normally specified on the command line in Start Options. For
example, -pf <parameter-file>.

e. Enter the command used to shut down the database in Stop Command. For example,
C:\Progress\OpenEdge\bin\proshut. Enter only the name of the command. Do
NOT include the database file specification.

f. Enter the options normally used to shut down the database in Stop Options. For
example, -by.

5. Select Finish to complete the operation. The database resource is displayed in the
Explorer window in the Offline state.
Once the database resource is created, it can be started and shut down through the Cluster
Administrator.
To start the database, right-click the database resource and select Bring Online. The state of
the database resource changes from Offline, to Online Pending, to Online in the Explorer
window as the database completely starts up.

To shut down the database, right-click the resource name and select Take Offline. The state of
the database changes from Online, to Offline Pending, to Offline once the database has
completely shut down.
To add a secondary broker to a cluster-enabled database with Cluster Administrator:

1. Select File > New > Resource. The New Resource dialog box appears.

2. For Name, enter the name you want for the secondary broker resource.

3. For Possible owners, choose the same hosts as you selected for your database.

4. Complete the remaining New Resource settings as you did for the database resource.

5. When you have completed the configuration, bring up the Properties dialog box, and fill
in the Parameters table for the database resource as follows:

a. Enter the fully qualified name of the database for Database File Spec.

b. Enter the fully qualified path to the working directory of your OpenEdge install for
Working Directory.

c. Enter the name of the command to start the database for Start Command. For
example, C:\Progress\OpenEdge\bin\_mprosrv. Enter only the name of the
command, not the complete command line to execute.

d. Supply the options normally specified on the command line to start a secondary
broker in Start Options. For example, -pf start_secondary_brkr.pf.

e. Leave Stop Command and Stop Options blank. Shutdown of the database is
handled by the commands specified with the primary broker.
If you want to start more than one secondary broker, modify the procedure to specify a .bat file
for the Start Command entered in Step 5. The .bat file must contain the command lines to start
the secondary brokers.
Clearing the cluster setting allows you to start and manage the database without cluster
protection. Clearing the cluster setting does not clean up cluster-specific objects associated with
the database; you must manually remove these objects.
Note: Use of the PROSTRCT command to clear the cluster setting is for emergencies only.
Under normal circumstances, the PROCLUSTER command should be used to disable
a cluster-enabled database. See the Disabling a cluster-enabled database section on
page 11-12.
The following table summarizes the cluster manager commands that perform each generic
function on each platform:

Default script generation - AIX: n/a; HPUX: cmmakepkg; HP Tru64: n/a; Solaris: n/a

Enumerate cluster objects - AIX: clfindres, clgetgrp, clgetaddr, cllscf; HPUX: cmquerycl,
cmgetconf, cmscancl, cmviewcl, cmviewconf; HP Tru64: caa_stat; Solaris:
scha_cluster_open, scha_resource_open, scha_resourcetype_open

Validate configuration - AIX: clverify_cluster; HPUX: cmcheckconf; HP Tru64: n/a;
Solaris: n/a

View cluster or resource group status - AIX: clstat, clRGinfo; HPUX: cmviewcl; HP Tru64:
caa_report; Solaris: scstat

Register resource - AIX: cl_crlvfs, cl_mkvg, cl_mkgroup, cl_mklv, cl_updatevg; HPUX:
cmapplyconf; HP Tru64: caa_register; Solaris: scha_control, scha_resource_setstatus

Remove resource - AIX: cl_rmfs, cl_rmgroup, cl_rmlv, cl_updatevg; HPUX: cmdeleteconf;
HP Tru64: caa_unregister; Solaris: scha_control, scha_resource_setstatus

Start resource - AIX: n/a; HPUX: cmrunpkg; HP Tru64: caa_start; Solaris: scha_control,
scha_resource_setstatus

Stop resource - AIX: n/a; HPUX: cmhaltpkg; HP Tru64: caa_stop; Solaris: scha_control,
scha_resource_setstatus

Move resource to node - AIX: n/a; HPUX: cmhaltpkg, cmrunpkg; HP Tru64: caa_relocate;
Solaris: scha_control, scha_resource_setstatus
For AIX and HP, shared disks must be made available for use, varied on, and mounted. Use the
commands listed in Table 11-2 to vary on shared resources. For information on any of these
commands, see your cluster manager documentation.

Table 11-2: Commands to vary on shared resources

Vary on a volume group - AIX: varyonvg volgrp; HPUX: vgchange -a y volgrp
Mount a shared file system - AIX: mount /sharedfs; HPUX: mount /sharedfs
Display volume group status - AIX: lsvg -o; HPUX: vgdisplay
12
Distributed Transaction Processing
Distributed transactions involve two or more databases in a single transaction, as described in
the following sections:
Distributed transactions
Distributed transactions
A distributed transaction is a single transaction that updates two or more databases. The
following scenario illustrates how inconsistencies can occur during a distributed transaction. A
bank has two accounts, one on database acct1 and another on database acct2. The bank runs
an application that starts a transaction to withdraw a sum of money from acct1 and deposit it
into acct2. To keep the accounts in balance, it is critical that both operations, the withdrawal
and the deposit, succeed, or that they both fail. For example, if acct1 commits its part of the
transaction and acct2 does not, there is an inconsistency in the data, as shown in Figure 12-1.
Figure 12-1: Data inconsistency. The withdrawal from acct1 commits, but the deposit into
acct2 does not.
Figure 12-2 shows the two-phase commit decision flow. The transaction is committed only if
all databases report that they are ready to commit. If the coordinator commits but another
database aborts, or the coordinator aborts but another database commits, the termination is
abnormal and a limbo transaction occurs.

Figure 12-2: Two-phase commit algorithm
As Figure 12-2 shows, the coordinator database is the first database in the distributed
transaction to either commit or abort the transaction. By keeping track of the coordinator's
action, two-phase commit allows you to resolve any inconsistent transactions.
Figure 12-3 illustrates a limbo transaction. In phase one, the database engine withdraws money
from DB1 (the coordinator) and deposits it in DB2, then asks both databases if they are ready
to commit the transaction. In phase two, the coordinator database commits, but a hardware or
software failure prevents DB2 from committing, and a limbo transaction occurs on DB2.

Figure 12-3: Limbo transaction
When a limbo transaction occurs, you must resolve the transaction to re-establish data
consistency.
Once the coordinator database establishes the status of a distributed transaction, it writes the
status to its BI and TL (transaction log) files. The TL file tracks the status of all distributed
transactions that affect the coordinator database.
Since the database engine continually overwrites the contents of the BI file, the TL file is
necessary to permanently record the status of a transaction. If you must resolve limbo
transactions, the transaction log file ensures that the coordinator has a reliable record of the
transaction.
If you perform roll-forward recovery on a database that has after-imaging and two-phase
commit enabled, RFUTIL disables after-imaging and two-phase commit.
When you roll forward an after-image file that contains coordinator transaction end notes,
RFUTIL writes a transaction log file containing the notes. Also, if two-phase commit is
not enabled, RFUTIL enables two-phase commit for the coordinator database.
See Chapter 6, Recovering a Database, for more information about roll-forward recovery and
after-imaging.
You must create and maintain a transaction log (TL) area for your database in order to
use two-phase commit. For more information, see the Transaction log area section
on page 12-7.
You enable two-phase commit with the PROUTIL 2PHASE BEGIN qualifier. When you
enable two-phase commit, you can specify the database that should serve as the coordinator
database. You can also specify an alternate name (nickname) for the coordinator database.
This is the syntax for enabling two-phase commit:

proutil db-name -C 2phase begin [-crd] [-tp nickname]
Be sure to specify a unique nickname. If you must resolve limbo transactions with two
databases that have the same path name but are on different machines, PROUTIL does
not distinguish between the two databases.
-crd

When you specify -crd, PROUTIL toggles whether or not the database can serve as a
coordinator database. If you specify -crd against a database that is a candidate for coordinator
database, it is no longer a candidate. If you specify -crd against a database that is not a
candidate, it becomes a candidate.

-tp nickname

When you specify -tp nickname, PROUTIL identifies a new nickname for the coordinator
database.
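For example, returning to the two-account scenario, two-phase commit might be enabled as
follows. The database paths and nickname are illustrative only:

```shell
# Enable two-phase commit on both databases; acct1 is made a
# coordinator candidate and given a unique nickname.
proutil /db/acct1 -C 2phase begin -crd -tp acct1main
proutil /db/acct2 -C 2phase begin
```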
When you deactivate two-phase commit, PROUTIL places a note in the database log file.
However, PROUTIL does not delete the database's transaction log file.
Figure 12-4 summarizes how two-phase commit messages are reported. During normal
operations, the server and self-service clients write to the database; a remote client does not
write to the database. When limbo transactions occur, a remote client writes messages to the
screen, a self-service client writes messages to the screen and the log file, and the server writes
messages to the log file.

Figure 12-4: Two-phase commit messaging
Limbo transactions can occur without any messages being displayed on screen; for example, if
a hardware or software failure occurs while a user is running a PROUTIL application or if a user
powers off a client machine. If possible, users on client machines should inform the system
administrator when these events occur. If such an event occurs, examine all of the databases that
might be involved to determine whether any limbo transactions occurred. You can use
PROMON or PROUTIL to examine a database for limbo transactions.
Caution: If an application is performing a distributed transaction when a client machine fails
or shuts down, the transaction remains open. If this continues unchecked, the BI files
of the databases involved in the transaction could grow to fill the disk, as with any
other long-running transaction.
1. Determine whether one or more limbo transactions occurred against a database by starting
the PROMON database monitor. Enter the following command:

promon db-name

When you enter the PROMON utility, the main menu appears:
User Control
Locking and Waiting Statistics
Block Access
Record Locking Table
Activity
Shared Resources
Database Status
Shut Down Database
R&D. Advanced Options
T. 2PC Transactions Control
L. Resolve 2PC Limbo Transactions
C. 2PC Coordinator Information
M. Modify Defaults
Q. Quit
2. Choose option T (2PC Transactions Control). The 2PC Transactions Control menu
appears.

3. Choose 1 (Display all entries). PROMON displays a screen similar to the following:
Transaction Control:

Usr    Coord     Crd-task
2      sports    42453
.
.
.
Note: If you run PROMON against a database where no limbo transaction has occurred,
PROMON does not display any field information on the Transaction Control
screen.
Take note of any limbo transactions. A transaction is in limbo if yes is displayed in the
Limbo field. For each limbo transaction, write down the transaction number of the
transaction in the coordinator database, shown in the Crd-task field.
4. For each limbo transaction, run PROMON against the coordinator database to determine
whether the coordinator committed the transaction.
5.
From the PROMON main menu, choose C (2PC Coordinator Information). PROMON
displays a screen similar to the following:
MONITOR Release 10
Database: /users/sports1
Q. Quit
Enter the transaction number you want to find out if committed:
Note: If the coordinator database is shut down and you cannot run PROMON against it,
you must use the 2PHASE COMMIT qualifier of PROUTIL to determine whether
it committed the transaction.
6.
Enter the transaction number that you recorded from the Crd-task field in Step 3, and
press RETURN. PROMON displays a message that tells you whether the transaction
committed.
Note: To commit transactions on a database that is shut down, you must use the 2PHASE
RECOVER qualifier of PROUTIL.
7.
Run PROMON against the database where the limbo transaction occurred to commit or
abort each limbo transaction.
8.
From the PROMON main menu, choose L (Resolve 2PC Limbo Transactions). The
following menu appears:
1  Abort a Limbo Transaction
2  Commit a Limbo Transaction
Q  Quit

Enter choice>
9.
To abort the transaction, choose 1 (Abort a Limbo Transaction). PROMON prompts you
to enter the user number of the transaction you want to abort. Enter the user number, then
press RETURN.
Repeat Step 4 through Step 9 for all the limbo transactions. After you commit or abort all of the
limbo transactions, they are resolved.
1. Try to start a database session with PROSERVE, PRO, or PROUTIL. If the session starts
successfully, no limbo transactions have occurred on the database. If limbo transactions
occurred, the session fails to start, and output similar to the following is displayed or
written to the event log file for the database:
13:27:05 SRV ... (the output identifies the name of the coordinator database, the
transaction number in the current database, and the transaction number in the
coordinator database)
Capture the following information for all the listed limbo transactions:

The transaction number on the current database (that is, the database where you tried
to start the session)

The name of the coordinator database

The transaction number on the coordinator database
Once you have this information, you must consult the coordinator database to determine
whether it committed or aborted the transaction.
2. Enter the following command against the coordinator database to determine if the
coordinator committed or aborted the limbo transaction:

proutil db-name -C 2phase commit tr-number

where db-name specifies the coordinator database, and tr-number specifies the number
of the transaction to check. Specify the number of the transaction on the coordinator
database.
If the coordinator committed the transaction, PROUTIL displays a message saying that the
transaction committed.
If the coordinator database committed the transaction, you must also commit the
transaction on the database where the limbo transaction occurred. If the coordinator did
not commit the transaction, you must abort the transaction on the database where the limbo
transaction occurred.
3. Commit or abort the limbo transactions, depending on whether the coordinator committed
or aborted the transaction in Step 2.
Use the PROUTIL 2PHASE RECOVER utility to commit or abort transactions for a
database. Before you enter this command, determine whether you will commit or abort
each transaction; you must either commit or abort all limbo transactions to complete this
command:

proutil db-name -C 2phase recover

When you run this command against a database with limbo transactions, PROUTIL
prompts you, for each limbo transaction, to indicate whether to commit it:
4.
If you respond yes, PROUTIL commits the transaction. If you respond no, PROUTIL
aborts the transaction.
PROUTIL displays this message for all of the limbo transactions that exist in the database.
After you commit or abort all of the limbo transactions, they are resolved.
This message does not necessarily mean that a transaction failed. Occasionally, a transaction
commits properly, but a network communication failure blocks the server's message
verifying that it committed. When you see this message, or any similar message, the database
administrator must determine whether a limbo transaction occurred, then resolve the limbo
transaction.
To resolve limbo transactions, you complete the transactions from the point where they were
interrupted by the hardware or software failure. If the coordinator committed the transactions,
you must commit the transactions. If the coordinator did not commit the transactions, you must
abort the transactions.
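The commit-follows-the-coordinator rule above can be sketched as a tiny decision function. This is an illustrative Python sketch only; the function name and its argument are hypothetical and are not part of any OpenEdge utility:

```python
def resolve_limbo(coordinator_committed: bool) -> str:
    """Resolve a limbo transaction by following the coordinator's decision.

    If the coordinator committed the transaction, it must also be committed
    on the database where it is in limbo; otherwise it must be aborted there.
    """
    return "commit" if coordinator_committed else "abort"

# The local database never decides on its own; it always follows the
# coordinator's recorded outcome.
print(resolve_limbo(True))   # commit
print(resolve_limbo(False))  # abort
```

The point of the sketch is that the interrupted transaction is completed, never re-decided: the coordinator's outcome is the only input.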
This message indicates that limbo transactions must be resolved. Consult the log file for a record
of the limbo transactions.
Scenario 3: You are on a client machine and it fails
Suppose a hardware or software failure occurs on a running client machine, or a user
inadvertently powers off a machine while the database is running. A message indicating that a
limbo transaction occurred cannot be displayed, since the client machine is down. In this
situation, use the PROMON utility against the server to determine whether any limbo
transactions occurred. If so, resolve them.
After connecting, you try to run a distributed transaction. While running this procedure, the
client process is halted by a system error, and error messages appear indicating that a limbo
transaction might have occurred. You must determine whether a limbo transaction did occur,
then resolve it.
Transaction Control:

Usr  Name  Trans  Login Time  R-comm?  Limbo?  Crd?  Coord  Crd-task
If PROMON fails to run against sports1, the server has also crashed, and you must use
PROUTIL to determine whether any limbo transactions occurred.
After determining that no limbo transactions occurred on sports1, perform the same steps
against sports2. This time, the following screen appears, indicating that a limbo transaction has
occurred:
Transaction Control:

Usr  Name  Trans  Login Time      R-comm?  Limbo?  Crd?  Coord    Crd-task
 15  paul   755   04/01/02 14:19  yes      yes     no    sports1     61061
  .
  .
  .

RETURN - repeat, U - continue uninterrupted, Q - quit
Write down the coordinator's transaction number (indicated in the Crd-task field). The Coord
field indicates that sports1 is the coordinator database for this transaction. Therefore, you must
again run PROMON against sports1. This time, choose C (2PC Coordinator Information). The
following screen appears, where you enter the transaction number 61061:
Transaction Control:

Usr  PID   Time of Login            User ID  TTY          Coord    Crd-task
 15  3308  Fri Apr 5 14:19:45 2002  paul     mach1 ttyp1  sports1     61061
Type 15 (the user number indicated on the previous screen). The PROMON utility commits the
transaction on sports2 and displays a message confirming the commit.
Since there are no more limbo transactions, the situation is resolved and no further action is
required.
It is possible for records to be locked by a JTA transaction with no user associated with the
lock.
After-image and before-image files: Additional notes are written to support JTA
transactions and table lock acquisition. Expect AI and BI files to grow by at least 30
percent when JTA is enabled.

Lock Table (-L): JTA transactions may hold locks for a longer period of time
than local transactions.

Transaction Table: The number of rows in the transaction table is increased by the
maximum number of JTA transactions. The maximum number is controlled by the
-maxxids startup parameter.

Xid Table: An additional table for storing JTA transaction information. The size of the
table is determined by the maximum number of JTA transactions allowed, and is
controlled by the -maxxids startup parameter.

Crash recovery: Crash recovery processing executes every time a database is started,
whether in single-user or multi-user mode. A JTA-enabled database must perform crash
recovery in multi-user mode. Attempting to perform crash recovery in single-user mode
results in an error if any JTA transactions are in a prepared state at startup.
Your database must be offline when you enable it for JTA transactions. Enabling your database
for JTA transactions disables after-imaging; you must re-enable after-imaging after enabling the
database for JTA transactions. For complete syntax details, see the PROUTIL ENABLEJTA
qualifier section on page 20-45.
1. Determine whether one or more unresolved JTA transactions exist against a database by
starting the PROMON database monitor. Enter the following command:
promon db-name
When you enter the PROMON utility, the main menu appears:

1.   User Control
2.   Locking and Waiting Statistics
3.   Block Access
4.   Record Locking Table
5.   Activity
6.   Shared Resources
7.   Database Status
8.   Shut Down Database
R&D. Advanced Options
T.   2PC Transactions Control
L.   Resolve 2PC Limbo Transactions
C.   2PC Coordinator Information
J.   Resolve JTA Transactions
M.   Modify Defaults
Q.   Quit

2. Choose option J (Resolve JTA Transactions). PROMON displays a screen similar to the
following:
1. Display all JTA Transactions
2. Commit a JTA Transaction
3. Rollback a JTA Transaction
Q. Quit
3. Choose 1 (Display all JTA Transactions). PROMON displays a screen similar to the
following:
Usr  JTA State     XID
  5  JTA Prepared  4a982a20-49b7-11da-8cd6-0800200c9a66
 16  JTA Active    0
Take note of the Tran Id value for any outstanding transactions. You need this
information to resolve the transaction.
4.
For each outstanding JTA transaction, determine if you are going to commit or rollback
the transaction.
5.
If you are going to commit the transaction, select 2 from the Resolve JTA Transactions
menu. If you are going to roll back the transaction, select 3. You are prompted to enter the
transaction ID value you noted in Step 3, and then to confirm your decision. The
transaction commit or rollback is logged in the database log file.
Caution: Manually committing or rolling back a JTA transaction can compromise your
database's referential integrity.
Repeat Step 4 and Step 5 for all the outstanding JTA transactions. After you commit or roll back
all of the transactions, they are resolved.
Part III
Maintaining and Monitoring Your Database
Chapter 13, Managing Performance
Chapter 14, Maintaining Database Structure
Chapter 15, Dumping and Loading
Chapter 16, Logged Data
13
Managing Performance
The potential for improving the performance of your OpenEdge RDBMS depends on your
system. Some options might not be available on your hardware or operating system platform.
This chapter discusses options for managing database performance, as described in the
following sections:
Memory usage
Database fragmentation
Index use
Performance is diminished if the database cannot use system resources efficiently. Performance
bottlenecks occur when a resource performs inadequately (or is overloaded) and prevents other
resources from accomplishing work. The key to improving performance is determining which
resource is creating a bottleneck. Once you understand your resource limitations, you can take
steps to eliminate bottlenecks.
Performance management is a continual process of measuring and adjusting resource use.
Because system resources depend on each other in complex relationships, you might fix one
problem only to create another. You should measure resource use regularly and adjust as
required.
To effectively manage performance, you must have solid knowledge of your system, users, and
applications. A system that is properly tuned has sufficient capacity to perform its workload.
Applications running on your system should not compete with the database for system
resources. Because system and application performance can vary greatly depending on the
configuration, use the information in this chapter as a guideline and make adjustments as
required for your configuration.
PROMON utility
The OpenEdge Monitor (PROMON) utility helps you monitor database activity and
performance. Chapter 19, PROMON Utility, documents the main PROMON options. In
addition, PROMON provides advanced options (called R&D options) for in-depth monitoring
of database activity and performance.
CPU usage
Record locking
Memory usage
CPU usage
To use your system to its full potential, the CPU should be busy most of the time. An idle CPU
or unproductive CPU processing can indicate a bottleneck. Use operating system utilities to
monitor CPU usage.
If performance is inadequate and your CPU is idle, the CPU might be waiting for another
resource. Identify the bottleneck and eliminate it so that the CPU can process work efficiently.
Use PROMON to monitor database activity.
Disk I/O is a common bottleneck. For more information, see the Disk I/O section on
page 13-5.
Symmetric multi-processing (SMP) systems use a spin lock mechanism to give processes
exclusive access to data structures in shared memory. Spin locks ensure that only one process
can use the structure at a time, but that all processes can get access to these structures quickly
when they have to. However, if tuned incorrectly, SMP CPUs might spend time processing the
spin locks unnecessarily instead of performing useful work.

The spin lock algorithm works as follows: When a process requires a shared-memory resource,
it attempts to acquire the resource's latch. When a process acquires the latch, it has exclusive
access to the resource. All other attempts to acquire the latch fail until the holding process gives
up the latch. When another process requires access to the resource, it attempts to acquire the
latch. If it cannot acquire the resource's latch because another process is holding it, the second
process continues the attempt. This iterative process is called spinning. If a process fails to
acquire a latch after a specified number of spins, the process pauses, or takes a nap, before trying
again. If a process repeatedly fails to acquire a latch, the length of its nap is gradually increased.

You can set the Spin Lock Retries (-spin) parameter to specify how many times to test a lock
before napping.
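The spin-and-nap algorithm described above can be sketched as follows. This is an illustrative Python sketch only; the function and parameter names are invented, with spin_limit playing the role of the -spin parameter:

```python
import itertools
import time

def acquire_latch(try_lock, spin_limit=1000, base_nap=0.001, max_nap=0.1):
    """Sketch of a spin lock with back-off naps.

    try_lock is any callable that returns True once the latch is acquired.
    The process spins up to spin_limit times, then naps; repeated failures
    gradually lengthen the nap.
    """
    nap = base_nap
    while True:
        for _ in range(spin_limit):   # spin: repeatedly test the latch
            if try_lock():
                return
        time.sleep(nap)               # nap before trying again
        nap = min(nap * 2, max_nap)   # gradually lengthen the nap

# Usage: a simulated latch that becomes free on the third test.
attempts = itertools.count(1)
acquire_latch(lambda: next(attempts) >= 3, spin_limit=2, base_nap=0.0001)
```

Setting spin_limit very low makes the process nap almost immediately, which is the trade-off the -spin parameter controls: spinning burns CPU but avoids the latency of sleeping and waking.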
To use a system of semaphores and queues to control locking instead, set -spin to zero (0).
After startup, use the PROMON R&D Adjust Latch Options entry under Administrative
Functions to change the spin mechanism.
Disk I/O
Because reading and writing data to disk is a relatively slow operation, disk I/O is a common
database performance bottleneck. The database engine performs three primary types of I/O
operations:
Database I/O

Before-image I/O

After-image I/O
If performance monitoring indicates that I/O resources are overloaded, try the techniques in the
following sections to better balance disk I/O.
The best way to reduce disk I/O bottlenecks is to spread I/O across several physical disks,
allowing multiple disk accesses to occur concurrently. You can extend files across many disk
volumes or file systems.
Database I/O
Database I/O occurs when the database engine reads and writes blocks containing records to and
from disk into memory. To minimize database disk I/O, the database engine tries to keep a block
in memory after it reads the block the first time. The next time the engine needs that block, it
can access it from memory rather than reading it from disk.
The following sections describe ways to eliminate database I/O bottlenecks.
Storage areas
Storage areas are the largest physical unit of a database. Storage areas consist of one or more
extents that are either operating system files, or some other operating system level device that
is addressed randomly. A storage area is a distinct address space, and any physical address
stored inside the area is generally stored relative to the beginning of the storage area.
Storage areas give you physical control over the location of specific database objects. You can
place each database object in its own storage area or place many database objects in a single
storage area. Storage areas can contain database objects of one type or of many types. For
example, to achieve load balancing, you can place a particularly active table in a separate
storage area, then place the most active index for that table in its own storage area. Then, in a
third storage area, place all the remaining tables and indexes. You cannot split a table or index
across storage areas.
However, you can improve performance by moving tables and indexes to an application data
storage area on a faster disk, while the database remains online. For a description of how to
move tables and indexes while the database remains online, see Chapter 14, Maintaining
Database Structure.
Database buffers
A database buffer is a temporary storage area in memory used to hold a copy of a database
block. When the database engine reads a database record, it stores the block that contains that
record in a database buffer. Database buffers are grouped in an area of memory called the buffer
pool. Figure 13-1 illustrates database disk I/O.
Figure 13-1: Database I/O (the figure shows a record retrieved from the database into a
buffer in the buffer pool in memory, manipulated there, and then written back to the
database)
1. When a process needs to read a database record, it requests access to the record.

2. The database engine searches the buffer pool for the requested record.

3. If the block that holds the record is already stored in a buffer, the engine reads the record
from the buffer. This is called a buffer hit. When tuned correctly, the engine should
achieve a buffer hit most of the time.

4. If the record is not found in any buffer, the engine must read the record from disk into a
buffer. If an empty buffer is available, the engine reads the record into that buffer.

5. If no empty buffer is available, the engine must replace another buffer to make room for it.

6. If the block that will be replaced has been modified, the engine must write the block to disk
to save the changes. This is known as an eviction. While the eviction takes place, the
process that requested the record in Step 1 must wait. For this reason, performance is
improved if empty buffers are always available. See the How the database engine writes
modified buffers section on page 13-8 for detailed steps.
Figure 13-2: Reading a record through the buffer pool (the flowchart shows: request record
access; search the buffer pool; a hit reads the record from memory; a miss reads the record
into an unmodified buffer if one is available; otherwise a modified buffer's contents are
first saved to disk)
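The six steps above can be sketched as a toy buffer pool. This is illustrative Python only; the real engine's data structures and locking are far more involved:

```python
class BufferPool:
    """Toy model of the buffer-read steps described in the text.

    Each buffer is a dict with 'block', 'modified', and 'locked' flags.
    """
    def __init__(self, size):
        self.buffers = [{"block": None, "modified": False, "locked": False}
                        for _ in range(size)]
        self.disk_writes = 0

    def fetch(self, block):
        # Steps 2-3: search the pool; a hit avoids disk I/O entirely.
        for buf in self.buffers:
            if buf["block"] == block:
                return "buffer hit"
        # Step 4: prefer an unlocked, unmodified (or empty) buffer.
        victim = next((b for b in self.buffers
                       if not b["locked"] and not b["modified"]), None)
        if victim is None:
            # Steps 5-6: evict an unlocked, modified buffer, writing its
            # contents to disk first (the requester waits for this write).
            victim = next(b for b in self.buffers
                          if not b["locked"] and b["modified"])
            self.disk_writes += 1
        victim.update(block=block, modified=False)
        return "read from disk"

pool = BufferPool(2)
print(pool.fetch("A"))  # read from disk
print(pool.fetch("A"))  # buffer hit
```

The eviction branch is the expensive path: it adds a synchronous disk write before the requested block can even be read, which is exactly the wait the text says APWs are designed to avoid.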
How the database engine writes modified buffers
When a process requires access to a database block that is not in the buffer pool, the database
engine must replace another buffer to make room for it. The server searches for a buffer to
replace.
The ideal replacement candidate is a buffer that is unlocked and unmodified. Replacing an
unmodified buffer requires only one step: writing the new contents into the buffer. If a buffer
contains modified data, it must first be evicted before it can be replaced. Evicting the buffer
requires two steps: writing the buffer's contents to disk, then writing new contents into the
buffer. It is therefore slower and requires more overhead, as shown in Figure 13-3.
Figure 13-3: Evicting buffers (replacing an unmodified buffer takes a single step, placing
the new data in the buffer; replacing a modified buffer takes two steps, first writing the
buffer's contents to disk and then placing the new data in the buffer)
When searching for a replacement candidate, the server searches a maximum of ten buffers. If
the server fails to find an unlocked, unmodified buffer, the server evicts the first unlocked,
modified buffer that it finds.
Monitoring database buffer activity
A buffer hit occurs when the database engine locates a record in the buffer pool and does not
have to read the record from disk. See the Database buffers section on page 136 for an
explanation of buffer hits and how they improve performance by reducing overhead. When
tuned correctly, the engine should achieve a buffer hit most of the time.
Figure 13-4: Monitoring buffer activity (the PROMON Activity display includes a Buffer
Hits field)
To prevent one application from monopolizing the buffer pool, you can request that some
number of buffers in the buffer pool be private read-only buffers. Private read-only buffers do
not participate in the LRU replacement algorithm of the general shared buffer pool.

Applications that read many records in a short time, such as applications that generate reports
or lists, should use private read-only buffers. Private read-only buffers prevent such applications
from quickly consuming all the public buffers and depriving other users of them. When an
application is using private read-only buffers, updates are still performed correctly using the
public buffers. Therefore, an application performing many read operations but only a modest
number of updates might also benefit from using private read-only buffers.
When a sequential reader is using private read-only buffers and needs a buffer to perform a read
operation, and the buffer is already in the private read-only buffer pool, the database engine
marks the buffer as most recently used (MRU) and uses it. If the buffer is not already in the
private read-only buffer pool, the sequential reader takes a buffer from the LRU chain and puts
it in the private read-only buffer pool. If the sequential reader has exhausted its quota of private
read-only buffers, a private read-only buffer is replaced. The sequential reader maintains a list
or chain of all its private buffers and uses a private LRU replacement mechanism identical to
the public shared buffer pool's LRU replacement algorithm.

All users, regular and sequential, have access to all buffers in the buffer pool (public or private).
If a regular user needs a block found in a private buffer pool, the buffer is removed from the
sequential reader's list of private buffers and is put back into the LRU chain as the most recently
used buffer. In addition, if a sequential read user needs to update a private read-only buffer, it
is removed from the sequential reader's private buffer pool and put into the general shared
buffer pool as most recently used.
Sequential reads use an index and require that index blocks be available in memory because they
are used repeatedly. Therefore, you want to request enough private read-only buffers to hold all
of the index blocks needed to retrieve a record. To determine how many private read-only
buffers to set, count the number of tables that you read and determine the indexes you use. Then,
determine the number of levels in the B-tree (balanced tree) of each index and add 1 (for the
record blocks). For example, request at least five private read-only buffers if you have a report
that reads the Customer table using the Cust-Name index, and the Cust-Name index has four
B-tree levels.
If you do not know the number of levels in your index, you can generally request six private
read-only buffers and get a good result. If you perform a join and are reading from two tables
simultaneously, request 12. If the system is unable to allocate the requested number of private
read-only buffers, a message is written to the database log.
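The sizing rule above, one buffer per B-tree level plus one for the record blocks, per index, can be expressed as a small helper. This is a hypothetical Python illustration, not an OpenEdge API:

```python
def private_buffers_needed(btree_levels_per_index):
    """Rule of thumb from the text: for each index the report reads
    through, request one private buffer per B-tree level, plus 1 for
    the record blocks."""
    return sum(levels + 1 for levels in btree_levels_per_index)

# A report reading the Customer table via the Cust-Name index,
# which has four B-tree levels: 4 + 1 = 5 buffers.
print(private_buffers_needed([4]))      # 5

# A join reading two tables, assuming six buffers per unknown index
# (the text's fallback of five levels plus one): 12 buffers.
print(private_buffers_needed([5, 5]))   # 12
```

The computed value would then be passed as the -Bp startup parameter; remember that the total across all users is capped at 25 percent of -B.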
You can request a number of private read-only buffers using the Private Buffers (-Bp) startup
parameter. When you use the -Bp startup parameter, the request remains active for the entire
session unless it is changed or disabled by an application. Each user of private read-only buffers
reduces the number of public buffers (-B).
Note: The total number of private read-only buffers for all simultaneous users is limited to
25 percent of the total blocks in database buffers. This value is set by the -B startup
parameter. See Chapter 18, Database Startup Parameters, for information on setting
-B.
You can also turn private read-only buffers on and off from within an application by issuing
an SQL statement.

Asynchronous page writers (APWs) improve database performance because:

They ensure that a supply of empty buffers is available so the database engine does not
have to wait for database buffers to be written to disk.
They reduce the number of buffers that the engine must examine before writing a modified
database buffer to disk. To keep the most active buffers in memory, the engine writes the
least recently used buffers to disk; the engine must search buffers to determine which one
is least recently used.
They reduce overhead associated with checkpointing because fewer modified buffers have
to be written to disk when a checkpoint occurs.
You must manually start APWs. You can start and stop APWs at any time without shutting
down the database. See Chapter 3, Starting Up and Shutting Down, for instructions on
starting and stopping an APW.
A database can have zero, one, or more APWs running simultaneously. The optimal number is
highly dependent on your application and environment. Start two APWs and monitor their
performance with PROMON. If there are buffers being flushed at checkpoints, add an additional
APW and recheck. Applications that perform fewer changes to a database require fewer APWs.
Note:
APWs are self-tuning. Once you determine how many APWs to run, you do not have to adjust
any startup parameters specific to APWs. However, you might want to increase the BI cluster
size to allow them to perform at an optimal rate. The PROUTIL TRUNCATE BI qualifier lets
you create a BI cluster of a specific size. For more information, see Chapter 20, PROUTIL
Utility.
APWs continually write modified buffers to disk, making it more likely the server will find an
unmodified buffer without having to wait. To find modified buffers, an APW scans the Block
Table (BKTBL) chain. The BKTBL chain is a linked list of BKTBL structures, each associated
with a database buffer. Each BKTBL structure contains a flag indicating whether the associated
buffer is modified. When an APW finds a modified buffer, it immediately writes the buffer to
disk. Figure 13-5 illustrates how an APW scans the BKTBL chain.
Figure 13-5: APW scanning the BKTBL chain (the APW walks the chain of buffer
structures, writing each modified buffer to disk and skipping unmodified buffers)
The APW scans in cycles. After completing a cycle, the APW goes to sleep. When the APW
begins its next scanning cycle, it picks up where it left off. For example, if the APW scanned
buffers 1 to 10 during its first cycle, it would start at buffer 11 to begin its next cycle.
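The resumable scan cycle can be sketched as a toy model. The names here are invented for illustration; the real APW operates on shared-memory structures:

```python
class ApwScanner:
    """Sketch of the cyclic BKTBL scan described above: each cycle
    resumes where the previous one left off."""
    def __init__(self, buffers, cycle_len):
        self.buffers = buffers        # list of {"modified": bool} entries
        self.cycle_len = cycle_len
        self.pos = 0                  # where the next cycle starts
        self.written = []             # indexes written to disk so far

    def scan_cycle(self):
        for _ in range(self.cycle_len):
            buf = self.buffers[self.pos]
            if buf["modified"]:
                self.written.append(self.pos)  # write modified buffer to disk
                buf["modified"] = False
            self.pos = (self.pos + 1) % len(self.buffers)
        # After a cycle the APW sleeps; the next cycle picks up at self.pos.

# Usage: 20 buffers, every third one modified; two cycles of 10 buffers.
bufs = [{"modified": i % 3 == 0} for i in range(20)]
apw = ApwScanner(bufs, cycle_len=10)
apw.scan_cycle()    # scans buffers 0-9
apw.scan_cycle()    # scans buffers 10-19, resuming where cycle 1 stopped
print(apw.written)  # [0, 3, 6, 9, 12, 15, 18]
```

Resuming at the saved position is what guarantees the whole chain is eventually covered even though each cycle only touches part of it.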
When the database engine writes modified buffers to disk, it replaces the buffers in a
least-to-most-recently-used order. This is beneficial because you are less likely to need older
data.
Figure 13-6: LRU chain (buffers are linked from the least recently used buffer at the head
of the chain to the most recently used buffer at the tail; whenever a process accesses a
database buffer, OpenEdge must lock and update the LRU anchor, moving the last
accessed buffer to the tail of the chain)
Since all processes must lock the LRU anchor whenever they have to access a buffer, long
buffer replacement searches create contention for all processes accessing the database buffer
pool. This can have a debilitating effect on performance, especially on heavily loaded systems.
APWs reduce contention for the LRU anchor by periodically clearing out modified buffers.
When buffer replacement is required, the database engine can find an unmodified buffer
quickly.
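The LRU chain behavior can be sketched with a simple model. This is illustrative Python; in the real engine the LRU anchor is a latch-protected structure in shared memory, and the contention comes from every access needing that latch:

```python
from collections import OrderedDict

class LruChain:
    """Toy LRU chain: head is the least recently used buffer, tail the
    most recently used. Every access updates the single shared anchor."""
    def __init__(self):
        self.chain = OrderedDict()

    def access(self, buf):
        # In the real engine this whole operation happens under the
        # LRU anchor latch, serializing all buffer accesses.
        if buf in self.chain:
            self.chain.move_to_end(buf)   # move to the MRU tail
        else:
            self.chain[buf] = True

    def least_recently_used(self):
        return next(iter(self.chain))     # head of the chain

lru = LruChain()
for b in ("A", "B", "C"):
    lru.access(b)
lru.access("A")                           # A becomes most recently used
print(lru.least_recently_used())          # B
```

Because every access serializes on the anchor, anything that lengthens replacement searches (such as many modified buffers) lengthens the time the anchor is held, which is the contention APWs relieve.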
A third way that APWs improve performance is by minimizing the overhead associated with
before-image checkpointing.
The before-image file is divided into clusters. A checkpoint occurs when a BI cluster becomes
full. When a cluster becomes full, the database engine reuses the cluster if the information stored
in it is no longer required. By reusing clusters, the engine minimizes the amount of disk space
required for the BI file.
Checkpoints ensure that clusters can be reused and that the database can be recovered in a
reasonable amount of time. During a checkpoint, the engine writes all modified database buffers
associated with the current cluster to disk. This is a substantial overhead, especially if you have
large BI clusters and a large buffer pool. APWs minimize this overhead by periodically writing
modified buffers to disk. When a checkpoint occurs, fewer buffers must be written.
Monitoring APWs
The PROMON R&D option Page Writers Activity display shows statistics about APWs
running on your system. Figure 13-7 shows a sample display.

01/25/00 16:29

                         Total   Per Min   Per Sec   Per Tx
Total DB writes              3         0      0.00     0.00
APW DB writes                0         0      0.00     0.00
  scan writes                0         0      0.00     0.00
  APW queue writes           0         0      0.00     0.00
  ckp queue writes           0         0      0.00     0.00
  scan cycles                0         0      0.00     0.00
  buffers scanned            0         0      0.00     0.00
  bfs checkpointed         173         0      0.11     0.14
Checkpoints              82110         0      5.22     6.79
Marked at checkpoint         0         0      0.00     0.00
Flushed at checkpoint        0         0      0.00     0.00

Number of APWs:

Figure 13-7: PROMON Page Writers Activity display

Note: Nonzero numbers in the Flushed at checkpoint row indicate that the APWs were
unable to write buffers fast enough to prevent a memory flush. Increase the number of
APWs and/or increase the BI cluster size to eliminate the flushes.
The following startup parameters control the base and range of table and index statistics
collected: -basetable n, -tablerange n, -baseindex n, and -indexrange n. To consume less
memory, set the -indexrange and -tablerange parameters to smaller values.
Before-image I/O
Before-imaging is always enabled to let the database engine recover transactions if the system
fails. This mechanism is extremely important for database reliability, but it creates a significant
amount of I/O that can affect performance. In addition, before-image I/O is usually the first and
most likely cause of I/O bottlenecks. The engine must always record a change in the BI file
before it can record a change in the database and after-image files. If BI activity creates an I/O
bottleneck, all other database activities are affected.
You can reduce the I/O impact of before-imaging by:

Running a before-image writer (BIW) on systems with an Enterprise database license

Increasing the BI cluster size

Increasing the BI block size

Delaying BI writes
Monitoring BI activity
Use operating system utilities to monitor the amount of I/O activity on the disk where the BI
files reside. Use the PROMON utility to monitor specific BI activity. Use the R&D option BI
Log Activity. Figure 13-8 shows a sample display.

01/25/00 11:36:56

Activity: BI Log
04/12/00 13:56 to 04/13/00 11:23 (21 hrs 27 min)

                      Total   Per Min   Per Sec   Per Tx
Total BI writes         131         0      0.08     0.10
BIW BI writes           127         0      0.08     0.10
Records written        3630         0      2.31     3.00
Bytes written        129487         0     82.42   107.19
Total BI Reads           13         0      0.00     0.01
Records read              0         0      0.00     0.00
Bytes read                0         0      0.00     0.00
Clusters closed           0         0      0.00     0.00
Busy buffer waits         0         0      0.00     0.00
Empty buffer waits        0         0      0.00     0.00
Log force waits           0         0      0.00     0.00
Partial Writes            4         0      0.00     0.00

Figure 13-8: PROMON BI Log Activity display
High number of partial writes. A partial write occurs when the database engine must write
data to the BI file before the BI buffer is full. This can happen if:
An after-image writer (AIW) runs ahead of the BIW. Because BI notes must be
flushed before the AI notes can be written, the AIW writes the BI buffer before it is
full so it can perform the AI write.
The Suppress BI File Write (-Mf) parameter's timer expires before the buffer is filled.
1. Use the PROSHUT command or the PROMON Shutdown a Database option to shut
down the database.

2. Enter the following command:

proutil db-name -C truncate bi -bi size

For size, specify the new cluster size in kilobytes. The number must be a multiple of 16
in the range 16 to 262128 (16K to 256MB). The default cluster size is 512K. Cluster sizes
from 512 to 16384 are common.

You can also change the BI block size with this command. You might want to do so at this
time. For more information, see the Increasing the BI block size section on page 13-20.
Increasing the number of BI clusters
When you create a new database or truncate an existing database, the database engine, by
default, creates four BI clusters, each of which is 512K. As the engine fills a cluster, the cluster
is checkpointed, and the engine writes to the next cluster on the chain. Figure 13-9 illustrates
the default BI clusters.
Figure 13-9: BI clusters at startup (clusters 1 through 4 form a ring)
A second figure shows the BI cluster ring after growth: Cluster 5 has been added, one cluster
is marked Available, and one cluster still holds Active Transactions.
The BI file is truncated:

Automatically by the database engine when you start after-imaging (RFUTIL AIMAGE
BEGIN)

Automatically by the database engine when you perform an index rebuild (PROUTIL
IDXBUILD)
To increase the number of BI clusters, use the PROUTIL BIGROW qualifier:

proutil db-name -C bigrow n

For n, specify the number of BI clusters that you want to create for the specified database.
Increasing the BI block size
The database engine reads and writes information to the BI file in blocks. Increasing the size of
these blocks allows the engine to read and write more data at one time. This can reduce I/O rates
on disks where the BI files are located.
The default BI block size (8K) is sufficient for applications with low transaction rates. However,
if performance monitoring indicates that BI writes are a performance bottleneck and your
platform's I/O subsystem can take advantage of larger writes, increasing the BI block size might
improve performance.
To change the BI block size:
1.
Use the PROSHUT command or the PROMON Shutdown a Database option to shut
down the database.
2.
For size, specify the new BI block size in kilobytes. Valid values are 0, 1, 2, 4, 8, and 16.
If you have a single AI file and after-imaging is enabled when you enter this command,
you must use the After-image Filename (-a) parameter to specify the AI filename.
You can also change the BI cluster size with this command. You might want to do so at
this time. For more information, see the Increasing the BI cluster size section on
page 13-17.
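Step 2 uses the PROUTIL TRUNCATE BI qualifier with the -biblocksize parameter; a sketch with illustrative names and values:

```shell
# Set a 16K BI block size
proutil mydb -C truncate bi -biblocksize 16
# With a single AI file and after-imaging enabled, name the AI file too:
# proutil mydb -C truncate bi -biblocksize 16 -a mydb.a1
```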
For detailed information on this command, see Chapter 20, PROUTIL Utility.
Delaying BI writes
When the Delayed BI File Write (-Mf) startup parameter is set to zero, use the Group Commit
technique to increase performance. This technique assumes that for the benefit of overall
performance, each individual transaction can take slightly longer. For example, when a
transaction begins to commit and spools its end note to the BI buffer, it waits briefly until
one of two things happens: the buffer fills and is written to disk, or a few other transactions
complete and store their end notes in the BI buffer so that a single synchronous write commits
all the transactions. Use the Group Delay (-groupdelay) startup parameter to set the amount of
time (milliseconds) the transaction waits.
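As a sketch, assuming a database named mydb, the two parameters might be combined at startup like this (the 10-millisecond delay is illustrative):

```shell
# -Mf 0 with a 10-millisecond group-commit window
proserve mydb -Mf 0 -groupdelay 10
```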
If the Group Commit technique does not provide sufficient improvement, you can improve
performance on a busy system by delaying BI file writes with the Delayed BI File Write (-Mf)
startup parameter.
By default, the database engine writes the last BI block to disk at the end of each transaction.
This write guarantees that the completed transaction is recorded permanently in the database.
On a system with little update activity, this extra BI write is very important and adds no
performance overhead. On a busy system, however, the BI write is less important (the BI block
will be written to disk very soon anyway) and might incur a significant performance penalty.
Suppressing the last BI write does not reduce database integrity. However, if there is a
system failure, the last few completed transactions can be lost (never actually written
to the BI file).
For more detailed information on the -Mf parameter, see Chapter 18, Database Startup
Parameters.
Setting a BI threshold
When an application performs large schema updates or large transactions, the BI clusters can
grow in excess of 2GB. If a crash occurs during such an operation, the recovery process might
require several times the amount of disk space as the BI log was using at the time of the crash.
Often this space is not available, leaving the database in an unusable state.
Using the Recovery Log Threshold (-bithold) startup parameter sets the maximum size to
which BI files can grow. Once the threshold is reached, the database performs an emergency
shutdown. This mechanism ensures that there will be enough disk space to perform database
recovery. All messages associated with the threshold are logged in the database log (.lg) file.
These messages include:
A message that a database shutdown is occurring because the threshold has been reached
The recommended range is to set -bithold between three and one hundred percent (3-100%) of
the largest possible recovery log file size, rounded to the nearest cluster boundary. If the
threshold is set above 1000MB, the database engine issues a warning message to the display and
the database log (.lg) file. The system will check the total amount of BI clusters in use each time
a new cluster is marked as used. If No Crash Protection (-i) is set, the recovery log threshold
parameter is set to the default (none) and cannot be overridden.
Enabling threshold stall
Often a database administrator does not want the database to perform an emergency shutdown
when the Recovery Log Threshold limit is reached. The Threshold Stall (-bistall) startup
parameter quiets the database when the recovery log threshold is reached. Instead of an
emergency shutdown, the database stalls forward processing until the database administrator
intervenes. This provides the database administrator the options of shutting down the database,
making more disk space available, and increasing the threshold amount. A message is added to
the database log (.lg) file stating that the threshold stall is enabled.
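As a sketch, both parameters might be combined at server startup (the database name and the 500MB threshold are illustrative):

```shell
# Stall instead of shutting down when the recovery log reaches 500MB
proserve mydb -bithold 500 -bistall
```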
Using PROQUIET to adjust the BI threshold
You can adjust the value of the threshold by providing a valid threshold value for the
PROQUIET command on systems with an Enterprise database license. The value can be
increased above the current value or reduced to a value of one cluster larger than the recovery
log file size at the time the PROQUIET command is issued.
To adjust the BI threshold:
1.
Use the PROQUIET command to enable a database quiet processing point, where db-name
is the name of the database for which you want to adjust the BI threshold.
Note: For more information on, and complete syntax for, the PROQUIET command, see
Chapter 17, Startup and Shutdown Commands.
During a database quiet processing point, all file write activity to the database is stopped.
Any processes that attempt to start a transaction while the quiet point is enabled must wait
until you disable the database quiet processing point.
2.
Use the PROQUIET command with the bithreshold option to adjust the threshold, where:
db-name
Specifies the name of the database for which you want to adjust the BI threshold.
n
Specifies the new value for the threshold.
For more information on, and the complete syntax for, PROQUIET, see Chapter 17,
Startup and Shutdown Commands.
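A sketch of the full sequence; the database name mydb and the 750MB threshold are illustrative, and the exact PROQUIET syntax is in Chapter 17:

```shell
# Enable a quiet point, raise the BI threshold, then resume processing
proquiet mydb enable
proquiet mydb bithreshold 750
proquiet mydb disable
```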
After-image I/O
After-imaging is an optional recovery mechanism that lets you recover data and transactions if
a disk fails. AI files must be kept on separate disks from the database and BI files, so
after-imaging I/O activity does not contribute to I/O activity on the disks where BI and database
files are stored. However, after-imaging creates a significant amount of I/O that can affect
performance. You can reduce the I/O impact of after-imaging by using the techniques
described in the following sections.
A PROMON Activity display for the AI log resembles the following:

01/25/00  11:36:56   Activity: AI Log
01/12/00 13:56 to 01/13/00 11:23 (21 hrs 27 min)

                      Total   Per Min   Per Sec   Per Tx
Total AI writes         131         0      0.08     0.10
AIW AI writes           127         0      0.08     0.10
Records written        3630         0      2.31     3.00
Bytes written        129487         0     82.42   107.19
Busy buffer waits        13         0      0.00     0.01
Buffer not avail          0         0      0.00     0.00
Partial Writes            0         0      0.00     0.00
Log force waits           0         0      0.00     0.00
You can run only one AIW process per database at a time. You must manually start the AIW,
but you can start and stop an AIW at any time without shutting down the database. See
Chapter 3, Starting Up and Shutting Down, for instructions on starting and stopping an AIW.
Increasing the -aibufs startup parameter increases the number of buffers in the after-image
buffer pool, which increases the availability of empty buffers to client and server processes. Set
the -aibufs parameter to 1.5 times the value of the Before-image Buffers (-bibufs) parameter.
(For information on setting the -bibufs parameter, see the Providing more BI buffers section
on page 13-17.) Increasing -aibufs has no effect if the AIW is not running.
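The 1.5x guideline can be computed directly; a shell sketch, with an illustrative -bibufs value:

```shell
# Guideline: -aibufs = 1.5 x -bibufs (integer arithmetic)
BIBUFS=20
AIBUFS=$(( BIBUFS * 3 / 2 ))
echo "proserve mydb -bibufs $BIBUFS -aibufs $AIBUFS"
```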
Increasing the AI block size
As with before-imaging, the database engine reads and writes information to the AI file in
blocks. Increasing the size of AI blocks lets the engine read and write more AI data at one time.
This can reduce I/O rates on disks where the AI files are located. In general, the default AI
block size (8K) is sufficient for systems with low transaction rates. However, if performance
monitoring indicates that AI writes are a performance bottleneck and your platform's I/O
subsystem can take advantage of larger writes, increasing the AI block size might improve
performance. A larger AI block size might also improve performance for roll-forward recovery
processing.
To change the AI block size:
1.
Use the PROSHUT command or the PROMON Shutdown a Database option to shut
down the database.
2.
Disable after-imaging with the RFUTIL AIMAGE END qualifier. For more specific
information on this command, see the description of the RFUTIL utility
AIMAGE END qualifier in Chapter 22, RFUTIL Utility.
3.
Truncate the BI file to bring the database and BI files up to date and eliminate any need
for database recovery. To do this, enter the following command:
proutil db-name -C truncate bi [ -bi size ] [ -biblocksize size ]
Typically, if you change the AI block size, you should also change the BI block size. If
you have not already, you might want to use this command to do so. For more information
on the BI block size, see the Increasing the BI block size section on page 13-20.
4.
Change the AI block size with the RFUTIL AIMAGE TRUNCATE qualifier:

rfutil db-name -C aimage truncate -aiblocksize size [ -a aifilename ]
For size, specify the size of the AI read and write block in kilobytes. The minimum value
allowed is the size of the database block. Valid values are 0, 1, 2, 4, 8, and 16. If you
specify 0, RFUTIL uses the default size (8K) for your operating system platform.
5.
Perform a full backup of the database.
6.
Enable after-imaging with the RFUTIL AIMAGE BEGIN qualifier:

rfutil db-name -C aimage begin { buffered | unbuffered } -a ai-name
For more specific information on this command, see Chapter 22, RFUTIL Utility.
7.
Restart the database and resume normal processing.
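Putting the steps together, a sketch of the whole procedure; the database name mydb, the AI file name mydb.a1, and the 16K sizes are all illustrative:

```shell
proshut mydb -by                                      # 1. shut down
rfutil mydb -C aimage end                             # 2. disable after-imaging
proutil mydb -C truncate bi -biblocksize 16           # 3. truncate BI, new BI block size
rfutil mydb -C aimage truncate -aiblocksize 16 -a mydb.a1   # 4. new AI block size
probkup mydb mydb.bak                                 # 5. full backup
rfutil mydb -C aimage begin buffered -a mydb.a1       # 6. re-enable after-imaging
proserve mydb                                         # 7. restart
```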
Direct I/O
The database engine can use an I/O technique that forces blocks to be written directly from the
buffer pool to disk. This optional technique prevents writes to disk from being deferred by the
operating system's buffer manager.
In general, use Direct I/O only if you are experiencing memory shortages. In many cases the
normal buffered I/O will provide better performance. Test the performance impact before
implementing Direct I/O on a production database.
To use this feature, specify the Direct I/O (-directio) startup parameter. If you use the
-directio startup parameter, you might need to add additional APWs to compensate for the fact
that with Direct I/O, each database write requires more processing time from the APWs.
Memory usage
Many of the techniques for improving server performance involve using memory to avoid disk
I/O whenever possible. In general, you spend memory to improve performance. However, if the
amount of memory on your system is limited, you can overload memory resources, causing your
system to page. Paging can affect performance more severely than a reasonable amount of disk
I/O. You must determine the point where memory use and disk I/O is balanced to provide
optimal performance. In other words, you must budget the amount of memory you can afford
to spend to improve performance.
You can monitor memory usage with the following PROMON options:
Activity - Shows the amount of shared memory used by the database and the number of
shared-memory segments
Shared Resources Status (an R&D option) - Shows the amount of shared memory
allocated
Shared-memory Segments Status (an R&D option) - Shows the ID number, size, and
amount used for each shared-memory segment
For detailed information on these options, see the PROMON R&D Advanced Options section
on page 19-25.
Table 13-2: Startup parameters that affect memory usage (startup parameter and suggested use)
The database server cannot create a shared memory segment larger than the operating
system maximum shared memory segment size. If the value specified for -shmsegsize is
larger than the operating system maximum, the operating system maximum will be used.
Not all platforms specify a maximum shared memory segment size; for example,
Windows and AIX do not. For many of the other supported UNIX platforms, the
maximum shared memory segment size is determined by the kernel parameter SHMMAX.
The database server cannot create more shared memory segments than the maximum
allowed by the operating system. This value is not tunable on all systems, but for many of
the supported UNIX platforms, the kernel parameter for this maximum is SHMSEG.
The database server cannot create shared memory segments that exceed the maximum
addressable memory of a system. For 32-bit platforms, the theoretical maximum is 4 GB,
but in practice, the maximum is smaller, as other aspects of the process (heap, stack, code,
etc.) consume part of the available memory. For 64-bit platforms, physical system
limitations keep the practical maximum well below any theoretical computational
maximum. If the amount of shared memory requested exceeds the capacity of the system,
the server will not start.
See your operating system documentation for specific information on shared memory segment
settings.
Processes
The following functions run as processes:
Brokers
Servers
Clients
The user table contains one entry per process. Use the Number of Users (-n) parameter to
specify the number of users.
On UNIX, the NPROC parameter limits the total number of active processes on the system and is
commonly set between 50 and 200. The MAXUP parameter limits the number of concurrent
processes that can be started by a single user ID, and it is commonly set between 15 and 25.
Also, if more than one user logs in with the same user ID, MAXUP can be exceeded quickly. You
see the following error message when the process limit is exceeded:
If you see this message repeatedly, you should reconfigure your system kernel.
Semaphores
On single-processor systems, semaphores are used to synchronize the activities of server and
self-service client processes that are connected to a database. By default, each database has an
array of semaphores, one for each user or server. Each process uses its semaphore when it must
wait for a shared resource. Semaphores are not used for single-user sessions or for client
sessions connecting to a remote database on a server system.
(Diagram: one client/server process holds a lock in shared memory while another waits on its
semaphore; when the lock is released, the semaphore is incremented and the waiting process
is notified.)
Table 13-3 lists the UNIX kernel parameters that control the number and size of the semaphore
sets.

Table 13-3: UNIX kernel parameters that affect semaphores

SEMMNI - Maximum number of semaphore identifiers (sets) allowed for the system.

SEMMSL - Maximum number of semaphores per set. Recommended setting:
(Max-local-users-on-any-databases + Max-#servers-on-any-databases + 4).
If you set this value too low, the database engine might generate error
1093 or 1130.

SEMMNS - Total number of semaphores allowed for the system.

SEMMNU - Maximum number of semaphore undo structures allowed for the system.
When you install the OpenEdge RDBMS, you might have to increase the values of these
parameters. If you are running other software that uses semaphores, take into account the
combined requirements. See your system documentation for information on how to change
these parameters.
The amount of kernel memory required for semaphores is relatively small, so setting the limits
higher than your current needs probably will not affect performance.
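The SEMMSL guideline can be checked with a quick computation; the user and server counts here are illustrative:

```shell
# SEMMSL >= max local users + max servers + 4
MAX_LOCAL_USERS=100
MAX_SERVERS=10
SEMMSL=$(( MAX_LOCAL_USERS + MAX_SERVERS + 4 ))
echo "set SEMMSL to at least $SEMMSL"
```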
The PROMON R&D Shared Resources option displays the number of semaphores used. When
you start the broker process, a message specifies the number of semaphores still available. If the
number of database users grows large, the database engine might exceed the maximum number
of semaphores allowed, specified by the SEMMNS parameter. If this happens, you must
reconfigure the system's kernel to increase the semaphore limit. You can reduce semaphore use
only by lowering the values of the Number of Users (-n) and/or Maximum Servers (-Mn) startup
parameters.
Allocating semaphores
By default, the database engine uses one semaphore set for all the semaphores needed by the
database. When more than 1000 users connect to a single database, there might be high
contention for the semaphore set. Using multiple semaphore sets helps alleviate this contention
and improve performance with high user counts. The broker startup parameter, Semaphore Sets
(-semsets), allows you to change the number of semaphore sets available to the OpenEdge
broker.
The broker uses two groups of semaphores, Login and User. The Login semaphore is used
during connection to the database. The system allocates one User semaphore for every user
specified by the Number of Users (-n) startup parameter. User semaphores are allocated
across the remaining semaphore sets.
In this example, the broker uses three semaphore sets, one for the Login semaphore, one for five
of the User semaphores, and one for the remaining five User semaphores:
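A startup sketch that produces the three-set arrangement described above (the database name is illustrative):

```shell
# 1 Login set + 2 sets covering the 10 User semaphores (5 each)
proserve mydb -n 10 -semsets 3
```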
Spin locks
On multi-processor systems, the database engine uses a spin lock algorithm to control access to
memory. The spin lock algorithm works as follows: When a process needs a memory resource,
it attempts to acquire the resource's latch. If it cannot acquire the resource's latch, it repeats the
attempt. This iterative process is called spinning. If a process fails to acquire a latch after a
specified number of spins, the process pauses, or takes a nap, before trying again. If a process
repeatedly fails to acquire a latch, the length of its nap is gradually increased. You can set the
Spin Lock Retries (-spin) parameter to specify how many times to test a lock before napping.
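A sketch of setting the parameter at startup; the retry count is illustrative and should be tuned by measurement:

```shell
# Retry a busy latch 2000 times before napping
proserve mydb -spin 2000
```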
File descriptors
A file descriptor is an identifier assigned to a file when it is opened. There is a system limit on
the number of file descriptors. Each database process (clients, remote client servers, and the
broker) might use 15 or more file descriptors. Therefore, set the system's file descriptor limit at
approximately 15 times the maximum number of processes, or higher. Allow approximately 10
file descriptors for the operating system as well.
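The sizing rule works out as, for example (the process count is illustrative):

```shell
# ~15 descriptors per database process, plus ~10 for the operating system
MAX_PROCESSES=120
FD_LIMIT=$(( 15 * MAX_PROCESSES + 10 ))
echo "file descriptor limit should be at least $FD_LIMIT"
```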
Database fragmentation
Over time, as records are deleted from a database and new records are added, gaps can occur on
the disk where the data is stored. This fragmentation can cause inefficient disk space usage and
poor performance with sequential reads. You can eliminate fragmentation by dumping and
reloading the database. You can manage fragmentation by changing create and toss limits.
You can run PROUTIL with the TABANALYS qualifier while the database is in use; however,
PROUTIL generates only approximate information.
In the TABANALYS display, check the following fields:
Count - The total number of record fragments found for each table in the database.
Fragments Factor - The degree of record fragmentation for each table. If the value is
2.0 or greater, dumping and reloading will improve disk space usage and performance. If
the value is less than 1.5, dumping and reloading is unnecessary.
Scatter Factor - The degree of distance between records in the table. The optimal value
for this field varies from database to database. To determine the optimal value for your
database, run the TABANALYS qualifier on a freshly loaded database.
The totals line of a TABANALYS display resembles the following:

Totals:   Records 4190   Bytes 431.1K   Min 6   Max 1156   Mean 105
          Count 4263     Fragments Factor 1.0   Scatter Factor 2.5
Managing fragmentation
Records are allocated to a block or blocks according to algorithms that aim to maximize storage
utilization, and minimize fragmentation. When allocating space for a new record, the database
engine first attempts to find space in the blocks on the RM (record management) chain. The RM
chain contains partially filled blocks. If a block is not found on the RM chain, the record will be
inserted into a block from the free chain. The free chain contains empty blocks. Changing create
and toss limits determines where new records are inserted and the amount of unused space a
block must contain when on the RM chain. Unused space is intentionally left within a block to
allow for future expansion of the record. The goal is that the free space is sufficient to handle
anticipated growth and that there will not be a need to split the record across blocks.
The limits delineating the free space thresholds are defined as:
Create Limit - The minimum amount of free space in bytes required in a block after a
new record is inserted. The create limit must be greater than 32, and less than the block
size minus 128. For a database with a 1K block size, the default create limit is 75. For all
other block sizes, the default create limit is 150.
Toss Limit - The minimum amount of free space in bytes required in a block for the
block to remain on the RM chain. Once a block contains fewer empty bytes than the toss
limit, it is removed from the RM chain, and any remaining space is dedicated to record
growth within the block. For a database with a 1K block size, the default toss limit is 150.
For all other block sizes, the default toss limit is 300.
The create and toss limits are changed using PROUTIL. For Type I storage areas, create and
toss limits are changed on a per area basis. For Type II storage areas, limits are changed on a
per area or object basis. The general syntax for changing a create or toss limit with PROUTIL
is as follows:
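A sketch, assuming the SETAREACREATELIMIT and SETTABLETOSSLIMIT qualifier names; the database, area, and table names plus the limit values are illustrative:

```shell
# Raise the create limit for an area, and the toss limit for one table
proutil mydb -C setareacreatelimit "Customer Area" 300
proutil mydb -C settabletosslimit customer 450
```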
Table 13-4 describes the PROUTIL qualifiers for changing create and toss limits.

Table 13-4: PROUTIL qualifiers for create and toss limits

Limit    Object   Qualifier
Create   Area     SETAREACREATELIMIT
         BLOB     SETBLOBCREATELIMIT
         Table    SETTABLECREATELIMIT
Toss     Area     SETAREATOSSLIMIT
         BLOB     SETBLOBTOSSLIMIT
         Table    SETTABLETOSSLIMIT
The current values for create and toss limits are displayed by using PROUTIL
DISPTOSSCREATELIMITS. The limits are displayed for a specified area. See the PROUTIL
DISPTOSSCREATELIMITS qualifier section on page 20-37 for the complete syntax.
Create and toss limits should only be changed when your database experiences high rates of
fragmentation or inefficient space utilization within blocks. Table 13-5 pairs each such
situation with one of four suggested solutions: increase the create limit, increase the toss
limit, decrease the create limit, or decrease the toss limit.
Increasing the create and toss limits addresses record fragmentation by allowing more space for
records to grow within a block before being continued in another block. Both limits identify an
amount of space to reserve for existing records to grow within a block. If the create limit check
fails, the new record will be inserted into a different block. It remains possible that a smaller
record can be inserted into the first block because failing the create limit check does not remove
it from the RM chain. If the toss limit check fails, when a record is expanded, the block is
removed from the RM chain so no new records can be inserted into it.
Decreasing the create and toss limits addresses problems with efficient block space utilization.
When blocks have large amounts of empty space and record growth is not anticipated, but new
records cannot be added because the limit checks are failing, consider decreasing the create
and toss limits. Decreasing the create limit increases the potential that a new record can be
added to an existing block, and decreasing the toss limit keeps blocks with less free space
on the RM chain.
Index use
As database blocks can become fragmented, index blocks can become under-utilized over time.
The optimal degree of index block utilization depends on the type of database access performed.
Retrieval-intensive applications generally perform better when the index blocks are close to full
since the database engine has to access fewer blocks to retrieve a record. The larger the index,
the greater the potential for improving performance by compacting the index. Update-intensive
applications, on the other hand, perform better with loosely packed indexes because there is
room to insert new keys without having to allocate more blocks. Index analysis provides the
utilization information you require to make decisions. Choose a balance between tightly packed
indexes and under-utilized indexes, depending on your data and applications, and use the Index
Compact (IDXCOMPACT) qualifier, described later in this chapter, to compact indexes.
The PROUTIL IDXANALYS qualifier displays:
The percent utilization within the index (that is, the degree of disk space efficiency)
A summary of indexes for the current database and the percentage of total index space
used by each index
Note:
You can run PROUTIL with the IDXANALYS qualifier while the database is in use;
however, PROUTIL generates only approximate information.
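A sketch of invoking the analysis (the database name is illustrative):

```shell
# Report index block utilization for each index
proutil mydb -C idxanalys
```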
The most important field in the IDXANALYS display is the % Util field. This field shows the
degree of consolidation of each index. If an index is several hundred blocks and your application
most frequently retrieves data, an index utilization of 85 percent or higher is optimal. There are
two ways to increase an index's utilization rate:
Compress the index with the database online or offline with the PROUTIL
IDXCOMPACT utility.
Rebuild and compress the index offline with the PROUTIL IDXBUILD utility.
The Levels field shows the number of reads PROUTIL performs in each index per entry. The
Blocks and Bytes fields show you the size of each index. The Factor field is based on the
utilization and size of the index; it is an indicator of when you should rebuild indexes.
Table 13-6 provides a description of the different ranges of values for the Factor field. When
you use the Factor field to decide whether to rebuild an index, consider the context of how the
particular index is used. For example, if an index is highly active, with continuous insertions
and deletions, its utilization rate varies greatly, and a rebuild is inadvisable. However, a static
index with a high factor value benefits from a rebuild.
Table 13-6: Factor values

Factor range   Description
1 to 2         The index is well utilized; rebuilding is unnecessary.
2 to 2.5       The index is moderately utilized and/or somewhat unbalanced.
               Consider rebuilding the index.
2.5 to 3       The index is less than 25 percent utilized and/or the index is very
               unbalanced. You should rebuild this index.
Compacting indexes
When space utilization of an index is reduced to 60 percent or less as indicated by the PROUTIL
IDXANALYS utility, use the PROUTIL IDXCOMPACT utility to perform index compaction
online. Performing index compaction increases space utilization of the index block to the
compacting percentage specified. For example:

proutil db-name -C idxcompact [owner-name.]table-name.index-name [n]
For the complete syntax description see Chapter 20, PROUTIL Utility.
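A concrete sketch, with the database, table, index, and target percentage all illustrative:

```shell
# Compact the cust-num index of the customer table to 90 percent utilization
proutil mydb -C idxcompact customer.cust-num 90
```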
Performing index compaction reduces the number of blocks in the B-tree and possibly the
number of B-tree levels, which improves query performance.
The index compacting utility operates in phases:
Phase 1 - If the index is a unique index, the delete chain is scanned and the index blocks
are cleaned up by removing deleted entries
Phase 2 - The nonleaf levels of the B-tree are compacted, starting at the root and working
toward the leaf level
The _UserStatus virtual system table displays the utility's progress. For more information, see
Chapter 14, Maintaining Database Structure.
Note:
Because index compacting is performed online, other users can use the index
simultaneously for read or write operations with no restrictions. Index compacting only
locks one to three index blocks at a time, for a short time. This allows full concurrency.
Rebuilding indexes
Use the IDXBUILD (Index Rebuild) qualifier of the PROUTIL utility to:
Repair corrupted indexes in the database (index corruption is normally signaled by error
messages)
Notes: When you run the Index Rebuild, the database must not be in use.
Perform a backup of your database immediately prior to running an Index Rebuild. Should
the Index Rebuild crash due to data corruption, the only method of recovery is a restore from
backup.
To run the IDXBUILD qualifier with PROUTIL, enter the following command:

proutil db-name -C idxbuild [ all |
    table [owner-name.]table-name |
    area area-name |
    schema schema-owner |
    activeindexes | inactiveindexes ]
    [ -T dir-name ] | [ -SS sort-file-directory-specification ]
    [ -TB blocksize ] [ -TM n ] [ -B n ] [ -SG n ]
    [ -threads n ] [ -threadnum n ]
For more information on each option, see the PROUTIL IDXBUILD qualifier section on
page 20-55.
When you enter this command without the all, table, area, or schema qualifiers, the following
menu appears:

(a/A) - Rebuild all the indexes
(s/S) - Rebuild only some of the indexes
(r/R) - Rebuild indexes in selected areas
(c/C) - Rebuild indexes for selected schema owners
(t/T) - Rebuild indexes in selected tables
(v/V) - Rebuild indexes by activation (active or inactive)
Use the All option to rebuild all indexes. Use the Some option to rebuild only specific indexes.
Use the By Area option to rebuild indexes specific to one or more areas. Use the By Schema
option to rebuild indexes owned by one or more schema owners. Use the By Table option to
rebuild indexes specific to one or more tables. Use the By Activation option to select active or
inactive indexes. After you enter a selection and you qualify those indexes you want to rebuild,
the utility asks whether you have enough disk space for index sorting. If you enter yes, the utility
sorts the indexes you are rebuilding, generating the indexes in order by their keys. This sorting
results in a faster index rebuild and better space use in the index blocks.
To estimate whether you have enough free space to sort the indexes or not, use the following
formulas:
If you rebuild all the indexes in your database, sorting the indexes requires up to 75 percent
of the total database size in free space.
If you rebuild an individual index, sorting that index requires as much as the following
amount of free space:
The index rebuild operates in three phases:

1.
The utility scans the database by area, clearing all index blocks that belong to the indexes
you are rebuilding and adding those blocks to the free block list.
2.
The utility scans the database by area and rebuilds all the index entries for every data
record. If you chose to sort the index, the utility writes the index entries to the sort file.
Otherwise, the utility writes the index entries to the appropriate index at this point.
3.
The utility sorts the index entries in the sort file into groups and enters those entries into
their respective entries in order, one index at a time, building a compacted index. This
phase only occurs if you chose to sort the indexes.
The Index Rebuild qualifier accomplishes most of its work without displaying messages, unless
it encounters an error condition.
For Enterprise database licenses, index rebuild is multi-threaded by default. You can specify the
maximum number of threads created using the -threadnum n parameter. If not specified, the
maximum number of threads created will equal the system's number of CPUs. The actual
number of threads created will not exceed the number of index groups in an area if this value is
smaller than the maximum. During a multi-threaded index rebuild, separate threads are assigned
the external merging of each index group during Phase 2. Once the main process has created all
the threads for Phase 2, it immediately begins building the index tree for Phase 3, enabling
Phase 3 to be executed in parallel with Phase 2. If an area has only one index to rebuild, the work
will be executed without the use of threads.
If you do not want your index rebuild to be multi-threaded, specify -threads 0. This directs the
index rebuild to execute in an unthreaded mode.
If the index rebuild is interrupted while rebuilding selected indexes, the list of selected indexes
is retained in a file named dbname.xb. This .xb file is used when the utility is restarted. You do
not have to enter the list of indexes manually if the .xb file exists.
Overcoming SRT size limitations
When you run the Index Rebuild utility and choose the Sort option, you might encounter space
limitations that can cause the utility to terminate. To overcome this limitation, simply create a
file that contains specifications for the directories and the amount of space per directory that you
want the SRT file to have access to during the Index Rebuild. The file that contains the
specifications must be a text file, have the same name as the database with an extension of .srt
(dbname.srt), and reside in the same directory as the .db file. In addition, the contents of the
file must follow these conventions:
List the directory and the amount of space that you want to allocate to the index rebuild
sort on separate lines.
The size that you specify in the dbname.srt directory specification is the maximum (in
1024 byte units) that the file can grow. Specifying 0 for any directory indicates that you
want to allow unlimited growth.
For threaded index rebuilds, spread the directories across as many devices as possible. In
threaded builds, each thread will use the next directory in the sort file, looping back to the
beginning of the list, if necessary. If multiple sort files are open on the same disk, you
could create significant I/O contention, reducing the performance gain of the threaded
rebuild.
Index use
For example, suppose you want to rebuild the index for the sports database, and you want the
speed sort to have access to 300K of space in the /user2/db1/first directory, 400K in the
/user3/junk directory, and unlimited space in the /user4/last directory. The sports.srt file
looks like this on UNIX:

300 /user2/db1/first/
400 /user3/junk/
0 /user4/last/

And like this on Windows:

300 d:\temp
400 e:\temp
0 f:\temp
The Index Rebuild utility accesses the files in the order in which they are listed in the
dbname.srt file. If you specify an amount of space that is not available, Index Rebuild
terminates when the disk is filled, and the next directory specification is not used. Thus, if a
disk has only 200K of space and the dbname.srt specifies 300K, Index Rebuild terminates when
the 200K is exhausted. For example, if /user2/db1/first above cannot provide 300K of space,
Index Rebuild never processes /user3/junk. In addition, if you specify a directory size of
0, any directories specified after it in the dbname.srt are not processed. For these reasons,
verify that the space you specify in the dbname.srt file is available before running the index
rebuild.
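The advance check can be scripted. The following is a hedged sketch, not part of OpenEdge: it assumes the two-column `size directory` line format shown in the example above, and the directory names passed in are placeholders.

```python
import shutil

def parse_srt(text):
    """Parse dbname.srt lines of the form '<size-in-KB> <directory>'.
    A size of 0 means unlimited growth."""
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        size_kb, directory = line.split(None, 1)
        entries.append((int(size_kb), directory))
    return entries

def insufficient(entries):
    """Return the entries whose directory has less free space than requested."""
    bad = []
    for size_kb, directory in entries:
        if size_kb == 0:  # 0 = unlimited growth; nothing to verify
            continue
        free_kb = shutil.disk_usage(directory).free // 1024
        if free_kb < size_kb:
            bad.append((size_kb, directory))
    return bad

print(parse_srt("300 /user2/db1/first/\n400 /user3/junk/\n0 /user4/last/"))
```

Running `insufficient` before the rebuild surfaces any directory that cannot honor its specification, which is exactly the condition that would otherwise terminate Index Rebuild mid-sort.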
The Index Rebuild utility opens the files for each of the directories before it actually starts the
sort process. As a result, a message such as the following is displayed for each file:

Temporary sort file at: pathname will use the available disk space.

The previous message occurs even if the .srt file was not found.
When the sort completes, a message is displayed for each file. In some cases the message
displays OK; this simply means that the sort took place completely in memory.
If Index Rebuild does not find a dbname.srt file, then by default, it uses the directory supplied
by either the -T parameter or the current working directory.
Managing Performance
Maximizing index rebuild performance
To speed up index rebuild operations, do the following:
Answer yes when prompted whether you have enough disk space for sorting.
Increase the Speed Sort (-TB) startup parameter to 24K. (If you are very short of memory,
use 16K or 8K.) This improves sort performance; however, it also uses more memory and
disk space.
Increase the Merge Number (-TM) startup parameter to 32 (unless memory is scarce).
Use the Sort Grouping (-SG) parameter. A large -SG value requires more memory
allocation and more file handles. To determine the amount of memory (in kilobytes)
needed for each index group, add 1 to the merge number (the value of -TM) and multiply
the sum by the speed sort block size (the value of -TB). Memory consumption for each
index group equals (-TM + 1) * -TB.
Change the Temporary Directory (-T) startup parameter to store the temporary files on
another disk.
The database engine uses the following algorithm to rebuild indexes for each record:
1. Read the index key fields and store them in the first available SRT file block.

2. Allocate additional SRT file blocks of the specified block size as required to hold all index
keys.

3. Sort the keys in each block, then merge the keys to produce a sorted file.
A similar technique is used to sort records when there is no index to satisfy a BY clause.
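The block sort-and-merge technique can be illustrated outside OpenEdge with a short sketch: keys are collected into fixed-size blocks, each block is sorted on its own, and the sorted blocks are then merged into one ordered stream. The block size and key values here are arbitrary illustrations, not OpenEdge internals.

```python
import heapq

def sort_and_merge(keys, block_size):
    """Split keys into blocks, sort each block independently,
    then merge the sorted blocks into a single ordered sequence."""
    blocks = [sorted(keys[i:i + block_size])
              for i in range(0, len(keys), block_size)]
    return list(heapq.merge(*blocks))

print(sort_and_merge([9, 3, 7, 1, 8, 2, 6], 3))  # → [1, 2, 3, 6, 7, 8, 9]
```

A larger block size means fewer blocks and therefore fewer streams to merge, which mirrors why a larger -TB value reduces per-block sort overhead.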
A larger block size can improve index rebuild performance considerably. A larger block size
means less SRT block allocation overhead and fewer quicksort operations on individual blocks.
You might have to run the application several times using different block size values to
determine the optimal value. If you experience extreme memory shortages when running an
OpenEdge session, try setting the block size to 1 to reduce memory consumption.
During index rebuild, try setting -TB to 31, if memory and disk space are available. If the index
rebuild fails, try successively smaller values. Remember, a larger value for -TB improves sort
performance but uses more memory. The -TB setting has a significant impact on the size of the
SRT temporary file. The SRT file size depends on the number of session compile files, and the
number and size of sort operations.
Memory usage depends on the number of sorts occurring simultaneously. The simultaneous
sorts are logically equivalent to nested FOR EACH statements. You can estimate memory usage as
follows, where M is the estimated memory usage:

M = sort-block-size * (number-of-simultaneous-sorts + Merge-Number (-TM) parameter)

Index rebuild always requires eight simultaneous sorts, so during an index rebuild with a 2K
sort block size and the default -TM of 5:

M = 2 * (8 + 5) = 26K
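Both estimates, the per-index-group figure from the -SG discussion above and the overall figure M, are simple arithmetic; this sketch reproduces them (the 2K sort block size and merge number 5 follow the sample calculation, and the -TM 32 / -TB 24 values follow the earlier tuning suggestions):

```python
def mem_per_index_group(tm, tb):
    """Memory (KB) needed per index group: (-TM + 1) * -TB."""
    return (tm + 1) * tb

def estimated_sort_memory(tb, simultaneous_sorts, tm):
    """M = sort-block-size * (number-of-simultaneous-sorts + -TM)."""
    return tb * (simultaneous_sorts + tm)

print(mem_per_index_group(32, 24))     # suggested -TM 32, -TB 24 from above
print(estimated_sort_memory(2, 8, 5))  # index rebuild example: 26 (KB)
```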
You must change the record data to eliminate duplicate keys to access all the data with this
index. Use another index on the table (if one exists) by specifying the useindex option:

[useindex [owner-name.]table-name.index-name] [recs n] [refresh t]

See the PROUTIL IDXACTIVATE qualifier section on page 20-53 for the details of the
command.
Prior to activating the index, IDXACTIVATE checks that there are no users with a schema
timestamp earlier than the schema timestamp of the index. If any such users are connected to
the database, IDXACTIVATE cannot proceed, and you are given the option of waiting or
cancelling the index activation. If you choose to wait, you must wait until all the users with an
earlier schema timestamp disconnect, or you can use PROSHUT to forcibly remove the
connections. You control the update of the status display of blocking users with the refresh
option; the number supplied to refresh indicates the number of seconds between displays of
blocking users.
When IDXACTIVATE activates and builds the index, it bundles records into transactions. By
default, 100 records are bundled into one transaction; you can alter that number with the recs
option, which specifies the number of records to bundle into one transaction.
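The effect of recs on transaction count is straightforward division. In this sketch the record count is an arbitrary illustration; only the default bundle size of 100 comes from the text above:

```python
import math

def transactions_needed(record_count, recs=100):
    """Number of transactions IDXACTIVATE would use when records
    are bundled 'recs' at a time (100 is the documented default)."""
    return math.ceil(record_count / recs)

print(transactions_needed(250_000))        # default bundling → 2500 transactions
print(transactions_needed(250_000, 1000))  # larger bundles → 250 transactions
```

Larger bundles mean fewer, longer transactions, which trades transaction overhead against how long each transaction holds its resources.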
The following example shows the output of IDXACTIVATE. In this example, the index
tst_table.inact3 is activated for the database doc_db; no optional parameters are specified, so
default values are used. The cycle of waiting for users to disconnect executes twice before all
the users with old timestamps are disconnected and the index is activated.
Once the IDXACTIVATE command completes, the index is active and all users can access it.
14
Maintaining Database Structure
Once you create and start a database, you must manage the database so that it operates
efficiently and meets the needs of users. In the following sections, this chapter describes
methods to manage the database structure and alter it as necessary to improve storage and
performance:
Area numbers
You specify the database whose storage information you want and the PROSTRCT
STATISTICS utility displays information about:
The primary database block size and the before-image and after-image block sizes
The total number of active blocks allocated for each data area
The total number of all blocks (active, data, free, empty, extent, and total blocks)
Running PROSTRCT STATISTICS against an online database gives you a snapshot of the
database state at that moment in time.
(Sample PROSTRCT STATISTICS output for service.db, listing the block sizes and the active,
data, free, empty, extent, and total block counts for each storage area, omitted.)
If a full backup has not yet been performed against service.db, the message would read NO
FULL BACKUP HAS BEEN DONE.
To update the structure description file with the current information stored in the database
control area, use the PROSTRCT LIST utility:

prostrct list db-name [structure-description-file]
In the command, db-name specifies the database whose structure description file you want to
update, and structure-description-file specifies the structure description file to create. If
you do not specify the structure description file, PROSTRCT LIST uses the base name of the
database and appends a .st extension. It replaces an existing file of the same name.
For example, to update the structure description file for /user/joe/service, enter the
following command:

prostrct list /user/joe/service
2. Create a new structure description file that contains only information about the areas you
want to add. For example:
# add.st
#
d "chris",128 . f 1024
d "chris" .
Note: To avoid overwriting the .st file for your existing database, the name of this .st
file must be different from the existing .st file for the database. For example,
name the new structure description file add.st.
3. Use the PROSTRCT utility with the ADD qualifier, specifying the .st file you created
in Step 2. For example:

prostrct add /user/joe/service add.st
PROSTRCT ADD adds the new extents or storage areas and extents to the existing
database control area, and outputs descriptive information, such as:
Formatting extents:
  size   area name   path name
  1024   chris       /user/joe/service_9.d1   00:00:00
  32     chris       /user/joe/service_9.d2   00:00:01

4. Generate an updated structure description file for your database with PROSTRCT LIST. For
example:

prostrct list /user/joe/service
After you modify the structure of your database (adding or removing extents), run
PROSTRCT LIST. PROSTRCT LIST automatically creates a structure description file for
your database if one does not exist, or overwrites an existing .st file of the same name to
reflect the changes you just made to the structure of your database.
In the following sample output of the PROSTRCT LIST utility, note the absolute pathname
(/usr1/example1.db) of the modified database:
You can only add transaction-log (TL) extents when the database is offline.
2. Create a new structure description file that contains only information about the areas you
want to add. For example:
# add.st
#
d "chris",128 .
Note: To avoid overwriting the .st file for your existing database, the name of this .st
file must be different from the existing .st file for the database. For example,
name the new structure description file add.st.
3. Use the PROSTRCT utility with the ADD qualifier, specifying the .st file you created in
Step 2. For example:

prostrct add /user/joe/service add.st
PROSTRCT ADD adds the new extents or storage areas and extents to the existing
database control area and outputs descriptive information, such as:
Formatting extents:
  size   area name   path name
  0      chris       /user/joe/service_9.d3   00:00:01

Generate an updated structure description file for your database with PROSTRCT LIST. For
example:

prostrct list /user/joe/service
After you modify the structure of your database (adding or removing extents), run
PROSTRCT LIST. PROSTRCT LIST automatically creates a structure description file for
your database if one does not exist, or overwrites an existing .st file of the same name to
reflect the changes you just made to the structure of your database.
You cannot have more than one instance of PROSTRCT ADDONLINE executing at a time.
All connected users must have sufficient privileges to access the newly created extents. If
currently connected users will not have sufficient privileges to access the new extents, you
may begin the ADDONLINE, but the users must be disconnected before the
ADDONLINE can complete.
Check the status of all connected users. If any connected users do not have sufficient
privileges to access the new extents, you are informed of the users and the risks of
proceeding. You are prompted to continue, as shown:
usr1 Usr
1234
5678
...
There exist connections to the database which will have
problems opening newly added extents.
Create the physical files associated with the extents. This is the longest step, but requires
no locks so connected users are not impacted.
Re-check the status of all connected users. You are prompted as follows:
usr1 Usr
1234
5678
...
There exist connections to the database which will have
problems opening newly added extents.
Do you wish to recheck user permissions and continue adding extents
online?
5. If adding an extent to an existing data area, convert the current variable-length extent to a
fixed-length extent.

6. Add the new extent and size information to the area in the database control area.
7.
1. Create a new structure description file that contains only information about the areas you
want to add. For example:
# FILE: add.st
#
# Add additional before-image extents
b . f 1024
b .
#
# Add additional extents to the Employee area
d "Employee":7,32;1 . f 4096
d "Employee":7,32;1 .
#
# Add after-image areas
a . f 512
a . f 512
a .
Note: To avoid overwriting the .st file for your existing database, the name of this .st
file must be different from the existing .st file for the database. For example,
name the new structure description file add.st.
2. Adding the -validate parameter directs PROSTRCT to check the syntax and contents of
your .st file without performing the add. For complete information on the use of the
-validate parameter, see the Validating structure description files section on
page 14-22.
3. Add the new areas and extents. Use the PROSTRCT utility with the ADDONLINE
qualifier, specifying the .st file you created in Step 1. For example:

prostrct addonline /user/joe/service add.st
PROSTRCT ADDONLINE adds the new extents or storage areas and extents to the
existing database control area, and outputs descriptive information, such as:
Formatting extents:
  size   area name              path name
  128    Primary Recovery Area  /user/joe/service.b3     00:00:02
  16     Primary Recovery Area  /user/joe/service.b4     00:00:02
  4096   Employee               /user/joe/service_7.d5   00:00:02
  32     Employee               /user/joe/service_7.d6   00:00:04
  64     After Image Area 10    /user/joe/service.a10    00:00:00
  64     After Image Area 11    /user/joe/service.a11    00:00:00
  16     After Image Area 12    /user/joe/service.a12    00:00:00
Enabling extents:
  size   area name
  128    Primary Recovery Area
  16     Primary Recovery Area
  4096   Employee
  32     Employee
  64     After Image Area 10
  64     After Image Area 11
  16     After Image Area 12
Generate an updated structure description file for your database with PROSTRCT LIST. For
example:

prostrct list /user/joe/service

After you modify the structure of your database (adding or removing extents), run
PROSTRCT LIST. PROSTRCT LIST automatically creates a structure description file for
your database if one does not exist, or overwrites an existing .st file of the same name to
reflect the changes you just made to the structure of your database.
Area numbers
An OpenEdge database can have area numbers up to 32,000. Assigning area numbers is optional
and the maximum can be controlled with a startup parameter. Area numbers are discussed in the
following sections:
Error checking
#
b . f 1024
b .
#
d "Schema Area":6,32;1 .
#
# (three user data areas, two extents each, defined here without
#  explicit area numbers)
A database created with PROSTRCT using this structure file as input creates the areas
sequentially.
Figure 14-1: Sequentially numbered areas. Areas 7, 8, and 9 are active (memory allocated
and used); the remaining areas, up to 32,000, are inactive (memory allocated, but unused).
Assigning specific area numbers to the user data areas in your structure file can improve
readability, but it prevents trimming the maximum amount of unused memory in your
area array. The following example shows a structure file with three data areas assigned area
numbers 1000, 2000, and 3000:
#
b . f 1024
b .
#
d "Schema Area":6,32;1 .
#
# (three user data areas, two extents each, assigned area numbers
#  1000, 2000, and 3000)
A database created with PROSTRCT using this structure file as input creates the areas as they
are defined.
Figure 14-2: Specifically assigned area numbers. Areas 7, 1000, 2000, and 3000 are active
(memory allocated and used); the remaining areas, up to 32,000, are inactive (memory
allocated, but unused).
If minimizing memory consumption is a priority, you can trim the unused memory for the areas
numbered 3001 to 32,000 with the -maxAreas startup parameter. However, the memory for the
unused areas in between the allocated areas cannot be trimmed. See the Trimming unused area
memory section on page 14-19 for more information.
For example, for a database whose highest area number is 2000, you can trim the area array by
starting the database with:

-maxAreas 2000

The corresponding structure file defines no area number higher than 2000:

#
b . f 1024
b .
#
d "Schema Area":6,32;64 .
#
# (user data area definitions, with area numbers up to 2000, follow)
Figure 14-3: Area array trimmed with -maxAreas 2000: area 1 (Control area), 2 (Reserved),
3 (Before-Image area), 4 (Reserved), 5 (Reserved), and 6 (Schema area) are for RDBMS use,
with the reserved areas held for future use; areas 7 through 2000 are available for user data.
Setting -maxAreas equal to your largest area number leaves no room for potential growth
beyond that area number. You can save unused memory, and provide room for growth, by
setting -maxAreas slightly higher than your maximum area number.
If, while your database is online, you need to add areas beyond the current maximum
area number specified by -maxAreas, you must shut down and restart your database. Your
options are:
Shut down your database and remove the -maxAreas parameter from the startup command.
This allows the maximum area number, 32,000.
Error checking
Agreement between the defined area numbers of the database and the -maxAreas startup
parameter is verified at two points:
Server startup
Server startup
If the -maxAreas parameter is included at server startup, the server verifies that the maximum
area number defined for the database is less than or equal to the parameter value. If -maxAreas n
is smaller than the maximum defined area number, the server will not start and will issue an
error message. The following example shows the error message created when -maxAreas 200
was specified on the server startup of a database with a maximum area number of 2000:
(Startup command and error message not reproduced.)

Validating structure description files

When run with the -validate parameter, PROSTRCT performs the following checks:
Verify that every line in the file is either blank, a comment, or a proper extent definition.
Verify that every non-blank line begins with a comment character (*, :, #) or an extent
type indicator (a, b, d, t).
Verify that if the extent definition includes area information, it begins with an area name
enclosed in double quotes ("").
Verify that if an area number, records per block, or blocks per cluster field is defined for
one extent, they are the same for all extents defined for that area.
Verify that if delimiters are included, they adhere to the following rules:
Verify that if the cluster size is specified, it is one of the following values: 0, 1, 8, 64, 512.
Verify that a path specification is included for each extent, and that any path with a space
in the name is enclosed in double quotes ("") and preceded by an exclamation point (!).
Verify that size information for each extent is either blank or includes a type (f or v) and
a size in 1K units.
Verify that size information of an extent, when specified, is a multiple of 16 times the
block size and larger than 32; for Type II areas, also verify the size is large enough to
contain at least one cluster. These sizes are automatically rounded up when the files are
created, but validation issues a warning that the rounding will occur.
Validation also verifies that for each extent defined, there is sufficient disk space available to
create the file.
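A few of the line-level checks above can be sketched as a standalone validator. This is illustrative only; the token handling and the subset of rules implemented are assumptions, not the actual PROSTRCT -validate implementation:

```python
VALID_CLUSTER_SIZES = {0, 1, 8, 64, 512}
COMMENT_CHARS = ("*", ":", "#")
EXTENT_TYPES = ("a", "b", "d", "t")

def check_line(line):
    """Return None if the line passes the sketched checks,
    otherwise a short description of the problem."""
    stripped = line.strip()
    if not stripped or stripped.startswith(COMMENT_CHARS):
        return None  # blank lines and comments are always valid
    token = stripped.split()[0]
    if token not in EXTENT_TYPES:
        return "line must begin with a comment character or a, b, d, or t"
    if ";" in stripped:  # ';' delimits the blocks-per-cluster field
        cluster = stripped.split(";")[1].split()[0].rstrip(".")
        if not cluster.isdigit() or int(cluster) not in VALID_CLUSTER_SIZES:
            return "cluster size must be one of 0, 1, 8, 64, 512"
    return None

print(check_line('d "Schema Area":6,32;1 .'))   # passes → None
print(check_line('x /user/joe/service_9.d1'))   # bad extent type → error
print(check_line('d "chris",128;7 . f 1024'))   # bad cluster size → error
```

Applying `check_line` to every line of an .st file before calling PROSTRCT gives an early, offline approximation of the syntax portion of validation.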
For information on the syntax of the structure description file, see the Creating a structure
description file section on page 1-3.
For all utilities, the directories where extents are specified must exist.
For PROSTRCT ADD and ADDONLINE, you cannot add transaction log extents when
two-phase commit is enabled.
If the extent to be removed is in the BI area, use the PROUTIL TRUNCATE BI utility to
truncate the primary recovery area to be removed. For example:

proutil db-name -C truncate bi

If the storage area to be removed is an application data area, use the PROUTIL
TRUNCATE AREA utility to truncate the application data area. For example:

proutil db-name -C truncate area area-name

Note: You must disable after-imaging before you can remove an AI extent. You must
also disable two-phase commit before you can remove a transaction-log (TL)
extent.

For more information about truncating areas, see the PROUTIL TRUNCATE AREA
qualifier section on page 20-90.
2. Use PROSTRCT REMOVE to remove extents from the storage area. For example:
Note: Use the area-name parameter only to remove application data extents. If the area
name contains a space, supply double-quotes around the area name. For example:
"test data."
You can remove one extent at a time. After you have removed all of the extents from a
storage area, PROSTRCT REMOVE removes the storage area and outputs descriptive
information such as:
solaris:100a$ prostrct
/user/joe/service_9.d3
solaris:100a$ prostrct
/user/joe/service_9.d2
solaris:100a$ prostrct
/user/joe/service_9.d1
3. Run PROSTRCT LIST after removing any areas. PROSTRCT LIST will overwrite your
existing .st file to reflect the changes made to the structure of your database.
Moving tables
Use the PROUTIL TABLEMOVE utility to move a table and its associated indexes from one
storage area to another while the database remains online. For example:

proutil db-name -C tablemove [owner-name.]table-name table-area [index-area]
Notes: For the complete syntax description, see Chapter 20, PROUTIL Utility.
The _UserStatus virtual system table displays the utility's progress. For more
information, see Chapter 25, Virtual System Tables.
If you omit the index-area parameter, the indexes associated with the table will not be moved.
Moving the records of a table from one area to another invalidates all the ROWIDs and indexes
of the table. Therefore, the indexes are rebuilt automatically by the utility whether you move
them or not. You can move the indexes to an application data area other than the one to which
you are moving the table. If you want to move only the indexes of a table to a separate
application data area, use the PROUTIL IDXMOVE utility.
Moving a table's indexes with the TABLEMOVE qualifier is more efficient than moving a table
separately and then moving the indexes with the IDXMOVE utility. Moving a table separately
from its indexes wastes more disk space and causes the indexes to be rebuilt twice, which also
takes longer.
Note:
While you can move tables online, no access to the table or its indexes is recommended
during the move. The utility acquires an EXCLUSIVE lock on the table while it is in
the process of moving. An application that reads the table with an explicit NO-LOCK
might be able to read records, but in some cases can get the wrong results, since the
table move operation makes many changes to the indexes. Run the utility during a
period when the system is relatively idle, or when users are doing work that does not
access the table.
Phase 1: The records are moved to the new area and a new primary key is built.

If you did not specify the index-area parameter, then the indexes are rebuilt in their
original area.

If you did specify the index-area parameter, then all the indexes are moved to the
new area, where they are rebuilt.

Phase 4: All the old indexes are removed and the _StorageObject records of the indexes
and the table are updated.
Moving indexes
Use the PROUTIL IDXMOVE utility to move an index from one application data area to
another while the database remains online. You might be able to improve performance by
moving indexes that are heavily used to an application data area on a faster disk. For example:
proutil db-name -C idxmove [owner-name.]indexname area-name

Note: For the complete syntax description, see Chapter 20, PROUTIL Utility.
Phase 1: The new index is being constructed in the new area. The old index remains in
the old area, and all users can continue to use the index for read operations.

Phase 2: The old index is being removed, and all the blocks of the old index are being
moved to the free block chain. For a large index this phase can take a significant amount
of time. During this phase, all operations on the index are blocked until the new index is
available; users accessing the index might experience a freeze in their application.
While you can move indexes online, no writes to the table or its indexes are allowed
during the move. The IDXMOVE utility acquires a SHARE lock on the table, which
blocks all attempts to modify records in the table. Run the utility during a period when
the system is relatively idle, or when users are doing work that does not access the
table.
Compacting indexes
When the DBANALYS utility indicates that space utilization of an index is reduced to 60
percent or less, use the PROUTIL IDXCOMPACT utility to perform index compaction online.
Performing index compaction increases space utilization of the index block to the compacting
percentage specified. For example:
proutil db-name -C idxcompact [owner-name.]table-name.index-name [n]

Note: For the complete syntax description, see Chapter 20, PROUTIL Utility.
Performing index compaction reduces the number of blocks in the B-tree and possibly the
number of B-tree levels, which improves query performance.
The index compacting utility operates in phases:
Phase 1: If the index is a unique index, the delete chain is scanned and the index blocks
are cleaned up by removing deleted entries.

Phase 2: The nonleaf levels of the B-tree are compacted, starting at the root and working
toward the leaf level.
Because index compacting is performed online, other users can use the index
simultaneously for read or write operation with no restrictions. Index compacting only
locks one to three index blocks at a time, for a short time. This allows full concurrency.
IDXFIX: Detects corrupt indexes and records with a missing or incorrect index.
For detailed descriptions of each VST, see Chapter 25, Virtual System Tables.
Example
Using the virtual system table mechanism, you can monitor the status and progress of several
database administration utilities.
The following example describes how to use an ABL routine to monitor all the index move
(PROUTIL IDXMOVE) processes being performed:
The following is an equivalent OpenEdge SQL statement to monitor all the index move
(PROUTIL IDXMOVE) processes being performed:
15
Dumping and Loading
Dumping and reloading data definitions and table contents is important for both application
development and database maintenance. This chapter details the different ways you can perform
a dump and load, in the following sections:
Bulk loading
Note:
For a complete description of dumping and loading SQL content, see the SQLDUMP
and SQLLOAD descriptions in Chapter 24, SQL Utilities.
When you dump a database, you must first dump the database or table definitions, and then
dump the table contents. The definitions and contents must be in separate files and cannot be
dumped in one step. You perform both procedures with the Data Administration tool if you are
using a graphical interface, or the Data Dictionary if you are using a character interface.
You can also dump and load the table contents with PROUTIL. This option dumps and loads
the data in binary format.
OpenEdge Data tools automatically attempt to disable triggers before performing the dump or
load. If you do not have Can-Dump and Can-Load privileges, the Data tool asks if you want to
dump or load the database without disabling the triggers. See Chapter 8, Maintaining
Security, for information about assigning privileges.
For information on manually dumping and loading database contents using ABL syntax, see the
entries for EXPORT and IMPORT in OpenEdge Development: ABL Reference.
When you reload data from one database to another, the reload must be to a target database
that has the same fields, arranged in the same logical order as the source database. The
schema for each database must be identical. Otherwise, you must write your own reload
procedure using such ABL statements as INPUT FROM, CREATE, and IMPORT.
If you define a database field with a data type of ROWID or RECID, then the ROWID
values in that field are dumped and reloaded as Unknown value (?). You must write your
own dump and reload procedures to accommodate ROWID fields.
If you do not have Can-Create and Can-Write privileges for a file, you can still dump the
data and data definitions for that file. You can also load the data definitions for the file, but
you cannot load the data for that file.
Note:
OpenEdge RDBMS table data can also be dumped and loaded in formats other than
those described in this chapter. For information about other file formats for dumping
and loading OpenEdge table data, see OpenEdge Development: ABL Handbook.
Dump the entire database, including all its tables and fields.
Whenever you run the dump utility to dump table definitions, the Data Dictionary or Data
Administration tool creates a data definitions (.df) file that contains definitions of tables, fields,
indexes, sequences, and auto-connect records, and all their characteristics. However, depending
on whether you choose to dump all tables or only selected tables when you dump the definitions,
the Data Dictionary or Data Administration tool might or might not write all definitions to the
.df file.
Table 15-1 shows the definitions that are dumped in each case.

Table 15-1: Definitions dumped

Definitions             All tables   Selected tables
Table definitions       Yes          Yes
Sequence definitions    Yes          No
Auto-connect records    Yes          No
Collation               No           No
_User                   No           No
If you dump individual tables, you must also dump the sequence definitions and auto-connect
records separately. For instructions, see the Dumping sequence definitions section on
page 15-7 and the Dumping auto-connect records section on page 15-8.
To dump ABL database or table definitions:

1. Access the Data Administration tool. The Data Administration main window appears.

2. Make sure you are using the database (the source) from which you want to dump the table
definitions.

3. Choose Admin→ Dump Data and Definitions→ Data Definitions (.df file). The Data
Administration tool lists all the tables defined for the database alphabetically.
4. Choose the table whose definitions you want to dump, or choose ALL.
If you choose ALL, the .df file will also contain the sequences and auto-connect record
definitions, but not the collation/conversion table. The Data Administration tool displays
a default name for the file that you can dump data definitions into (hidden tables are not
dumped). This default file is always the name of the table or database with a .df extension.
The Data Administration tool truncates table names to eight characters. When you dump
only one table, the table dump name becomes the default for its corresponding contents
dump file. For example, if you dump only the customer table, when you dump the table
contents, the default filename is customer.d. If you specify to dump all the tables, the
default name is db-name.d.
5. Accept this default or enter a different name and choose OK. The Data Administration tool
displays each object name as it writes its definition to the data definitions file.

You see each table name on the screen as the Data Administration tool writes its definition
to the .df file. After all of the table definitions are dumped, the Data Administration tool
displays a status message and prompts you to continue.

6.
Figure 15-1: Sample data definitions (.df) file, showing a field definition and an index
definition (additional field and index definitions omitted), followed by the markers that
indicate the end of the definitions and the end of the variables.
All definitions must belong with the most recent ADD DATABASE or CREATE DATABASE entry
in the .df file.
The trailer information contains the values used by the database definitions. If you edit the
definitions file or create one manually, be sure to specify these values correctly. Table 15-2
explains these values, which include the code page (codepage=codepage) and the character
count.
1. Access the appropriate Data tool (the Data Administration tool if you are using a graphical
interface or the Data Dictionary if you are using a character interface).
2. Select the source database as the current working database. The source database is the
database that has the schema that you want the definitions file to have.
3. Choose Admin→ Dump Data and Definitions→ Create Incremental .df File. The Data
tool lists all the connected databases.
4. If you have more than two databases connected, choose the target database. The target
database is the database to update. The Data tool prompts you for the file where you want
to write the differences.
5. Specify the file where you want to write the differences. The default filename is delta.df.
The Data tool displays the file, field, sequence, and index names as it compares the
databases.
6. After comparing the databases, the Data Administration tool or the Data Dictionary returns
you to the main window.
Note: If you use this option to create a .df file in conjunction with r-code files to update
schema changes, you must load the .df file and recompile before you can run the
new r-code. You must recompile because the Data tool reorders the indexes during
the dump and load procedure.
Access the appropriate Data tool (the Data Administration tool if you are using a graphical
interface or the Data Dictionary if you are using a character interface).
2.
Make sure that the database containing the sequences you want to dump or load is the
current working database.
3.
Choose Admin→ Dump Data and Definitions→ Sequence Definitions. The Data tool
prompts you for the file where you want to write the sequence definitions. The default
filename is _seqdefs.df.
4.
Specify the filename or use the default value. After dumping the sequence definitions, the
Data tool displays a status message and prompts you to continue.
1.
Access the appropriate Data tool (the Data Administration tool if you are using a graphical
interface or the Data Dictionary if you are using a character interface).
2.
Choose Admin→ Dump Data and Definitions→ Auto-connect Records Only. The Data
tool prompts you for the output file where you want to write the auto-connect records. The
default filename is _auto.df.
3.
Specify a new filename or accept the default. After dumping the auto-connect records, the
Data tool displays a status message and prompts you to continue.
Table contents
Sequence values
Note:
For a complete description of dumping and loading SQL content, see Chapter 24,
SQL Utilities.
The OpenEdge RDBMS provides two methods of dumping database contents: you can use the
PROUTIL utility to dump the data in binary format, or you can use the Data Administration tool
user interface to dump the data in text format. Dumping the data in binary format with
PROUTIL is generally faster.
proutil db-name -C dump [owner-name.]table-name directory [-index num]

In the syntax, db-name specifies the database from which you want to dump; owner-name
specifies the owner of the table containing the data you want to dump; table-name specifies
the name of the table containing the data you want to dump; directory specifies the name of
the target directory where the data will be dumped; and -index num specifies an index to use
to dump the table's contents.
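For example, the following command dumps the contents of the customer table from the
Sports2000 database; the target directory name is illustrative:

proutil sports2000 -C dump customer /usr1/dumpdir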
Expand the PROUTIL DUMP syntax for a threaded online dump with the following options:
proutil db-name -C dump [owner-name.]table-name directory [-thread n]
[-threadnum nthreads] [-dumplist dumpfile]
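For example, the following command sketches a threaded online dump of the order table; the
directory and thread count are illustrative, and -thread 1 is assumed to enable the
multi-threaded dump:

proutil sports2000 -C dump order /usr1/dumpdir -thread 1 -threadnum 4 -dumplist order.dmp_lst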
Each record in a binary dump file is stored with a header containing the record length and the
table number, followed by the binary record itself. The file header of the dump file contains
identifying information that appears in a fixed order, beginning with the version number and
including the record CRC and the section number.

To support the dump and load of binary large objects (BLOBs) and character large
objects (CLOBs), PROUTIL DUMP adds more items to the header of the binary dump
file.
proutil db-name -C dumpspecified [owner-name.]table-name.field-name
operator1 low-value [AND operator2 high-value] directory [-preferidx index-name]

In the DUMPSPECIFIED syntax:

db-name - Specifies the database from which you want to dump
owner-name - Specifies the owner of the table containing the data you want to dump
table-name - Specifies the table containing the data you want to dump
field-name - Specifies the field whose values determine which records are dumped
operator1 and operator2 - Specify the comparison to make when selecting
records. An operator can be one of five values: EQ (equal to), GT (greater than),
GE (greater than or equal to), LT (less than), or LE (less than or equal to). Use AND
to combine two comparisons on the same field
low-value and high-value - Specify the field values compared against when selecting records
directory - Specifies the target directory where the data is dumped
-preferidx index-name - Specifies the index to use when dumping the records

Examples
The following syntax dumps all order date values greater than 2/3/02 from the Sports2000
database:
Syntax
proutil sports2000 -C dumpspecified order.order_date GT 02-03-2002
The following syntax dumps all item prices less than $25.90 from the Sports2000 database:
Syntax
proutil sports2000 -C dumpspecified item.price LT 25.9
The following syntax dumps all back ordered items from the Sports2000 database into the
/inventory/warehouse1 directory:
Syntax
proutil sports2000 -C dumpspecified order_line.backorder EQ yes
/inventory/warehouse1
Access the appropriate Data tool (Data Administration tool if you are using a graphical
interface or the Data Dictionary if you are using a character interface).
2.
Make sure you are using the database (the source) where you want to dump the table data.
3.
Choose Admin→ Dump Data and Definitions→ Table Contents (.d files). The Data
tool lists all the tables defined for the database alphabetically.
4.
Mark the table contents you want to copy. You can mark tables with asterisks (*), then use
the Select Some and Deselect Some buttons to select or deselect groups of tables. If you
are using a character interface, use F5 and F6.
When you dump a single table, the Data tool displays a default name for the file that you
can dump the table contents into (hidden tables are not dumped). This default filename is
always the table's dump name with a .d extension. If you want to specify a
file other than the default file, type the name in the Output file: field.
When you dump table contents for more than one table, the Data tool prompts you for a
directory name in the Output Directory field. If you do not specify a directory, the Data
tool dumps the table contents files into the current directory. The Data tool names each
contents file with its corresponding table name.
Accept the default or enter a different name.
5.
If your database contains LOBs you want dumped, check the Include LOB: check box.
6.
If you checked the Include LOB: check box, specify the location of the LOBs in the LOB
Directory: field.
If you want to use character mapping, enter the character mapping, then choose OK. See
OpenEdge Development: Internationalizing Applications for information about character
mapping, PROTERMCAP, and national language support.
The Data tool displays each table name as it writes the contents to the table contents file.
After dumping the contents, the tool displays a status message and prompts you to
continue.
Note:
You must dump the sequence values separately. See the Dumping sequence values
with a Data tool section on page 15-16 for more information.
1 "USA" "Lift Line Skiing" "276 North Street" "" "Boston"
"MA" "02114" "Gloria Shepley" "(617) 450-0087" "HXM" 66700
42568 "Net30" 35 "This customer is on credit hold. Last
payment received marked ""Insufficient Funds""!"
.
.
.
84 "USA" "Spike's Volleyball" "34 Dudley St" "" "Genoa" "NV"
"89411" "Craig Eleazer" "(702) 272-9264" "KIK" 20400 18267
"Net30" 5 ""
.
PSC
filename=Customer
records=00000083
ldbname=junk
timestamp=2000/06/28-16:20:51
numformat=thousands-separator,fractional-separator
dateformat=mdy-1900
map=NO-MAP
codepage=ibm850
.
0000012998

Figure 15-2: Sample contents file

In the figure, the first period (.) on a line by itself indicates the end of the table contents,
and PSC (always specified) begins the trailer; the second period indicates the end of the
trailer variables, and the character count is always the last line.
The trailer information contains information about the source database and certain startup
parameters specified for the session in which the contents file was created. Certain variables are
included for informational purposes only; other variables are used to load the data correctly. If
you edit the contents (.d) file or create one manually, be sure to specify these variables
correctly.
filename=table-name - Table name
records=num-records (1) - Number of records in the file
ldbname=logical-db-name - Logical name of the source database
numformat=thousands-separator,fractional-separator (1) - Numeric format
dateformat=date-format (1) - Date format
map={MAP protermcap-entry | NO-MAP} (1) - Character mapping
codepage=codepage - Code page
Character count - Total number of characters in the file; always the last line
1. Information used when loading the file. If the value is specified in the trailer and is specified incorrectly, the data
will not load.
Access the appropriate Data tool (the Data Administration tool if you are using a graphical
interface or the Data Dictionary if you are using a character interface).
2.
Make sure that the database containing the sequences you want to dump or load is the
current working database.
3.
Choose Admin→ Dump Data and Definitions→ Sequence Values. The Data tool
prompts you for the file you want to write the sequence values to. The default filename is
_seqvals.d.
4.
Specify the filename or use the default value. After dumping the sequence values, the Data
tool displays a status message and prompts you to continue.
1.
Access the appropriate Data tool (the Data Administration tool if you are using a graphical
interface or the Data Dictionary if you are using a character interface).
2.
Make sure you are using the database from which you want to dump the table data.
3.
Choose Admin→ Dump Data and Definitions→ User Table Contents. The Data tool
prompts you for the file to write the user file contents to. The default filename is _user.d.
4.
Specify the filename or accept the default. After dumping the user file contents to the
specified file, the Data tool displays a status message and prompts you to continue.
Access the appropriate Data tool (the Data Administration tool if you are using a graphical
interface or the Data Dictionary if you are using a character interface).
2.
Make sure you are using the database (the source) where you want to dump the table data.
3.
Choose Admin→ Dump Data and Definitions→ Views. The Data tool prompts you for
the file that you want to write the view contents to. The default
filename is _view.d.
4.
Specify the filename or accept the default. After dumping the view file contents to the
specified file, the Data tool displays a status message and prompts you to continue.
Note:
For a complete description of dumping and loading SQL content, see Chapter 24,
SQL Utilities.
Access the appropriate Data tool (the Data Administration tool if you are using a graphical
interface or the Data Dictionary if you are using a character interface).
2.
Make sure you are using the database (the target) where you want to load the table
definitions.
3.
Choose Admin→ Load Data and Definitions→ Data Definitions (.df files). The Data
tool prompts you for the name of the file that contains the data definitions you want to load
into the current database. The default filename is the logical name of the current working
database, with a .df extension.
5.
Specify whether you want to stop the load as soon as the Data tool encounters a bad
definition statement. The Data tool displays each item as it loads the definitions for that
object. The Data tool displays a status message and prompts you to continue. If you choose
not to stop on errors, the load will continue with the next entry.
Note: Whatever you choose, if the Data tool encounters any error, it backs out any loaded
definitions.
The database now contains the table, field, index, or sequence, but none of the data.
1.
Make a copy of the database you want to update and save the original. The database should
not be empty.
2.
Connect to the database that includes the new, modified data definitions.
3.
Access the appropriate Data tool (the Data Administration tool if you are using a graphical
interface or the Data Dictionary if you are using a character interface).
4.
Choose Database→ Connect Database. The Database Connect dialog box appears.
5.
Enter the name of the database you want to connect to and choose OK. The Data tool
connects the database and returns you to the Data tool's main window.
6.
Choose Admin→ Dump Data and Definitions→ Create Incremental .df File. The
Create Incremental .df File dialog box appears.
The Create Incremental .df File option compares the data definitions in the nonempty
copy to the current database schema and creates a new data definitions file. The new .df
file contains a record for each difference between the two schemas. The differences
include any added, renamed, changed, or deleted table, field, or index.
If a table, field, or index exists in the old database but not in the new schema, the Data tool
asks if you renamed the object. If you answer no, a record appears in the new .df file
marking the object as deleted.
If the new schema includes a new, unique, active index, the Data tool prompts you to
deactivate it. If you do not deactivate the index and there are duplicate keys in the old
database, the system aborts your attempt to load new definitions into the old database. If
you deactivate the index, the load procedure defines the new index but does not create the
index file. You must complete Step 8 to build and activate the index after loading the new
data definitions.
7.
Enter the database name or accept the default databases, then choose OK.
9.
Load the updated data definitions by choosing Admin→ Load Data and Definitions→
Data Definitions (.df files).
10. If you deactivated any indexes, re-create data in the indexed fields as required to avoid
duplicate keys, then reactivate the indexes with PROUTIL IDXBUILD.
11. The Data tool updates the old database schema to match the modified schema. Compile
and test all your procedures against the updated database.
Table contents
Sequence values
Note:
For a complete description of dumping and loading SQL contents, see Chapter 24,
SQL Utilities.
The OpenEdge RDBMS provides three methods of loading table contents. You load data
dumped in binary format with the PROUTIL LOAD command. Data in text format is loaded
with a Data tools user interface or the PROUTIL BULKLOAD command. You can perform a
binary load only on database contents that were created with a binary dump.
proutil db-name -C load { filename | -dumplist dumpfile } [ build indexes ]
In the syntax, db-name specifies the database where you want to load the data. To load one file,
specify filename; to load multiple binary dump files, specify -dumplist dumpfile where
dumpfile contains a list of binary dump files. You can choose to build indexes on your data as
it loads with the build indexes parameter. For complete syntax details, see the PROUTIL
LOAD qualifier section on page 20-73.
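For example, the following command sketches the load of a single binary dump file into a
database named newdb, building the indexes during the load; the database and file names are
illustrative:

proutil newdb -C load customer.bd build indexes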
When specifying multiple files to load, you can use a dump file created with a multi-threaded
binary dump, or create your own. A dump file contains a list of fully qualified file names of
binary dump files. This example shows the contents of the file order.dmp_lst:
/usr1/docsample/101A/bindump/order.bd
/usr1/docsample/101A/bindump/order.bd2
/usr1/docsample/101A/bindump/order.bd3
/usr1/docsample/101A/bindump/order.bd4
/usr1/docsample/101A/bindump/order.bd5
/usr1/docsample/101A/bindump/order.bd6
Using the -dumplist parameter is the equivalent of issuing 6 individual load commands. For
example:
proutil newdb -C load /usr1/docsample/101A/bindump/order.bd
proutil newdb -C load /usr1/docsample/101A/bindump/order.bd2
proutil newdb -C load /usr1/docsample/101A/bindump/order.bd3
proutil newdb -C load /usr1/docsample/101A/bindump/order.bd4
proutil newdb -C load /usr1/docsample/101A/bindump/order.bd5
proutil newdb -C load /usr1/docsample/101A/bindump/order.bd6
When the load procedure finishes, it reports the number of records that were loaded.
Recovering from an aborted binary load
The procedure to recover from an aborted binary load depends on how much the binary load
completed before the abort. If the binary load aborted during the sort phase (after all the rows
loaded), then recovery entails running the index rebuild utility on the affected table. If the binary
load aborted during the load of the rows into the database, there is a separate procedure for
recovery.
To recover from a binary load that aborted during the loading of rows:
You can determine which phase the binary load completed by examining the database log (.lg)
file. In this file, you will see messages identifying the major phases of the load operation.
Table definitions must be in the database before you can load table contents.
Access the appropriate Data tool (the Data Administration tool if you are using a graphical
interface or the Data Dictionary if you are using a character interface).
2.
Make sure that the working database is the target database where you want to load the table
contents.
3.
Choose Admin→ Load Data and Definitions→ Table Contents (.d files). The Data tool
alphabetically lists all the tables defined for the database.
4.
Mark the tables whose contents you want to copy. Use the Select Some and Deselect
Some buttons to select or deselect groups of tables. For a character interface, use F5 and
F6 to select table names.
The Data tool prompts you for the name of the contents file (or the directory that contains
the contents files) you want to load into the current database.
7.
If you are including LOBs, specify the directory in the Lob Directory field. If you don't
specify a directory, the current directory is assumed.
8.
Check Output Errors to Screen if you want errors reported to the screen as they occur.
10. Choose OK to return to the Data Administration or Data Dictionary main window.
Note:
You must load the sequence values separately. See the Loading sequence values using
a Data tool section on page 15-24 for information about loading sequence values.
Access the appropriate Data tool (the Data Administration tool if you are using a graphical
interface, or the Data Dictionary if you are using a character interface).
2.
Choose Admin→ Load Data and Definitions→ User Table Contents. The Data tool
prompts you for the file from where you want to read the user file contents. The default
filename is _user.d.
3.
Specify the filename or accept the default. After loading the user file contents to the
specified file, the Data tool displays a status message and prompts you to continue.
Access the appropriate Data tool (the Data Administration tool if you are using a graphical
interface or the Data Dictionary if you are using a character interface).
2.
Choose Admin→ Load Data and Definitions→ SQL Views. The Data tool prompts you
for an input file from which to load the SQL views. The default filename is _view.d.
3.
Specify the filename or accept the default. After loading the view file contents from the
specified file, the Data tool displays a status message and prompts you to continue.
Note:
For a complete description of dumping and loading SQL content, see Chapter 24,
SQL Utilities.
1.
Access the appropriate Data tool (the Data Administration tool if you are using a graphical
interface or the Data Dictionary if you are using a character interface).
2.
Make sure that the working database is the target database where you want to load the table
contents.
3.
Choose Admin→ Load Data and Definitions→ Sequence Current Values. The Data
tool prompts you for the filename where you want to write the sequence values. The
default filename is _seqvals.d.
4.
Specify the filename or use the default value. After the sequence values are loaded, the
Data tool displays a status message and prompts you to continue.
Bulk loading
The Bulk Loader utility loads text data at a higher speed than the Load utility provided with the
Data Dictionary or Data Administration tool.
Note:
The bulk loader works only with OpenEdge databases. Some non-OpenEdge databases
offer a similar bulk loading tool. If you are using a non-OpenEdge database and a bulk
loading tool is not available, you must use the standard load utility in the Data
Administration tool or Data Dictionary.
For information on how to dump table data into contents files, see the Dumping user
table contents with a Data tool section on page 15-16.
Access the appropriate Data tool (the Data Administration tool if you are using a graphical
interface or the Data Dictionary if you are using a character interface).
2.
Choose Admin→ Create Bulk Loader Description File. The Data tool alphabetically
lists all the tables defined for the database.
3.
Select the tables for which you want to create bulk loader description files. Use the Select
Some and Deselect Some buttons to select or deselect groups of tables. For a character
interface, use F5 and F6. The Data tool prompts you for the bulk load filename. The default
filename is table-dumpname.fd.
5.
If you are loading LOBs, check Include LOB: and specify their location in the Lob
Directory: field.
7.
Click OK to create the bulk load description file. After creating the file, the Data tool
displays a status message and prompts you to continue.
customer customer.d customer.e
Cust-num
Name
Address
Address2
City
St
Zip
Phone
Contact
Sales-rep
Sales-region
Max-credit
Curr-bal
Terms
Tax-no
Discount
Mnth-sales
Ytd-sls

Figure 15-3: Example of a Bulk Loader description file

The first line of the entry names the table, the data (.d) file, and the error file; the lines
that follow list the field names.
In this example, customer is the name of the database table, customer.d is the name of the data
file you want to load, and customer.e is the name of an error file that the Bulk Loader creates
if errors occur during the bulk load. The field names in the customer table are Cust-num, Name,
Address, etc.
A caret (^) in a description file directs the Bulk Loader to skip a field.
By default, the Bulk Loader adds .d and .e extensions to the table name to produce the two file
names it requires to operate. It assumes that both of the files (the data file and the error file)
begin with the table name. If this assumption is false, you must specify the different filenames.
For example, if you dump the customer table into cust.d instead of customer.d, you must
specify the name cust.d in the bulk loader description file. Otherwise, the Bulk Loader
searches the current directory for customer.d. Figure 15-4 shows a modified .fd file.
#This is an example
customer cust.d cust.e
Cust-num
Name
^
^
City
St
^
Phone
.
item
Item-num
Idesc
.
order
Order-num
^
Name

Figure 15-4: Example of a modified Bulk Loader description file
Create a Bulk Loader description file using the Data Administration tool, Data Dictionary,
or a text editor. If you use the Data Dictionary or Data Administration tool, it automatically
writes the description file. If you use a text editor, you must create the file.
Using a text editor to modify a description file that the Bulk Loader has created allows you
to customize the file to suit your needs. For more information, see the Creating a Bulk
Loader description file section on page 15-25.
2.
proutil db-name [ -yy n ] -C bulkload fd-file [ -B n ]
Where db-name specifies the database you are using; -yy n indicates the start of a 100-year
period in which any two-digit DATE value is defined; fd-file identifies the bulk loader
description file; and -B n indicates the Blocks in Database Buffers startup parameter; n
specifies the number of blocks.
See the PROUTIL BULKLOAD qualifier section on page 20-17 for complete syntax details
and more information about PROUTIL BULKLOAD.
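For example, the following command sketches a bulk load from the description file
customer.fd, with two-digit dates interpreted as falling in the 100-year period starting in
1950 and 1000 database buffers; the database name and parameter values are illustrative:

proutil mydb -yy 1950 -C bulkload customer.fd -B 1000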
Note:
On exceptionally small systems, you might encounter memory problems when using
the Bulk Loader utility. To work around these problems, split the description file into
several smaller files.
The Bulk Loader utility checks the description file to determine which data file contains the
customer table's dumped records. In this example, it is customer.d.
The order of the fields in the description file must match the order of the fields in the data (.d)
file. If they do not match, the Bulk Loader attempts to load data into the wrong fields.
The Bulk Loader utility automatically deactivates a data table's indexes, so you must run the
Index Rebuild utility after the Bulk Loader loads the data files. If the Bulk Loader encounters
duplicate key values during a load, it continues to load the data file's records. You must
manually correct the data before you run the Index Rebuild utility.
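For example, after a bulk load you can rebuild and reactivate all the indexes in the database
with a command like the following; the database name is illustrative:

proutil mydb -C idxbuild all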
Access the appropriate Data tool (the Data Administration tool if you are using a graphical
interface or the Data Dictionary if you are using a character interface).
2.
Choose Admin→ Load Data and Definitions→ Reconstruct Bad Load Records. The
Reconstruct Bad Load Records dialog box appears.
3.
Specify the original data file, the error file, and the new output file, then choose OK. The
default filename for the new data file is error.d. After writing the new output file, the
Data tool displays a status message and prompts you to continue.
5.
Use a text editor to edit the new output file and fix the bad records after the Data tool builds
the file. Once you have fixed the records, you can reload the file.
Figure 15-5: Copying a starting version of a database

(The figure shows a starting version of the database, containing no data, being copied to
create a working database, a test database, and a distributed database.)
To create a new database that contains table definitions from another database, use the Dump
facility to dump the table definitions, the PRODB utility to create the new database, and the
Load facility to load the table definitions into the new database.
To create a starting version of a database:
1.
Start an OpenEdge session with the database (the source) you want to copy.
2.
Dump the source database's data definitions into a data definitions (.df) file.
3.
End the OpenEdge session.
4.
Create the new (target) database with the PRODB utility.
5.
Start an OpenEdge session with the new database (the target) you created in Step 4.
6.
With the data definitions file you created in Step 2, load the database definitions into the
new database.
Start an OpenEdge session with the database (the source) that has the table you want to
copy.
2.
Dump the contents of the table you want to copy into a contents file.
3.
Connect to the target database.
4.
4.
With the target database as the working database, load the contents of the table.
Start an OpenEdge session with the database (the source) you want to copy.
2.
Dump the source database's data definitions into a data definitions (.df) file.
3.
Dump the table contents into contents (.d) files.
4.
End the OpenEdge session.
5.
Create the new (target) database with the PRODB utility.
6.
Designate the new (target) database you created in Step 5 as the working database.
7.
With the data definitions file you created in Step 2, load the database definitions into the
new database.
8.
Load the table contents files you created in Step 3 into the new database, using either the
Load or Bulk Loader utility.
Use PROUTIL with the TABANALYS qualifier to determine the mean record size of each
table in the database. See Chapter 20, PROUTIL Utility, for a detailed description of the
PROUTIL TABANALYS qualifier.
2.
OUTPUT TO table-name.d.
FOR EACH table-name BY index:
    EXPORT table-name.
END.
3.
Load the tables of less than 1,000 bytes first, in order of average record size. Use the Data
Dictionary or the Data Administration tool to load data one table at a time, or use the Bulk
Loader utility and a description file to control the order.
If you use the Bulk Loader, the order of the fields in the description file must match the
order of the fields in the data file. If they do not match, the Bulk Loader attempts to load
data into the wrong fields.
2.
Load the definitions using the Data Dictionary or the Data Administration tool.
3.
In multi-user mode, start a server with a before-image writer (BIW) and asynchronous
page writer (APW).
4.
Start a client for each processor on your system. Have each client load certain tables. For
each client, write an ABL procedure similar to the following:
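A minimal sketch of such a load procedure follows; the table and file names are placeholders
for whichever tables are assigned to that client:

/* Load the customer table from its contents file */
INPUT FROM customer.d.
REPEAT:
    CREATE customer.
    IMPORT customer.
END.
INPUT CLOSE.

Each client runs its own copy of this procedure against its assigned tables.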
The clients, loading the data simultaneously, distribute the data across all disks. This
eliminates hot spots (that is, areas where data might be concentrated).
5.
After the data is loaded, perform a full index rebuild using PROUTIL IDXBUILD.
Deactivate individual indexes or all the indexes with the Data Administration tool if you
are using a graphical interface, or the Data Dictionary if you are using a character
interface.
You can deactivate (but not reactivate) a single index from either Data tool. If you create a new
unique index on an existing file, consider deactivating the index. If existing table data yields
duplicate keys, all changes made during the current session are backed out when you try to save
them. Create or reactivate a unique index after you have ensured that all key values are unique.
To activate an index, use PROUTIL IDXBUILD. See Chapter 20, PROUTIL Utility, for
more information about activating indexes with PROUTIL IDXBUILD.
Once an index is deactivated, you cannot use it to retrieve information from the records it
normally points to. If you attempt to use a FOR, FOR EACH, or CAN-FIND statement in connection
with a deactivated index, the database engine terminates the currently executing program.
However, you can create, delete, or update a record that a deactivated index points to.
The database engine does not examine the active status of an index when it makes a search. Nor
does the active status of an index affect the time stamp on the _Index file. As a result,
precompiled programs do not require compilation if the only change to the schema is index
activation or deactivation.
2.
Choose Utilities→ Deactivate Indexes. The Index Deactivation dialog box appears.
3.
Choose OK. The Data tool lists the tables in the database.
4.
Type all to deactivate all indexes, or select the indexes you want to deactivate. The Data
tool prompts you to verify that you want to deactivate all the indexes.
4.
Click the Index Properties button. The Index Properties dialog box appears.
5.
Select the table that contains the index you want to deactivate. The Data Dictionary lists
the indexes defined for the selected table.
7.
Click the Save button. The Data Dictionary prompts you to verify that you want to
deactivate the index.
8.
Verify that you want to deactivate the index. The Data Dictionary deactivates the index.
You can also deactivate an index from an ABL procedure. Search through the _Index file to find
the index you want to deactivate, then set the _Active field equal to NO. The following example
uses this technique:
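A minimal sketch follows; the index name is illustrative, and the FIND fails if no index by
that name exists:

/* Deactivate the index named cust-name */
FIND _Index WHERE _Index._Index-name = "cust-name".
_Index._Active = NO.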
16
Logged Data
The OpenEdge database engine logs significant database events such as startup parameter
settings; startup, shutdown, and system error messages; and application-related events. This
chapter details the messages written to the database log file and explains how to save key
events within the database.
Table 16-1 describes the fields of each entry in the log file. All fields, with the
exception of the message text, have a fixed width.

Table 16-1: Log file entry fields

[yy/mm/dd@hh:mm:ss.uuu shhmm] - Date and time of the entry, with millisecond
precision, followed by the time zone offset shhmm, where s is + or -
P-nnnnnn - Process ID
T-nnnnn - Thread ID
Severity - I (informational message), W (warning message), or F (fatal message)
name nnn: - Name and number of the process type that wrote the entry
(nnnnn) - Message number
Message text... - Text of the message; the only field without a fixed width
prolog database-name [ -online ]
The PROLOG utility removes all but the most recent entries from the log file. Use the -online
parameter when you need to truncate the log file for a database that is online. For complete
details on the PROLOG utility, see PROLOG utility.
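For example, the following command truncates the log file of an online database; the
database name is illustrative:

prolog salesdb -online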
[2006/04/06@10:52:58.578-0400] P-6450 T-1 I KEVTRDR 5: (13658) Starting the Save Key Events daemon.
[2006/04/06@10:52:58.578-0400] P-6450 T-1 I KEVTRDR 5: (2518) Started.
The key event process is also visible in PROMON. The following example shows User Control
output from a database enabled to save key events:
User Control:
 Usr Name     Type Wait  Area  Dbkey  Trans  PID    Sem  Srv  Login Time
   0 docdemo  BROK --    0     0      0      11117  0    0    04/07/06 13:50
   5 docdemo  MON  --    0     0      0      11220  2    0    04/07/06 13:50
   6          KERD --    0     0      0      11235  3    0    04/07/06 13:50
The new process type, KERD, represents the key event process.
1.
Optionally define a database area for your _KeyEvt table and add it to your database. If
you don't specify an area, the table is created in your Schema Area.
2.
Enable your database to save key events with the PROUTIL ENABLEKEYEVENTS
utility. See the PROUTIL ENABLEKEYEVENTS qualifier section on page 20-46 for
details on the utility.
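For example, the following command enables a database to save key events; the database
name is illustrative:

proutil mydb -C enablekeyevents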
Note:
If you add a new area to your database for key events, update your structure description
file with PROSTRCT LIST. For more information on PROSTRCT, see Chapter 21,
PROSTRCT Utility.
See the PROUTIL DISABLEKEYEVENTS section on page 20-36 for details on the utility.
Table 16-2: Key events

The following database activities generate a key events record:

Database startup
Database shutdown
Protrace generation
DBTOOL
PROBKUP
PROCOPY
PROLOG
PROREST
PROSTRCT ADD
PROSTRCT BUILDDB
PROSTRCT CREATE
PROSTRCT REMOVE
PROSTRCT REORDER
PROSTRCT REPAIR
PROSTRCT UNLOCK
PROUTIL BIGROW
PROUTIL BULKLOAD
PROUTIL CONV910
PROUTIL DBAUTHKEY
PROUTIL DISABLEAUDITING
PROUTIL DISABLEKEYEVENTS
PROUTIL DISABLEJTA
PROUTIL DUMP
PROUTIL DUMPSPECIFIED
PROUTIL ENABLEAUDITING
PROUTIL ENABLEKEYEVENTS
PROUTIL ENABLELARGEFILES
PROUTIL ENABLEJTA
PROUTIL ENABLEPDR
PROUTIL IDXACTIVATE
PROUTIL IDXBUILD
PROUTIL IDXCHECK
PROUTIL IDXCOMPACT
PROUTIL IDXFIX
PROUTIL IDXMOVE
PROUTIL LOAD
PROUTIL MVSCH
PROUTIL TABLEMOVE
PROUTIL TRUNCATE BI
PROUTIL TRUNCATE BI -F (force access to damaged database)
PROUTIL UPDATEVST
RFUTIL AIMAGE TRUNCATE

For some activities, the stored key events record includes additional detail, such as the
transaction ID, the number of clusters (PROUTIL BIGROW), the codepage specified
(PROUTIL CONV910), or the degree of compaction (PROUTIL IDXCOMPACT).
_KeyEvt table
The _KeyEvt table contains records generated by the key event process. While the _KeyEvt table
is considered part of the database metaschema, it has properties that differentiate it from other
metaschema tables, as discussed in the following sections:
_KeyEvt schema
_KeyEvt schema
The _KeyEvt table stores key events in the database. Table 16-3 describes the active fields of
the _KeyEvt table.

Table 16-3: _KeyEvt table fields

Field name        Data type     Format
_KeyEvt-Date      DATETIME-TZ   99/99/9999 HH:MM:SS.SSS+HH:MM
_KeyEvt-Procid    INTEGER       ->,>>>,>>>,>>9
_KeyEvt-ThreadId  INTEGER       ->,>>>,>>>,>>9
_KeyEvt-MsgId     INTEGER       ->,>>>,>>>,>>9
_KeyEvt-Usrtyp    CHARACTER     X(32)
_KeyEvt-Usrnum    INTEGER       ->,>>>,>>>,>>9
_KeyEvt-Event     CHARACTER     X(200)
Table move: Move the _KeyEvt table to a new area with PROUTIL TABLEMOVE.

Truncate area: Delete the data in the _KeyEvt tables and indexes with PROUTIL
TRUNCATE AREA.

Record deletion: Records in the _KeyEvt table can be deleted with standard 4GL and
SQL statements.
Event Log support relies on the following components: the PROMSGS.DLL and
CATEGORY.DLL libraries, and the PROMSGS file.

Table 16-5 lists the valid event-level values: None, Brief, Normal, and Full.
The Event Log contains the following columns: Date, Time, Source, Category, Event, User, and
Computer. The Event column holds the OpenEdge message number that was generated. These
are the same message numbers that are displayed in the standard database .lg file.
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\EventLog\Application\PROGRESS\<Database Name>
See the Microsoft Windows documentation for more information about editing registry files.
When the database engine tries to find the .dlls before this information is included in the
registry, it searches the current directory. If the .dll is not in the current directory, the engine
searches the directory where the executable is located. If the .dll is not in the same directory as
the OpenEdge executable, the engine searches the user's path. If the .dll is not in the user's
path, the engine generates a message stating that the .dll cannot be found, and it writes a
message to the OpenEdge event log file.
The database-request statement cache is updated by individual clients, and is refreshed each
time a client performs a database operation. This cache displays ABL and SQL activity,
including:
ABL program names and line numbers of ABL code executing database requests
ABL program stacks of up to 32 program names and line numbers, beginning with the
current program name and line number executing a database operation
SQL statement caching occupies an area in shared memory used to store data structures for
identical statements across client sessions. This caching reduces memory consumption by
allowing clients to share the data structures for identical queries. In addition, SQL statement
caching provides faster query processing for statements present in the cache, enabling the
database server to expedite query responses by eliminating the parsing and optimization phases.
ABL statement caching differs from SQL statement caching. Where SQL statement caching
provides only SQL statements that generate database server requests, ABL statement caching
provides program names and line numbers of ABL code that generated database server requests.
Multi-database queries are often complicated by large amounts of data and the physical
distribution of that data. Querying multiple data sources typically produces large result sets,
and processing large amounts of data results in poor query response time, especially when the
data is physically distributed. Client database-request statement caching can be used to
explicitly monitor ABL and SQL database activity and identify the areas within an ABL
program (or the SQL statements) associated with database requests.
When client database-request statement caching is activated, ABL stack traces indicate which
procedure executed a database operation, and what procedure generated the request. Each line
in the request contains the line number and additional procedure information, such as the name
of the procedure, whether it is an internal or external procedure, a user-defined function, or the
method of a class.
The line number indicates the point in the ABL code where the statement was executing when
the ABL stack trace was generated. In some cases, the line number may be -1, indicating the end
of the procedure before returning to its caller. Line number information is found in the debug
listing file.
Note:
The debug listing file is generated by the DEBUG-LIST option in the COMPILE
statement. For more information, see OpenEdge Development: ABL Handbook.
The name of the procedure as it was executed by the RUN statement in the ABL
application. For example, if the ABL statement used either RUN test.p or RUN test.r,
then RUN test.p or RUN test.r appear as the name of the procedure.
A temporary file name (with a .ped extension) is displayed as the procedure name for
procedures executed using the Procedure Editor.
Runs as part of database analysis, reporting on the lengths, but not the content, of the
cluster free, free, RM, and index delete chains.

The number of blocks processed is greater than the number reported in the object block.

The actual chain may contain more dbkeys than are reported.

Note: These inconsistencies may not indicate a problem with the chain; they may be the result
of reporting on a changing chain. For more information, see Chapter 20, PROUTIL
Utility.
At the end of the final roll forward or roll forward retry session (after all the phases of crash
recovery have finished), the protection is automatically turned off.

If you want to disable the protection (in situations where it is not desirable to roll forward
to the last extent), use the OPUNLOCK qualifier to:

Disable the protection mechanism, making the database accessible to other users or
utilities.

Complete the phases of crash recovery. The database is ready for use after this qualifier
is executed.
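A representative invocation of the qualifier (the database name mydb is illustrative):

```
rfutil mydb -C roll opunlock
```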
Note: The OPUNLOCK qualifier should only be used for situations with missing AI files.
Once this qualifier is executed, the roll forward process stops.

While the protection is in effect, the target database is guarded during the roll forward
process. For example, attempting to modify the target database before the entire roll forward
process is completed is not allowed.

Messages that may appear on the screen and in the .lg file during the utility's operation
include:

Please wait until the roll forward process is finished or use the rfutil -C roll opunlock
option to disable the protection.

Roll forward process has been stopped with the opunlock option.
Part IV
Reference
Chapter 17, Startup and Shutdown Commands
Chapter 18, Database Startup Parameters
Chapter 19, PROMON Utility
Chapter 20, PROUTIL Utility
Chapter 21, PROSTRCT Utility
Chapter 22, RFUTIL Utility
Chapter 23, Other Database Administration Utilities
Chapter 24, SQL Utilities
Chapter 25, Virtual System Tables
17
Startup and Shutdown Commands
This chapter describes the OpenEdge RDBMS startup and shutdown commands in alphabetical
order. It describes the purpose, syntax, and primary parameters for each command, as described
in the following sections:
PROAIW command
PROAPW command
PROBIW command
PROQUIET command
PROSERVE command
PROSHUT command
PROWDOG command
For a complete list and description of all parameters you can specify with these commands, see
Chapter 18, Database Startup Parameters.
Figure 17-1: Syntax conventions

For example, the following command allows 100 users to access the sports database and sets
values for the database connection, performance, and network parameters:

The components of a command are the command itself, the db-name, a parameter or
qualifier, and its value.
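A sketch of such a command (the parameter values shown for the connection, performance, and network parameters are illustrative):

```
proserve sports -n 100 -B 10000 -S sprtsrv -H myhost -N tcp
```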
Use . . .                                    To . . .

proserve db-name [-S service-name]           Start a server or broker for a multi-user
  [-H host-name] [-N network-type]           OpenEdge database
proserve -servergroup server-group-name      Start a server or broker for a multi-user
                                             OpenEdge database
proshut db-name [-S service-name]            Shut down a server or broker
  [-H host-name] [-N network-type]
proapw db-name                               Start an asynchronous page writer (APW)
probiw db-name                               Start a before-image writer (BIW)
proaiw db-name                               Start an after-image writer (AIW)
prowdog db-name                              Start the Watchdog process
PROAIW command
Starts the after-image writer (AIW) process.
Syntax
proaiw db-name
Parameters
db-name
Specifies the database where you want to start the AIW process.
The AIW improves performance by writing notes to the after-imaging file. For more
information on the AIW, see Chapter 13, Managing Performance.
Notes
To stop the AIW, disconnect it by using the PROSHUT command. You can start and stop
the AIW at any time without shutting down the database.
The AIW counts as one user. You might have to increment the value of the Number of
Users (-n) parameter to allow for the AIW. However, the AIW does not count as a licensed
user.
You can increase the number of buffers in the after-image buffer pool by using the
After-image Buffers (-aibufs) parameter. Increasing the number of buffers when running
an AIW increases the availability of empty buffers to client and server processes.
Increasing the After-image Buffers parameter has no effect if the AIW is not running.
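Putting these notes together, a typical sequence raises the after-image buffer count when the broker starts and then starts the AIW (the database name and buffer count are illustrative):

```
proserve mydb -aibufs 20
proaiw mydb
```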
PROAPW command
Starts an asynchronous page writer (APW).
Syntax
proapw db-name
Parameters
db-name

Specifies the database where you want to start the APW process.

Notes
To stop an APW, disconnect it by using the PROSHUT command. You can start and stop
APWs at any time without shutting down the database.
Each APW counts as a user. You might have to increase the value of the Number of Users
(-n) parameter to allow for APWs. However, APWs do not count as licensed users.
The optimal number depends on your application and environment. To start, use one page
writer for each disk where the database resides. If data gathered from PROMON indicates
that this is insufficient, add more. For more information on PROMON, see Chapter 19,
PROMON Utility.
For an application that performs many updates, start one APW for each disk containing
your database, plus one additional APW. Applications that perform fewer changes to a
database require fewer APWs.
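Following that guideline, an update-heavy database spread across three disks would get four page writers, started one at a time (the database name is illustrative):

```
proapw mydb
proapw mydb
proapw mydb
proapw mydb
```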
PROBIW command
Starts a before-image writer (BIW) process.
Syntax
probiw db-name
Parameters
db-name

Specifies the database where you want to start the BIW process.

Notes
To stop the BIW process, disconnect it by using the PROSHUT command. You can start
and stop the BIW at any time without shutting down the database.
The BIW process counts as one user. You might have to increment the value of the
Number of Users (-n) parameter to allow for the BIW. However, the BIW does not count
as a licensed user.
You can increase the number of before-image buffers with the Before-image Buffers
(-bibufs) parameter. Increasing the number of buffers increases the availability of empty
buffers to client and server processes.
PROQUIET command
Stops all writes to database files by enabling a quiet processing point.
Syntax
proquiet db-name -C { { enable [ nolock ] | disable } | bithreshold n }

Parameters

db-name
Specifies the name of the database where you are enabling or disabling a quiet processing
point.
enable | disable
Enables or disables a quiet processing point. Any processes that attempt transactions while
a quiet point is enabled must wait until the quiet point is disabled.
nolock
Allows you to enable a quiet point without waiting on shared memory latches.
bithreshold n
Specifies the maximum size to which BI recovery files can grow, where n is an integer
specifying the size of the threshold in MB. You can increase the size of the threshold above
the current value or reduce the size to one cluster larger than the size of the recovery log
file at the time the PROQUIET command is issued.
Note: Though the syntax above shows the -C parameter for completeness, you do not
need to use the -C parameter in the PROQUIET syntax.
PROQUIET ENABLE stops all writes to the database; PROQUIET DISABLE ends the quiet
point to resume writes. PROQUIET is useful for advanced backup strategies. You can also use
the PROQUIET command with the bithreshold parameter to adjust the size of the recovery
log threshold online. Use the PROSERVE command with the -bithold startup parameter to set
the size of the primary recovery log threshold on startup.
For more information on using database quiet points, see Chapter 5, Backing Up a Database,
and Chapter 13, Managing Performance.
Notes
Enabling a no-lock quiet point on a database with after-imaging enabled does not
force an AI extent switch. The AI extent that is BUSY at the time the no-lock quiet point is
enabled must be rolled forward with ROLL FORWARD RETRY.
Examples
Use the PROQUIET command to manage the primary recovery area (BI) threshold before
your database stalls.
For example, to start a server and set a 500MB BI threshold, allowing the system to stall
if that threshold is reached, use the PROSERVE command as follows:
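A representative command for this scenario (the database name mydb is illustrative; -bithold takes the threshold in MB, and -bistall requests a stall instead of a shutdown):

```
proserve mydb -bithold 500 -bistall
```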
Assume a long running transaction causes the expansion of the BI, but before the threshold
is reached you receive the following message:
BI file size has grown to within 90% of the threshold value 523763712.
(6559)
After receiving message 6559, you decide to increase the BI threshold to 1GB while the
database remains online and investigate the cause of the unexpected BI growth before the
system stalls. Use the PROQUIET command, as follows:
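For example, assuming the database name mydb and a new 1 GB threshold:

```
proquiet mydb bithreshold 1000
```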
The above command establishes a quiet point and increases the threshold. The database
does not stall.
Note: In practice, invoke PROQUIET commands by using a script so that they occur as
quickly as possible and with the least amount of impact to the online system.
When a database stalls because the BI threshold is reached, the stall causes an implicit
quiet point and the database engine writes a message to the log file. To expand the BI
threshold and continue forward processing, use the PROQUIET command with the
bithreshold parameter only, as follows:
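A sketch of such a command, with an illustrative database name and threshold value:

```
proquiet mydb bithreshold 1000
```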
PROSERVE command
Starts the broker, which in turn spawns the server. The server process coordinates all access to
the specified OpenEdge database.
Syntax
proserve { db-name | -servergroup server-group-name } [ parameters ]

Parameters

db-name

Specifies the database you want to start.

-servergroup server-group-name

Specifies the logical collection of server processes to start. The server-group-name you
specify must match the name of a server group in the conmgr.properties file. You create
server groups using the Progress Explorer Database Configuration Tools, which saves
them in the conmgr.properties file.
parameters
Specifies the startup parameters for the broker/server. See Chapter 18, Database Startup
Parameters, for a list of broker/server startup parameters.
Notes
You can specify only one database name when using PROSERVE to start a broker or
server group.
Typically, server groups share common attributes such as connection port, number of
servers, and how connected clients are distributed among the servers.
You create server groups using the Progress Explorer Database Configuration Tool, which
saves them in the conmgr.properties file. The server-group-name you specify with the
PROSERVE -servergroup parameter must match the name of a server group in the
conmgr.properties file. Do not edit the conmgr.properties file directly. Instead, use
Progress Explorer. For more information on the Progress Explorer, click the help icon
within the Progress Explorer application.
The behavior of the -servergroup parameter is similar to the behavior of the -pf
(parameter file) parameter. In effect, -servergroup causes a server to load the parameters
associated with the server group, including the database name.
It is possible to override the parameter values associated with a server group by adding
additional parameters to the PROSERVE command. For example, if the database buffer
pool is set to 10,000 within the configuration associated with a server group, you can
specify a larger value by adding an additional parameter:
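For example (the server group name sports2000.defaultconfiguration is illustrative):

```
proserve -servergroup sports2000.defaultconfiguration -B 20000
```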
Conversely, if you specify a startup parameter before the -servergroup parameter, the
startup parameter can be overridden when the same parameter is set in the server group
configuration file. For example, if you place the additional parameter before the
-servergroup parameter, the database buffer pool remains 10,000:
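For example (the server group name sports2000.defaultconfiguration is illustrative):

```
proserve -B 20000 -servergroup sports2000.defaultconfiguration
```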
PROSHUT command
Shuts down the OpenEdge database server and individual processes. Before you shut down the
broker, have all application users quit their sessions. If necessary, you can disconnect users by
using the PROSHUT command's Disconnect a User or Unconditional Shutdown parameters.
Syntax
proshut db-name
  [ -b | -by | -bn
  | -C list | -C disconnect usernum
  | -F | -Gw
  | -H host-name | -S service-name
  | -cpinternal codepage | -cpstream codepage
  ] ...

Parameters

db-name

Specifies the database you want to shut down.

-bn

Directs PROSHUT to perform a batch shutdown only if there are no active users.

-C list

Lists all of the users connected to the database. The list is printed out to the screen without
any page breaks. Use of this parameter is limited to local non-networked connections only.

-C disconnect usernum

Allows you to initiate a disconnect for the specified user. This is similar to option 1 of the
PROSHUT menu. Use of this parameter is limited to local non-networked connections
only.
-F
Starts an emergency shutdown. To use this parameter, you must run PROSHUT on the
machine where the server resides. This parameter is not applicable for remote shutdowns
or DataServer shutdowns.
-Gw

-H host-name

Specifies the machine where the database server runs. If issuing the shutdown command
from a remote machine, specify the host name.
-S service-name
Specifies the database server or broker process. If issuing the shutdown command from a
remote machine, specify the service name.
-cpinternal codepage
An internationalization startup parameter that identifies the code page used in memory.
-cpstream codepage
An internationalization startup parameter that identifies the code page used for stream I/O.
When you enter the PROSHUT command without the -by, -bn, or -F parameters, the following
menu appears:

1  Disconnect a User
2  Unconditional Shutdown
3  Emergency Shutdown (Kill All)
x  Exit

Option 1 prompts you for the number of the user you want to disconnect. Option 3 prompts you
to confirm your choice. If you cancel the choice, you cancel the shutdown. If you confirm the
choice, PROSHUT displays the following message:

Emergency shutdown initiated...

PROSHUT then marks the database for abnormal shutdown, kills all remaining processes
connected to the database, and deletes shared-memory segments and semaphores. The database
is in a crashed state. PROSHUT performs normal crash recovery when you restart the database
and backs out any active transactions. Option x exits PROSHUT.
Notes
You can shut down using the PROMON utility's Shut Down Database menu option.
The user who shuts down the server must have started it, or be root (on UNIX).
When you initiate PROSHUT over a network, the amount of time that it takes to actually
shut down all of the OpenEdge processes and to free any ports varies depending on the
number of clients, brokers, and servers that must be shut down. The PROSHUT command
might return control to the terminal before all of the processes are stopped and resources
are freed.
If you specified a unique value for -cpinternal or -cpstream when you opened the
database, you must specify that same value for -cpinternal or -cpstream when you
close the database with the PROSHUT command. If you do not, PROSHUT uses the
values for -cpinternal and -cpstream found in the main startup parameter file created
during installation (such as OpenEdge-install-dir/startup.pf). If the values of
-cpinternal or -cpstream specified for your database do not match the values specified
in the main startup parameter file, you receive the following error message:
Code page conversion table for table-name to table-name was not found.
(6063)
Forced shutdown (-F) on a database started with no integrity (-i) requires two
confirmations. The first confirmation requires you to acknowledge that the database was
started with no integrity and you are initiating a forced shutdown. The second
confirmation is the normal forced shutdown confirmation. You must answer y to both for
the shutdown to execute. This type of shutdown can cause database corruption.
PROWDOG command
Starts the OpenEdge Watchdog process.
Syntax

prowdog db-name
Parameters
db-name

Specifies the database where you want to start the Watchdog process.

Notes
If the Watchdog finds a process that is no longer active, it releases all the appropriate
record locks, backs out any live transactions, releases any shared-memory locks, and
closes the connection. If the lost process is a server, it disconnects and cleans up all
appropriate remote clients.
If the process was changing shared memory when it terminated, shared memory is in an
inconsistent state; the Watchdog forces a shutdown to protect the database.
The Watchdog cannot detect lost remote clients because remote clients are not associated
with a process. Instead, a network protocol timeout mechanism notifies the server that the
network connection was lost.
18
Database Startup Parameters
This chapter describes OpenEdge database server startup parameters. They are presented in
quick reference tables in the beginning of this chapter. Then, each startup parameter is described
in detail and listed alphabetically by syntax. The syntax of the parameters is the same for UNIX
and Windows unless otherwise noted.
Specifically, this chapter contains the following sections:
You can also include the -pf parameter in the parameter file to reference another parameter file.
Note: If duplicate startup parameters are read from the startup line or .pf file, the last
duplicate parameter read takes precedence.
Parameter                        Syntax

After-image Buffers              -aibufs n
After-image Stall                -aistall
Blocks in Database Buffers       -B n
Before-image Buffers             -bibufs n
Threshold Stall                  -bistall
Recovery Log Threshold           -bithold n
Direct I/O                       -directio
Event Level                      -evtlevel
Before-image Cluster Age         -G n
Group Delay                      -groupdelay n
                                 -hash
No Crash Protection              -i
Increase parameters online       -increaseto
                                 -L n
Lock release                     -lkrela
Maximum area number              -maxAreas n
                                 -Mf n
                                 -Mpte
Shared-memory Overflow Size      -Mxs n
Number of Users                  -n n
                                 -pinshm
Semaphore Sets                   -semsets n
Shared memory segment size       -shmsegsize n
                                 -spin n

Server-type parameters

Parameter                        Syntax

Auto Server                      -m1
Manual Server                    -m2
Secondary Login Broker           -m3
Internationalization parameters

Parameter                        Syntax

Conversion Map                   -convmap filename
Case Table                       -cpcase tablename
Collation Table                  -cpcoll tablename
                                 -cpinternal codepage
                                 -cplog codepage
                                 -cpprint codepage
                                 -cprcodein codepage
                                 -cpstream codepage
                                 -cpterm codepage
Statistics parameters

Parameter                        Syntax

Base Index                       -baseindex n
Base Table                       -basetable n
                                 -indexrangesize n
                                 -tablerangesize n
SSL parameters

Parameter                        Syntax

SSL                              -ssl                      Requires that all brokers and all
                                                           connections use SSL
Key Alias                        -keyalias key-alias-name
Key Alias Password               -keyaliaspasswd key-alias-password
No Session Cache                 -nosessioncache
Session Timeout                  -sessiontimeout n

Note: SSL incurs heavy performance penalties, depending on the client, server, and network
resources and load.
Network parameters

Parameter                        Syntax

AdminServer Port                 -adminport { service-name | port }
SQL Server Java Classpath        -classpath pathname
Host Name                        -H host-name
Maximum Clients Per Server       -Ma n
Maximum Dynamic Server           -maxport n
Minimum Clients Per Server       -Mi n
Minimum Dynamic Server           -minport n
Maximum Servers                  -Mn n
Servers Per Protocol             -Mp n
Maximum Servers Per Broker       -Mpb n
Network Type                     -N network-type
Pending Connection Time          -PendConnTime n
Configuration Properties File    -properties filename
Service Name                     -S { service-name | port-number }
Server Group                     -servergroup name         Identifies a logical collection of
                                                           server processes to start
Century Year Offset              -yy n
The description of each parameter identifies its usage type, such as database server (DBS),
client session (CS), client connection (CC), or DataServer (DS).
-adminport { service-name | port }

Operating system: UNIX, Windows. Use with: DBS.

service-name
The port number the AdminServer uses to communicate with server groups. The default
port is 7832.
Use AdminServer Port (-adminport) to establish communication between a server group and an
AdminServer. The AdminServer uses this parameter internally. The -adminport setting must
match the -admin setting specified when the AdminServer was started.
Operating system: UNIX, Windows. Use with: DBS.
-aiarcdir dirlist
dirlist
A comma-separated list of directories where archived after-image files are written by the
AI File Management utility. The directory names cannot have any embedded spaces.
Operating system: UNIX, Windows. Use with: DBS.
-aiarcdircreate
Use After-image File Management Archive Directory Create (-aiarcdircreate) to direct the
AI File Management utility to create the directories specified by -aiarcdir. This parameter is
useful only when running the AI File Management utility.
Operating system: UNIX, Windows. Use with: DBS. Maximum value: 86,400; minimum
value: 120.1 Default: 120.

-aiarcinterval n
1. For timed mode, the minimum value is 120. For on-demand mode, do not specify this parameter.
n
The number of seconds between mandatory extent switches by AI File Management. Omit
-aiarcinterval for on-demand mode.
Use After-image File Management Archive Interval (-aiarcinterval) to specify timed or
on-demand mode for the operation of AI File Management. If you specify timed mode, n sets
the elapsed time in seconds between forced AI extent switches. The minimum time is two
minutes, the maximum is 24 hours. If you omit this parameter to indicate on-demand mode,
there are no mandatory extent switches other than when an extent is full. You can modify the
interval on a running system with RFUTIL. Regardless of the mode, AI File Management
archives full AI extents every five seconds. This parameter is useful only when running the AI
File Management utility.
Operating system: UNIX, Windows. Use with: DBS. Default: 20.

-aibufs n
Use After-image Buffers (-aibufs) to specify the number of buffers in the after-image buffer
pool. Increasing the number of buffers has no effect unless an after-image writer (AIW) is
running. For more information, see the PROAIW command in Chapter 17.
Operating system: UNIX, Windows. Use with: DBS.

-aistall
Use After-image Stall (-aistall) to suspend database activity when all AI files are filled.
-aistall ceases all database activity until an AI extent is emptied, and sends a message to the
log file. When using after-image (AI) files, monitor the status of the files to ensure that the AI
extents do not run out of space and cause the database to hang. Without the use of -aistall,
the database shuts down when the AI files are filled.
Operating system: UNIX, Windows. Use with: CC, DBS. Maximum value: 1,000,000,000 or
125,000,000; minimum value: 10; single-user default: 20; multi-user default: the maximum of
(8 * users) or 3000.

-B n
On the AIX platform, when starting a database with large shared memory requirements
(for instance, when the -B exceeds the allotted system paging space), the system may
become unstable if the PSALLOC=early environment variable is not set. For more
information, see OpenEdge Getting Started: Installation and Configuration.
Operating system: UNIX, Windows. Use with: DBS.
-baseindex n
The starting index number in the range of indexes for which you want to track access
statistics.
File-Name     _idx-num    Index-Name
filename      n1          index name1
              n2          index name2
              n3          index name3
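A minimal ABL sketch that produces such a listing (the metaschema names _Index, _Idx-Num, and _Index-Name are assumed from the column headings above; verify them against your release):

```
FOR EACH DICTDB._File NO-LOCK WHERE _File._File-Number > 0,
    EACH DICTDB._Index OF DICTDB._File NO-LOCK:
  DISPLAY _File._File-Name _Index._Idx-Num _Index._Index-Name.
END.
```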
Operating system: UNIX, Windows. Use with: DBS.
-basetable n
The starting table number in the range of tables for which you want to track access
statistics.
Use Base Table (-basetable) with Table Range Size (-tablerangesize) to specify the range
of tables for which you want to collect statistics. Access to the statistics is handled through the
Virtual System Tables (VSTs). Table statistics are stored in the _TableStat VST. To obtain table
numbers, use the following ABL code:
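A minimal ABL sketch for listing table numbers (the upper bound on _File-Number, used here to exclude schema and virtual system tables, is an assumption to verify against your release):

```
FOR EACH DICTDB._File NO-LOCK
    WHERE _File._File-Number > 0 AND _File._File-Number < 32000:
  DISPLAY _File._File-Number _File._File-Name.
END.
```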
_File-Number    File-Name
n1              table name1
n2              table name2
n3              table name3
Operating system: UNIX, Windows. Use with: DBS. Default: 20.

-bibufs n
Use Before-image Buffers (-bibufs) to specify the number of buffers in the before-image
buffer pool. Increasing the number of buffers increases the availability of empty buffers to
client and server processes.
Operating system: UNIX, Windows. Use with: DBS.

-bistall
Use Threshold Stall (-bistall) with Recovery Log Threshold (-bithold) to quiet the database
when the recovery log threshold is reached, without performing an emergency shutdown. When
you use -bistall, a message is added to the database log (.lg) file stating that the threshold
stall is enabled.
Operating system: UNIX, Windows. Use with: DBS. Maximum and minimum values: system
dependent.

-bithold n
Operating system: UNIX, Windows. Use with: DBS.
-classpath pathname
pathname
Operating system: UNIX, Windows. Use with: DBS.

-cluster qualifier

qualifier

The qualifier values include startup and protected.
The -cluster parameter is required for startup of all cluster-enabled databases. If used with a
database that is not cluster-enabled, the parameter is ignored.
Operating system: UNIX, Windows. Use with: CS, DBS.
-convmap filename
filename
Operating system: UNIX, Windows. Use with: CS, DBS. Single-user and multi-user default:
Basic.
-cpcase tablename
tablename
Operating system: UNIX, Windows. Use with: CS, DBS. Single-user and multi-user default:
Basic.
-cpcoll tablename
tablename
Operating system: UNIX, Windows. Use with: CS, DBS, DS. Single-user and multi-user
default: iso8859-1.
-cpinternal codepage
codepage
Do not use a 7-bit table with -cpinternal. Use 7-bit tables for converting data from a
7-bit terminal to another code page only. Do not use them for character conversion in
memory or for the database.
Operating system: UNIX, Windows. Use with: CS, DBS, DS. Single-user and multi-user
default: the value of -cpstream.
-cplog codepage
codepage
The name of the code page for messages written to the log file.
Operating system: UNIX, Windows. Use with: CS, DBS, DS. Single-user and multi-user
default: the value of -cpstream.
-cpprint codepage
codepage
Operating system: UNIX, Windows. Use with: CS, DBS, DS. Single-user and multi-user
default: the value of -cpinternal.
-cprcodein codepage
codepage
The name of the code page for reading r-code text segments.
Use R-code In Code Page (-cprcodein) to read the r-code text segments, as if they were written
in the code page specified by -cprcodein, and convert them to the Internal Code Page
(-cpinternal) code page.
Caution: This parameter is for use during very rare situations and, in general, should not be
used.
To retrieve the value of this startup parameter at run time, use the SESSION System handle. To
determine the code page of an r-code file, use the RCODE-INFO handle.
Operating system: UNIX, Windows. Use with: CS, DBS, DS. Single-user and multi-user
default: ibm850.
-cpstream codepage
codepage
Stream I/O includes terminals (character terminals and DOS protected mode, but not graphical
interfaces or the Windows character interface) and the READ-FILE, WRITE-FILE, and
INPUT FROM statements.
Note:
Do not use a 7-bit table with -cpstream. Use 7-bit tables for converting data from a
7-bit terminal to another code page only. Do not use them for character conversion in
memory or for the database.
To retrieve the value of this startup parameter at run time, use the SESSION System handle. To
determine the code page of an r-code file, use the RCODE-INFO handle.
Operating system: UNIX, Windows. Use with: CS, DBS, DS. Single-user and multi-user
default: the value of -cpstream.
-cpterm codepage
codepage
To retrieve the value of this startup parameter at run time, use the SESSION System handle. To
determine the code page of an r-code file, use the RCODE-INFO handle.
Syntax (UNIX, Windows): -directio
Use with: CC, DBS
Single-user default: Not enabled
Multi-user default: Not enabled
Use Direct I/O (-directio) to open all files in unbuffered mode, which enables the OpenEdge
RDBMS to use an I/O technique that bypasses the operating system buffer pool and transfers
data directly from a buffer to disk. This technique avoids the overhead of maintaining the
operating system buffer pool and eliminates competition for operating system buffers between
OpenEdge programs and other programs.
You might improve performance by using the direct I/O feature. To use direct I/O, use Blocks
in Database Buffers (-B) to increase the size of the buffer pool, since OpenEdge database I/O
will not pass through the operating system buffer pool. Also, decrease the size of the operating
system buffer pool to compensate for the additional memory allocated to the database.
Syntax (Windows only): -evtlevel value
Use with: DBS, CS
Single-user default: Normal
Multi-user default: Normal

value
Use Event Level (-evtlevel) to specify the level of information written to the Windows Application Event Log. Valid values include:

- Brief: Error and Warning messages are written to the Event Log.
- Normal: Error and Warning messages are written to the Event Log along with any message that is normally written to the log file (.lg). This is the default value.
- Full: Error, Warning, and Informational messages are written to the Event Log along with any messages generated by the MESSAGE statement.

For more information about using the Event Log, see Chapter 16, Logged Data.
Syntax (UNIX, Windows): -G n
Use with: DBS
Minimum value: 32
Single-user default: 60
Multi-user default: 60
Syntax (UNIX, Windows): -groupdelay n
Use with: DBS
Maximum value: 1,000
Syntax (UNIX, Windows): -H { host-name | localhost }
Use with: CC, DBS, DS
Default: localhost

host-name

The name (address) of the database server machine. This name is assigned to the machine in your TCP/IP hosts file.

localhost

A reserved word that specifies that the database server communicates only with clients on the database server machine.

Use Host Name (-H) to identify the host name.
Syntax (UNIX, Windows): -hash n
Use with: CC, DBS
Minimum value: 13

n
The number of hash table entries to use for the buffer pool.
Caution: Do not use the -hash parameter unless directed to do so by Technical Support.
Syntax (UNIX, Windows): -i
Use with: CC, DBS
Use No Crash Protection (-i) to run the database without integrity or database recovery. When
running without database integrity, the database engine writes fewer data and before-image
blocks to the disk. In this mode, some procedures (such as those that create and delete large
numbers of records) can run significantly faster than if they are running with database integrity.
When running with the -i parameter, transaction undo is supported. Therefore, there will still
be a before-image file, which might grow quite large during very long transactions.
Use this parameter to do bulk data loading or for large batch runs. It reduces the number of disk
input or output operations. Loading a database for the first time is a good example of a use for
this parameter.
Caution: If you run your database with the -i parameter and the database fails for any reason,
you cannot recover the database.
Do not use the -i parameter unless you have a complete backup of the database and can rerun
procedures in case of a system failure. If the system fails during an OpenEdge session that
started without crash protection, restore the backup copy and rerun the necessary procedures.
For information on restoring a database, see Chapter 6, Recovering a Database.
Use with: CC, DB

Specifies the number of blocks in the database buffers to increase during startup.

-bibufs
Syntax (UNIX, Windows): -indexrangesize n
Use with: DBS

n
The number of indexes for which you want to track access statistics.
Use Index Range Size (-indexrangesize) to specify the number of indexes for which you want
to collect statistics from virtual system tables (VSTs). See Chapter 25, Virtual System Tables,
for more information on VSTs.
Syntax (UNIX, Windows): -ipver { IPv4 | IPv6 }
Use with: CS, DS
Default: IPv4
IPv4
Specifies Internet Protocol Version 4. Only network connections with IPv4 are allowed.
IPv6
Specifies Internet Protocol Version 6. IPv6 allows network connections with IPv6
addresses and mapped IPv4 addresses. Windows does not support V4 mapped addresses.
Notes:

- The -ipver startup parameter is case sensitive, and must be specified in all lower case. The values IPv4 and IPv6 are not case sensitive, and can be specified in any case.
- Multi-homed systems (those connected to several LANs) need multiple brokers if they need to be accessed by clients from multiple networks. In particular, if you associate -H with a non-local address, the client cannot connect with -H localhost.
- In previous multi-homed systems (those using IPv4), servers were bound to all interfaces on the system (binding occurred for INADDR_ANY). Only one broker supported connections from all LANs to which the system was connected. With IPv6, binding is specific to the addresses specified with -H.
- IPv4 and IPv6 are not compatible for environments using a dual protocol database with two brokers listening on both TCP4 and TCP6. Starting a server with IPv4 and a non-local network address binds it to all addresses on the system, preventing further binds on the same port with a specific IPv6 address. Conversely, if a server is already bound to a TCP6 address/port, starting up another server on the same port with IPv4 may fail because on some systems (for example, Linux) a bind to INADDR_ANY reserves the port number in the IPv6 space.
Syntax (UNIX, Windows): -keyalias key-alias-name
Use with: DBS
Default: default_server

key-alias-name
Specifies the alias name of the SSL private key/digital certificate key-store entry to use.

Use Key Alias (-keyalias) to identify an SSL private key/digital certificate key-store entry other than the default.
Syntax (UNIX, Windows): -keyaliaspasswd key-alias-password
Use with: DBS
Default: password (the actual value used is the encrypted value of the string "password")

key-alias-password
Specifies the encrypted SSL key alias password to use to access the server's private key/digital certificate key-store entry. The default is the encrypted value of "password".

Use Key Alias Password (-keyaliaspasswd) to allow access to the key alias when you use a key alias other than the default. The key-alias-password value must be encrypted. You can use the genpassword utility, located in your installation's bin directory, to encrypt the password.
Syntax (UNIX, Windows): -L n
Use with: DBS
Maximum value: System dependent
Minimum value: 32
Default: 8192

n

The number of entries in the record locking table. If you specify a value that is not a multiple of 32, the value you specify is rounded to the next highest multiple of 32.
Use Lock Table Entries (-L) to change the limits of the record locking table. Each record that is
accessed and locked by a user takes one entry. This is true whether the record is accessed with
SHARE-LOCK or EXCLUSIVE-LOCK. Increase the size of the lock table if the following
message appears:
Validations using -L can be run online as part of routine health checks. For more
information, see Chapter 20, PROUTIL Utility.
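The rounding rule for -L can be sketched as follows (a sketch of the documented behavior, not OpenEdge's implementation; the function name and the treatment of values below 32 are our assumptions):

```python
def lock_table_entries(requested: int) -> int:
    # -L values that are not a multiple of 32 are rounded to the
    # next highest multiple of 32, per the parameter description.
    MULTIPLE = 32
    if requested <= MULTIPLE:  # assumed: never below the documented minimum of 32
        return MULTIPLE
    remainder = requested % MULTIPLE
    return requested if remainder == 0 else requested + (MULTIPLE - remainder)

print(lock_table_entries(10000))  # 10016, the next multiple of 32
print(lock_table_entries(8192))   # 8192, already a multiple of 32
```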
Syntax (UNIX, Windows): -lkrela
Use with: Client Session
Using this parameter instructs the database to use the original lock release mechanism installed
in previous versions of OpenEdge (versions 10.1A and higher).
Note: When using this parameter, keep in mind that the database engine can search all lock chains to be released at transaction end and disconnect time. As a result, you may see a degradation in performance.
Syntax (UNIX, Windows): -m1
Use with: DBS
Use Auto Server (-m1) to start an auto server. The OpenEdge broker uses the auto server
internally to start a remote user server. This is the default. You will never have to use this
parameter directly.
Syntax (UNIX, Windows): -m2
Use with: DBS
Use Manual Server (-m2) to manually start a remote user server after you start a broker (servers are generally started automatically by the broker process). Use this parameter in the following cases:

- For debugging purposes, to start servers directly and observe their behavior
Syntax (UNIX, Windows): -m3
Use with: DBS
In a network environment where more than one broker is using the same protocol, use
Secondary Login Broker (-m3) to start each secondary broker. The secondary broker logs in
clients and starts remote user servers.
Syntax (UNIX, Windows): -Ma n
Use with: DBS
Maximum value: 2048
Multi-user default: 4 users/server

n
The maximum number of remote users per database server. The default is the Maximum
Number of Users (-n) parameter value, divided by the Maximum Number of Servers (-Mn)
parameter value.
Use Maximum Clients per Server (-Ma) to specify the maximum number of remote users per
database server. The Maximum Clients per Server (-Ma), Minimum Clients per Server (-Mi), and
Maximum Servers (-Mn) startup parameters apply only to databases that are accessed from
remote network nodes.
In most cases, the default behavior is desirable. Note that the default calculation is usually high
because it assumes that all users are remote users, while the number specified with -n includes
local users. If servers become overloaded with clients, reset the -Mn parameter to increase the
number of servers.
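The default calculation described above can be illustrated with a small sketch (the rounding direction is an assumption on our part; the manual only says -n divided by -Mn):

```python
import math

def default_ma(n_users: int, mn_servers: int) -> int:
    # Default -Ma: Maximum Number of Users (-n) divided by Maximum
    # Number of Servers (-Mn); ceiling is assumed here so that the
    # servers can always accommodate every user.
    return math.ceil(n_users / mn_servers)

print(default_ma(100, 5))  # 20 clients per server
```

As the text notes, this default tends to be high because -n counts local users as well as remote ones.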
Syntax (UNIX, Windows): -maxAreas n
Use with: Database Server
Maximum value: 32000
Minimum value: Variable (the current maximum area number in use for the database)
Single-user default: 32000
Multi-user default: 32000

n

The maximum number of areas for the database.
Syntax (UNIX, Windows): -maxport n
Use with: DBS
Maximum value: System dependent
Minimum value: System dependent
Default: 2000
Syntax (UNIX, Windows): -maxxxids n
Use with: DBS
Maximum value: 32,000
Minimum value: 0 for a database not enabled for JTA; (users)/2 for a JTA-enabled database, where users is the value of the -n startup parameter
Default: 100

n
Syntax (UNIX, Windows): -Mf n
Use with: DBS
Maximum value: 32,768
Minimum value: 0

n
Value in seconds of the delay before the database engine synchronously writes to disk the
last before-image file records at the end of each transaction. It also specifies the interval
that the broker process wakes up to make sure all BI file changes have been written to disk.
The default is 3 for single-user batch jobs and for multi-user databases. Otherwise, the
default is zero (0).
Use Delayed BI File Write (-Mf) to improve performance on a heavily loaded system. Using -Mf
does not reduce database integrity. However, if there is a system failure, it is possible the last
few completed transactions will be lost (never actually written to the BI file).
When running with full integrity, at the end of each transaction the database engine does a
synchronous write to disk of the last BI file block. This write guarantees that the completed
transaction is recorded permanently in the database. If the user is notified that the transaction
has completed and the system or database manager crashes shortly afterwards, the transaction
is not lost.
Do not set -Mf on a lightly loaded system with little database update activity. Under these
conditions, the extra BI write is very important and does not impact performance. On a heavily
loaded system, however, the BI write is less important (the BI block will be written to disk very
soon anyway), and has a significant performance penalty. Setting -Mf to delay this extra BI write
saves one write operation per transaction, which can significantly improve performance. The
extra BI file write is delayed by default for batch jobs.
The last BI file record is only guaranteed to be written out to disk when a user logs out, or when
the server or broker process terminates normally. On multi-user systems, the n argument
determines the maximum length of time in seconds during which a completed transaction can
be lost.
Syntax (UNIX, Windows): -Mi n
Use with: DBS

n
The number of remote users on a server before the broker starts another server. See the Maximum Servers (-Mn) section on page 18-38 for more information.
Use Minimum Clients per Server (-Mi) to specify the number of remote users on a server before
the broker starts another server (up to the maximum number of servers). In addition, -Mi and -Mn
apply only to databases that are accessed from remote network nodes.
As remote users enter the database, the broker process starts just one server process for each n
remote users, until the maximum number of servers (specified by the -Mn parameter) is started.
If you specify a value of 1, the broker starts a new server for each of the first -Mn remote users.
Subsequent remote users are distributed evenly among the servers until the maximum number
of users (-n) or maximum clients per server (-Ma) limits are reached.
Typically, you can leave -Mi and -Mn at their default values. If you significantly increase -Mn,
you should also increase -Mi. For example, if you set -Mn to 10 to accommodate up to 40 or more
remote users, increase -Mi to 3 or 4 to prevent a situation where 10 servers are started for just
10 remote users.
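The broker's placement behavior described above can be sketched as follows (our simplification, not OpenEdge's actual algorithm; server startup latency and the -Ma limit are ignored):

```python
def assign_server(server_loads: list, mi: int, mn: int) -> int:
    # Sketch: start a new server while fewer than -Mn servers exist
    # and every running server already holds -Mi clients; otherwise
    # place the client on the least-loaded running server.
    if len(server_loads) < mn and all(load >= mi for load in server_loads):
        server_loads.append(1)  # new server started for this client
        return len(server_loads) - 1
    idx = min(range(len(server_loads)), key=server_loads.__getitem__)
    server_loads[idx] += 1      # distributed among existing servers
    return idx

loads = []
for _ in range(6):              # six remote clients with -Mi 2, -Mn 2
    assign_server(loads, mi=2, mn=2)
print(loads)                    # [3, 3]: clients spread over two servers
```

With mi=1, the sketch starts a new server for each of the first -Mn clients, matching the behavior the text describes for -Mi 1.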
Syntax (UNIX, Windows): -minport n
Use with: DBS
Minimum value: 1,025
Default: 1,025
Syntax (UNIX, Windows): -Mn n
Use with: DBS
Maximum value: 512

n
The maximum number of remote client servers that can be started on the system. The value
specified is always incremented by 1 to provide a server for the primary login broker.
Use Maximum Servers (-Mn) to limit the number of remote user servers that can be started by
the broker process. The performance tradeoff to consider is swapping overhead for many
servers versus overloading (slowing down) a server with too many clients. This parameter
applies only to databases that are accessed from remote network nodes. Also, use Minimum
Clients per Server (-Mi) to adjust the actual number of servers in use.
Note: The maximum value for the number of servers that can be started for a database is limited by available resources of the operating system.
See the Maximum Clients per Server (-Ma) section on page 18-34 and the Minimum Clients per Server (-Mi) section on page 18-37 for more information.
Syntax (UNIX, Windows): -Mp n
Use with: DBS
Maximum value: Value of -Mn
Default: Value of -Mn
Syntax (UNIX, Windows): -Mpb n
Use with: DBS
Syntax (Digital UNIX): -Mpte
Use with: DBS
Use VLM Page Table Entry Optimization (-Mpte) to allocate shared memory in multiples of
8MB at server startup for VLM64 support. This function is a binary switch that is off by default.
The -Mpte startup parameter turns on the function.
Syntax (UNIX, Windows): -Mxs n
Use with: DBS
Maximum value: System dependent (limited only by the size of the signed integer data type on the system)
Multi-user default: 16384 + (# of users * 300) / 1024

n
Depending on the operating system, the database engine rounds the shared-memory area size to
the next 512-byte or 4K boundary.
Note:
Validations using -Mxs can be run online as part of routine health checks. For more
information, see Chapter 20, PROUTIL Utility.
When calculating Shared Memory Overflow (-Mxs), add the total number of bytes to the existing default value. For example, if the total memory required for the mandatory field array is 80K, -Mxs must be (80K + 16K + 4K) for 100 users.

Note: These numbers are approximations, and may change from release to release.
Syntax (UNIX, Windows): -N network-type
Use with: CC, DBS
Default: System dependent

network-type
Syntax (UNIX, Windows): -n n
Use with: DBS
Maximum value: 10,000
Multi-user default: 20

n
The maximum number of OpenEdge users on the system. After n users have connected to
the OpenEdge database, additional user startup attempts are rejected.
-n must be high enough to include local and remote users as well as background writers (APWs,
BIWs, and AIWs), PROWDOG processes, and PROMON sessions.
See the Maximum Clients per Server (-Ma) section on page 18-34 and the Maximum Servers (-Mn) section on page 18-38 for more information.
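Sizing -n per the paragraph above can be sketched as a simple sum (the individual process counts below are illustrative assumptions, not recommendations):

```python
def required_n(local_users: int, remote_users: int,
               apws: int = 1, biw: int = 1, aiw: int = 0,
               watchdogs: int = 1, promon_sessions: int = 1) -> int:
    # -n must cover local and remote users plus background writers
    # (APWs, BIW, AIW), PROWDOG processes, and PROMON sessions.
    return (local_users + remote_users + apws + biw + aiw
            + watchdogs + promon_sessions)

print(required_n(local_users=20, remote_users=50, apws=2, aiw=1))  # 76
```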
Syntax (UNIX, Windows): -nosessioncache
Use with: DBS
Syntax (UNIX, Windows): -PendConnTime n
Use with: DBS
Syntax (UNIX, Windows): -pf filename

filename
The name of the parameter file containing OpenEdge database startup parameters.
Use Parameter File (-pf) to name a parameter file that includes any number of startup
parameters to run an OpenEdge database. This parameter is especially useful if you regularly
use the same parameters to start your database, or if more parameters are specified than can fit
on the command line. This parameter can be included within the parameter file itself to
reference another parameter file.
Use multiple instances of -pf to name multiple parameter files. This allows you to specify
application-specific parameters in one parameter file, database-specific parameters in a second
parameter file, and user-specific parameters in yet another file.
Syntax (UNIX, Windows): -pinshm
Use with: DBS
The Pin Shared Memory (-pinshm) parameter does not have any arguments.
Using -pinshm will prevent the OS from swapping shared memory contents to disk, which can
help improve performance.
Syntax (UNIX, Windows): -properties filename
Use with: DBS

filename
Syntax (UNIX, Windows): -r
Use with: CC
Single-user default: Unbuffered I/O
Multi-user default: Unbuffered I/O
Use Buffered I/O (-r) to enable buffered I/O to the before-image file. In most cases, avoid using
this parameter because it might put database integrity at risk.
Caution: A database running with the -r parameter cannot be recovered after a system failure.
If the system fails, you must restore the database from a backup and restart
processing from the beginning.
Syntax (UNIX, Windows): -S { service-name | port-number }
Use with: CC, DBS

service-name

The name of the service to be used by the broker process.

port-number

The port number of the host; if using Progress Explorer, the port number of the NameServer.
Use Service Name (-S) to specify the service or port number to be used when connecting to a
broker process or used by a broker process on the host machine. You must use this parameter
when you are starting:
The system administrator must make an entry in the services file that specifies the server or
broker name and port number.
When the broker spawns a server, the server inherits all of the network parameters (except the
Service Name parameter) from the broker. Because there is no restriction on the number of
brokers you can start, you can have multiple brokers running with different network parameters.
See the Server Group (-servergroup) section on page 18-46 for more information.
Table 18-9 shows how the broker, server, and remote client interpret each of their parameters when you use the -S parameter.

Table 18-9: -S parameter interpretation (the table body, listing the Broker, Server, and Remote Client modules, was not recovered in this extraction)
To run a multi-user OpenEdge session from a remote network node, use both the Host Name
(-H) and Service Name (-S) parameters.
Syntax (UNIX only): -semsets n
Use with: DBS
Maximum value: Maximum Number of Users + 1
Syntax (UNIX, Windows): -servergroup name
Use with: DBS

name
Syntax (UNIX, Windows): -sessiontimeout n
Use with: DBS
Default: 180

n
Specifies in seconds the length of time an SSL session will be held in the session cache.
The default is 180 seconds.
Use Session Timeout (-sessiontimeout) to change the length of time that an SSL session will
be cached. Session caching allows a client to reuse a previously established SSL session if it
reconnects prior to the session cache timeout expiring.
Syntax (UNIX, Windows): -shmsegsize n
Use with: Database Server
Maximum value: System dependent
Single-user and multi-user default: 128 for 32-bit systems, 1024 for 64-bit systems
Valid values for n, in MB (GB equivalents in parentheses):

32-bit platforms: 128 (default), 256, 512, 1024 (1g), 2048 (2g), 4096 (4g)
64-bit platforms: 1024 (1g, default), 2048 (2g), 4096 (4g), 8192 (8g), 16384 (16g), 32768 (32g)
Use Shared memory segment size (-shmsegsize) to specify the size of the largest shared
memory segment the server can allocate. Specifying shared memory segment size can improve
performance. Increasing the size of the shared memory segments decreases the number of
segments allocated, in turn decreasing the system resources needed to manage the segments. See
the Shared memory allocation section on page 13-27 for more information.
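The segment-count tradeoff can be illustrated with a small sketch (assuming, for illustration, that the server simply divides the total shared memory by the segment size):

```python
import math

def segment_count(total_shm_mb: int, segsize_mb: int) -> int:
    # Larger -shmsegsize values mean fewer shared memory segments
    # to allocate and manage for the same total shared memory.
    return math.ceil(total_shm_mb / segsize_mb)

# A 3000 MB shared memory area as 128 MB segments vs. 1024 MB segments:
print(segment_count(3000, 128))   # 24 segments
print(segment_count(3000, 1024))  # 3 segments
```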
Syntax (UNIX, Windows): -spin n
Use with: DBS
Default: 10,000 for single-CPU systems; 6000 * # of CPUs for multi-CPU systems
SSL (-ssl)

Syntax (UNIX, Windows): -ssl
Use with: DBS
Use SSL (-ssl) to specify that all database and client connections use the Secure Sockets Layer
(SSL) for data privacy. SSL provides an authenticated and encrypted peer-to-peer TCP/IP
connection.
Note:
SSL incurs heavy performance penalties, depending on the client, server, and network
resources and load. For more information on SSL and the security features of
OpenEdge, see OpenEdge Getting Started: Core Business Services.
Syntax (UNIX, Windows): -tablerangesize n
Use with: DBS

n
The number of tables for which you want to track access statistics.
Use Table Range Size (-tablerangesize) to specify the number of tables for which you want
to collect statistics.
Syntax (UNIX, Windows): -yy n
Use with: CS, DBS
Single-user default: 1950
Multi-user default: 1950
-yy = 1900: two-digit years 50 to 99 expand to 1950 to 1999; years 00 to 49 expand to 1900 to 1949.
-yy = 1950 (the default): two-digit years 50 to 99 expand to 1950 to 1999; years 00 to 49 expand to 2000 to 2049.
-yy = 1980: two-digit years 80 to 99 expand to 1980 to 1999; years 00 to 79 expand to 2000 to 2079.
Notice that all two-digit year values expand into the 100-year period beginning with -yy.
To test the effect of -yy, start the database with a different -yy value and run the following
procedure:
Note: If you use a hard-coded date containing a two-digit year in a .p file, the OpenEdge RDBMS honors the -yy parameter and expands the two-digit year to a four-digit year during compilation. However, this might not match the run-time -yy. To prevent this problem, use four-digit years for hard-coded dates in programs.
This startup parameter provides the same functionality as the SESSION:YEAR-OFFSET attribute.
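The expansion rule maps each two-digit year into the 100-year window starting at the -yy base, as the examples above illustrate. A sketch of that rule (ours, not OpenEdge code):

```python
def expand_year(two_digit: int, yy_base: int = 1950) -> int:
    # Place the two-digit year in the 100-year period that begins
    # with the -yy base value (default 1950).
    century_start = yy_base - (yy_base % 100)
    year = century_start + two_digit
    return year if year >= yy_base else year + 100

print(expand_year(49))                # 2049 with the default -yy of 1950
print(expand_year(50))                # 1950
print(expand_year(79, yy_base=1980))  # 2079
```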
19
PROMON Utility
This chapter describes the OpenEdge database administration utility PROMON.
PROMON utility
Starts the monitor that displays database information.
Syntax
promon db-name
Parameters
db-name

Specifies the database you want to monitor.
User Control
Locking and Waiting Statistics
Block Access
Record Locking Table
Activity
Shared Resources
Database Status
Shut Down Database
R&D. Advanced Options
T. 2PC Transactions Control
L. Resolve 2PC Limbo Transactions
C. 2PC Coordinator Information

Figure 19-1: PROMON utility main menu
After the first screen of output from any option, press U to monitor the same parameters
continuously. The default interval is eight seconds, but you can change it on the Modify
Defaults screen (option M on the main menu).
Notes
To collect output data, the monitor bars all users from shared memory for an instant and
takes a snapshot of its contents. Therefore, do not use the monitor when system
performance is being measured.
Certain monitor output data is not generally useful, but it might be helpful when debugging
unusual system problems.
User Control:

Usr  Name  Type  Wait  Area  Dbkey  Trans
  0  dba   BROK  --       0      0      0
  1  dba   SERV  --       0      0      0
  5  doc   MON   --       0      0      0
  6  bob   SELF  REC     12    385      0
 24  joe   REMC  --       0      0    174

Figure 19-2: Sample User Control display
User numbers are assigned sequentially to database processes as they are started. The
broker process is always user (0).
Name
Table 19-1 lists the distinct types of database processes that can appear in the Type field.

Table 19-1: Process types
Value   Process type
AIMD
AIW     After-image writer
APW     Asynchronous page writer
AUDA
AUDL
BAT     Batch user
BIND
BINL
BIW     Before-image writer
BKUP    Online backup
BROK    Broker process
FMA
IACT
IDXC
IDXF
IDXM
MON     Monitor process
PSA
QUIE    PROQUIET utility
REMC    Remote client
RFUT    RFUTIL utility
RPLA
RPLE
RPLS
RPLT
RPLU
SELF    Self-service client
SERV    Remote-user server
SHUT
SQSC    SQLSCHEMA utility
SQDP    SQLDUMP utility
SQLD    SQLLOAD utility
SQSV    SQL server
TBLM
WDOG    Watchdog

(Process type descriptions not shown above were not recovered in this extraction.)
Wait

Indicates that a process is waiting for a lock or latch. Table 19-2 lists common values and describes the resource or event associated with each value.
Table 19-2: Wait column values

Lock waits:
REC     Record lock
IX      Index lock
SCHE    Schema lock
TRAN    Transaction commit
TXB
TXE
TXS
TXX

Resource waits:
AIRD
AIWR
BIRD
BIWR
BKEX
BKSH
BKSI
BUFF    Shared buffer
DB      Database server
DBBK
DBRD
DBWR
DEAD
RGET
SRPL

Latch waits:
AIB, BFP, BHT, BUF, CPQ, DLC, GST, IXD, LKF, LKP, LKT, LRS, LRU, MTX, PWQ, SCH, SEQ, TXQ, TXT, USR, BF2, BF3, BF4, BIB, LRI, LRX, OM

(Descriptions not shown above were not recovered in this extraction.)
The previous values are for short-duration (micro- to milliseconds) latches used internally
by the OpenEdge RDBMS.
Area Dbkey

If the user has a REC wait, the combination of the area number and the dbkey identifies the record.
Trans
Transaction (task) number, if one is active. After the broker starts, numbers are assigned
sequentially to each transaction that accesses the database.
PID
The process ID as assigned by the operating system. The PID column typically displays
(0) for remote clients.
Sem
The number of the semaphore the process is using. Each process uses exactly one
semaphore. The database engine uses two semaphores (numbers 1 and 2) internally and
assigns the remaining semaphores one at a time (starting with the broker) as database
processes log in.
Srv
For remote clients (REMC), the user number of the server the client runs against.
Login
Figure 19-3: Sample Locking and Waiting Statistics display (the sample data, showing per-user Lock and Wait counts in the Record, Trans, and Schema columns, was not recovered in this extraction)
Type
Lock or Wait. The output display includes two lines for each process. The first shows the
number of record, transaction, and schema locks obtained, and the second shows the
number of times the process waited to obtain one of these locks.
Usr
The user number of the process. User number 999 represents all users.
Name

Record

For the Lock type, the total number of record locks obtained. For the Wait type, the number of times the user had to wait to obtain a record lock. Numbers are cumulative for the life of the process.
Trans
For the Lock type, the number of times a transaction lock was issued. A transaction lock
is issued for the duration of a transaction when a record is deleted. If the transaction is
ultimately undone or backed out, the record is not deleted.
Schema

For the Lock type, the total number of times the schema lock was obtained. For the Wait type, the number of times the user had to wait to obtain the schema lock. (There is one schema lock; only one user at a time can update any part of the schema.)
Block Access:

Type  Usr  Name      DB Reqst  DB Read  DB Write  BI Read  BI Write  AI Read  AI Write
Acc   999  TOTAL...     17666      472         3      198         2        0         0
Acc     0  george        8553      388         2      198         1        0         0
Acc     5  bill           946       59         1        0         1        0         0
Acc     6  sue            918        6         0        0         0        0         0
Acc     7  mary           909        1         0        0         0        0         0
Acc     8  erin           921       18         0        0         0        0         0

Figure 19-4: Sample Block Access display
The first line displays cumulative information for all users. The six read and write columns refer
to disk I/O. Reads and writes are always one block. Block size varies among systems, but is
usually 512 bytes, 1,024 bytes, or 2,048 bytes.
Type

DB Reqst
The number of times the database buffer system was searched to find a block. The buffer
system is searched every time a process attempts to access a record. If the block that holds
the desired record is already in memory, a disk read is not required. If the ratio of DB Reqst
to DB Read is not high (10 to 1 or greater), consider raising the value of the Blocks in
Database Buffers (-B) startup parameter. Regardless of the number of available buffers,
random record access causes a lower database request to disk read ratio than sequential
record access.
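The 10-to-1 rule of thumb above can be expressed as a quick check (the function and threshold parameter are ours; only the 10:1 guideline comes from the text):

```python
def buffer_hit_check(db_requests: int, db_reads: int,
                     target_ratio: float = 10.0) -> bool:
    # If the ratio of DB Reqst to DB Read falls below roughly 10 to 1,
    # consider raising the Blocks in Database Buffers (-B) parameter.
    if db_reads == 0:
        return True  # every request was satisfied from the buffer pool
    return (db_requests / db_reads) >= target_ratio

# Using the sample Block Access totals (17666 requests, 472 reads):
print(buffer_hit_check(17666, 472))  # ratio is about 37:1, so -B looks adequate
```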
DB Read
The number of database disk block reads. A database block must be read from disk when
a process accesses a record whose containing block is not already in the database buffers.
Recall that for read-only database requests, the OpenEdge RDBMS uses private database
buffers if they are available, rather than the shared buffer pool (allocated with the Blocks
in Database Buffers (-B) parameter).
DB Write
The number of database block writes to disk. Once the database buffers are full, every disk
read overwrites an existing block; if the overwritten block has been modified since it was
read in, it must be written to disk. This accounts for the majority of block writes.
BI Read
The number of before-image (BI) file block reads. For example, the BI file is read when a
transaction is undone. The BI file has its own one-block input buffer and does not use the
database buffer pool.
BI Write
The number of BI file block writes. When a record is updated, a pretransaction copy of the
record is written to the BI file. When the transaction completes, the database engine writes
the last BI file block out to disk (assuming you are running the database with full integrity).
This post-transaction disk write accounts for the relatively high number of BI file writes,
but it can be delayed with the Delay BI File Write (-Mf) startup parameter.
In addition to record images, the database engine writes to the BI file various notes and
data required to reconstruct a damaged database. The BI file has its own one-block output
buffer and does not use the shared database buffer pool.
AI Read
The number of after-image (AI) file block reads. The AI file is read during crash recovery.
The AI file has a one-block input/output buffer and does not use the database buffer pool.
AI Write
The number of AI file block writes. When you run the database with after-imaging
enabled, a copy of each note written to the BI file is written to the AI file.
Record Locking Table:

Rec-id  Table  Lock  Tran State  Trans
   102     18  EXCL  Begin        2619
   102     18  SHR   Begin        2620
   102     18  SHR   Dead            0

Figure 19-5: Sample Record Locking Table display (the Usr, Chain #, and Flags columns were not recovered in this extraction)
The size of the record locking table is set with the Locking Table Entries (-L) startup parameter.
See the chapter on locks in OpenEdge Development: ABL Handbook for more information on
locks and locking.
Usr

Chain

The chain type should always be REC, the record lock chain.
#
The record lock chain number. The locking table is divided into chains anchored in a hash
table. These chains provide for fast lookup of record locks by Rec-id.
Rec-id
The record ID for the lock table entry. The Rec-id column identifies the records locked by
each database process.
Table
The number of the table containing the locked record.
Lock
One of five lock types: X (exclusive lock), S (share lock), IX (intent exclusive lock), IS
(intent share lock), or SIX (shared lock on table with intent to set exclusive locks on
records).
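The hash-anchored lock chains described above can be sketched as follows. This is an illustrative Python model, not the engine's implementation; the chain count, entry layout, and function names are assumptions:

```python
# Sketch (not the actual engine code): record locks kept in chains
# anchored in a fixed-size hash table, giving fast lookup by Rec-id.

NUM_CHAINS = 8          # hypothetical; the real table is sized with -L

lock_chains = [[] for _ in range(NUM_CHAINS)]   # chain number -> entries

def chain_of(recid):
    # Hash the record id to a chain number.
    return recid % NUM_CHAINS

def add_lock(recid, usr, lock_type):
    lock_chains[chain_of(recid)].append(
        {"recid": recid, "usr": usr, "type": lock_type})

def find_locks(recid):
    # Only one short chain is scanned, not the whole table.
    return [e for e in lock_chains[chain_of(recid)]
            if e["recid"] == recid]

add_lock(385, usr=24, lock_type="X")
add_lock(385, usr=10, lock_type="S")    # a queued share request
add_lock(12108, usr=23, lock_type="S")
print([e["usr"] for e in find_locks(385)])   # [24, 10]
```

Looking up all locks on a record touches only the entries that hash to the same chain, which is why the hash anchoring gives fast lookup by Rec-id.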
There are five possible types of flags. Table 19-3 lists the flags and their meanings.

Table 19-3: Flag values

Flag  Name  Description
      Limbo lock
      Queued lock request
      Upgrade request
      No hold
Trans State
The state of the transaction. Table 19-4 lists the possible states.

Table 19-4: Transaction states

State             Description
Begin
Active
Dead              The transaction is complete, but the lock has not been released.
Prep
Phase 1           Ready to commit. Limbo transaction.
Phase 2           In phase 2.
Active JTA
Idle JTA
Prepared JTA
RollbackOnly JTA
Committed JTA

Trans id
Activity
Figure 19-6: PROMON Activity display (event counters such as Commits, Undos, Record
Reads, Record Updates, Record Creates, Record Deletes, DB Reads and Writes, BI Reads
and Writes, AI Writes, Record Locks, Record Waits, Checkpoints, and Buffs Flushed, each
with Total and Per Sec columns)
Event
The events that have occurred on the system. Table 19-5 defines the event types listed in
the Event field. For each event type, PROMON lists the cumulative total number of events
and the number of events per second.
Table 19-5: Event types

Event type       Description
Commits
Undos
Record Updates
Record Reads
Record Creates
Record Deletes
DB Writes
DB Reads
BI Writes
BI Reads
AI Writes
Record Locks
Record Waits
Checkpoints
Buffers Flushed
Rec Lock Waits
Percentage of record accesses that result in record lock waits. A record lock wait occurs
when the database engine must wait to access a locked record.
BI Buf Waits
Percentage of before-image (BI) buffer waits. A BI buffer wait occurs when the database
engine must wait to access a BI buffer.
AI Buf Waits
Percentage of after-image (AI) buffer waits. An AI buffer wait occurs when the database
engine must wait to access an AI buffer.
Writes by APW
Percentage of database blocks written to disk by the APW; this is a percentage of the total
number of database blocks written by the database engine.
Writes by BIW
Percentage of BI blocks written to disk by the BIW; this is a percentage of the total number
of BI blocks written to disk by the database engine.
Writes by AIW
Percentage of AI blocks written to disk by the AIW; this is a percentage of the total number
of AI blocks written.
Buffer Hits
Percentage of buffer hits. A buffer hit occurs when the database engine locates a record in
the buffer pool and does not have to read the record from disk.
DB Size
The size of the database.
Free blocks
Number of blocks on your database's free chain. The free chain is a chain of empty
database blocks.
RM chain
Number of blocks on your database's RM chain. The RM chain is a chain of partially
filled database blocks.
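The percentages PROMON derives from raw counters can be sketched as follows. Counter names and the exact formulas are illustrative assumptions, using sample values in the spirit of the Activity display:

```python
# Sketch of how display percentages can be derived from raw counters;
# field names are illustrative, not PROMON's internal names.

def pct(part, whole):
    return 100.0 * part / whole if whole else 0.0

counters = {
    "record_reads": 3709,    # logical record accesses
    "db_reads": 473,         # blocks actually read from disk
    "record_locks": 3625,
    "record_waits": 0,
}

# Buffer hit: a logical access satisfied without a disk read.
buffer_hits = pct(counters["record_reads"] - counters["db_reads"],
                  counters["record_reads"])

# Rec lock waits: record accesses that had to wait on a lock.
lock_waits = pct(counters["record_waits"], counters["record_locks"])

print(f"Buffer hits {buffer_hits:.0f}%  Rec lock waits {lock_waits:.0f}%")
```

With these sample numbers the buffer-hit percentage comes out near 87%, which is the kind of figure the later Summary discussion suggests driving higher by adding -B buffers.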
Process types
Description
Servers
Users
APWs
Shared Resources:
Busy After Image Extent: /dev/dbs/docsample.a2
Number of database buffers (-B): 3000
Number of before image buffers (-bibufs): 20
Number of after image buffers (-aibufs): 20
Before-image truncate interval (-G): 0
No crash protection (-i): Not enabled
Maximum private buffers per user (-Bpmax): 64
Current size of locking table (-L): 8192
Locking table entries in use: 1
Locking table high water mark: 3
Maximum number of clients per server (-Ma): 5
Max number of JTA transactions (-maxxids): 0
Delay of before-image flush (-Mf): 3
Maximum number of servers (-Mn): 4
Maximum number of users (-n): 21
Before-image file I/O (-r -R): Raw
Shared memory version number: 10167

Figure 19-7: Shared Resources display
Because most of the Shared Resources options output is self-explanatory, this list describes only
two items:
Shared memory version number
The version number of the shared-memory data structure. This structure varies slightly
between some releases of the database. Since broker, server, and client processes all access
the same shared-memory pool, their shared-memory version numbers must agree. If an
error message states that shared-memory version numbers do not match, make sure that
the broker and clients are running the same OpenEdge release.
Number of semaphores used (UNIX only)
On UNIX systems, shows the number of semaphores preallocated for this database.
Because each process requires one semaphore, semaphores are preallocated based on the
number of processes expected to access the database.
Figure 19-8: Status display
Empty blocks
The number of empty blocks in the database. Empty blocks are created when the database
outgrows its existing block allocation, and also by record and index deletion. The database
engine expands the database by multiple blocks at a time (usually 8 or 16), and most of
these are empty at first. Typically, the number of empty blocks increases and decreases as
the database grows.
Record blocks with free space
Free space is created when records are deleted and when the database grows into a new
block, but does not use the entire block. The database engine reuses empty space with new
records when possible, but does not return empty space to the operating system. This space
does not affect database performance. However, if you must reclaim this disk space for
your system, you can dump and reload the database.
Time since last truncate bi
The time in seconds since the before-image file was truncated. When the BI file is
truncated, all modified database blocks are written to disk.
Disconnect a User
Unconditional Shutdown
Emergency Shutdown (Kill All)
Exit
Enter choice>
Figure 19-9: Shut Down Database menu
Disconnect a User
Disconnects a single user from the database specified when PROMON was invoked.
PROMON prompts you to enter the user number of the user you want to disconnect.
(PROMON displays the user number of active users above this submenu.) Enter the user
number, then press RETURN.
Unconditional Shutdown
Shuts down the database in an orderly fashion. When you choose Unconditional
Shutdown, PROMON begins the shutdown when you press RETURN. PROMON does not
prompt you to confirm the shutdown.
Emergency Shutdown (Kill All)
Shuts down the database without performing cleanup and kills all processes connected to
the database. Before performing the shutdown, PROMON displays the following message
prompting you to confirm the shutdown:
If you enter anything other than y or yes, PROMON cancels the shutdown and returns to
the PROMON utility main menu.
At the Enter your selection prompt, enter R&D. The R&D main menu appears:
09/19/03
OpenEdge Release 10 Monitor (R&D)
14:50
Main (Top) Menu
1. Status Displays ...
2. Activity Displays ...
3. Other Displays ...
4. Administrative Functions ...
5. Adjust Monitor Options
Enter a number, <return>, P, T, or X (? for help):
Status Displays - Shows information about the current state of the database and its
users.

Activity Displays - Shows information about database activity in the recent past.
Activity displays show the total number of each type of operation for the sample
period, the number of operations per minute, the number of operations per second,
and the number of operations per transaction.

Other Displays - Shows information that does not fit into either the Status or
Activity categories.

Adjust Monitor Options - Lets you change the way the monitor behaves.
Database
Backup
Servers
Processes/Clients
Files
Lock Table
Buffer Cache
Logging Summary
BI Log
AI Log
Two-phase Commit
Startup Parameters
Shared Resources
AI Extents
Servers by broker
Database
Displays general database status information. Figure 19-10 shows a sample Database Status
display.
Figure 19-10: Database Status display (fields include the dates the database was opened,
the database state, damaged and integrity flags, the cache file time stamp, block counts,
and version numbers)
Database state - The current operating mode of the database. Table 19-7 describes the
possible states.
Table 19-7: Database states

State  Description
Open.
Recovery.
Index repair.
Restore.
Database damaged flags - The state of the database. Table 19-8 lists the possible flags.
Table 19-8: Database damaged flags

State             Description
None              Normal.
Opened with -F
Crashed with -i
Crashed with -r
Integrity flags - The integrity mode in which the database is running. Table 19-9 lists
the possible flags.
Table 19-9: Integrity flags

State              Description
None               Normal.
Executing with -i
Executing with -r
Most recent database open - The date and time when the broker for this database was
started.

Previous database open - The date and time when the database was previously started.

Local cache file time stamp - The time stamp used to check the validity of the schema
cache file. See the description of the Schema Cache File (-cache) parameter in Chapter 18,
"Database Startup Parameters," for more information.

Number of blocks allocated - The total number of blocks allocated to the database.

RM Blocks with free space - The total number of blocks in the RM chain.

Highest table number defined - The number of tables defined in the database.

Database version number - Identifies the structures of all database-related data that is
resident on disk. This number is used to match different executable versions with the
correct database structures.

Shared memory version number - Identifies the version of all data structures that are
resident in shared memory and used by the database manager. This number is used to
ensure that all executables accessing shared memory agree on what data is kept in shared
memory and how that data is accessed. The shared-memory version number changes more
frequently than the database version number.
Backup
Displays information about database backups. Figure 19-11 shows a sample Backup Status
display.
10/18/06        Status: Backup
16:07

Most recent full backup: 10/04/06 11:52
Most recent incremental backup: Never
Database changed since backup: Yes
Sequence of last incremental: 0

Figure 19-11: Backup Status display
Most recent full backup - The date of the most recent full backup.

Most recent incremental backup - The date of the most recent incremental backup.

Database changed since backup - Yes, if the database has been modified since the last
backup. No, if the database has not changed.

Sequence of last incremental - The count of incremental backups made since the last
full backup.
Servers
Displays status information about OpenEdge servers running on the system.
Figure 19-12 shows a sample Servers Status display.
Figure 19-12: Servers Status display (columns: Sv No, Pid, Type, Protocol, Logins,
Pend. Users, Cur. Users, Max. Users, Port Num)
The display contains the following fields:
Type - The server type. Table 19-10 describes the possible types.

Table 19-10: Server types

State     Description
Broker
Auto
Manual
Login
Inactive
Protocol - The communications protocol used by the server and its clients. TCP is the
only valid value for this field.

Pend. Users - The number of users who have successfully connected to the broker, but
have not completed a connection to a server.

Max. Users - The maximum number of users who can connect to a server.
Processes/Clients
Displays status information about OpenEdge processes. The menu options allow you to choose
all processes or a subset. Depending on your selection, the status display will contain a set of
the columns described in Table 19-11.
Table 19-11: Processes/Clients status display columns

Column name    Description
Usr
Name
Type
Wait           The wait type. Table 19-2 describes the possible types. If no value
               is displayed, the process is not waiting on any database-related
               events.
Login Time
Start Time
Trans id
Tx Start Time
Trans State    The state of the transaction. Table 19-4 lists the possible states.
Pid
Serv
For certain wait values, an additional field is displayed. Table 19-12 lists the possible values.

Table 19-12: Additional information

Wait  Additional field
REC   rowid
SCH   12 share lock; 10 exclusive lock
TRAN  taskid
RGET  rowid
BKSH  dbkey
BKEX  dbkey
BIRD  dbkey (bi)
BIWR  dbkey (bi)
AIRD  dbkey (ai)
AIWR  dbkey (ai)
TXS   1 TXE share; 2 TXE update; 3 TXE commit
TXB   1 TXE share; 2 TXE update; 3 TXE commit
TXE   1 TXE share; 2 TXE update; 3 TXE commit
TXX   1 TXE share; 2 TXE update; 3 TXE commit
All Processes - Displays status information about OpenEdge processes. Figure 19-13
shows a sample Processes/Clients Status display.
Figure 19-13: Processes/Clients Status display (columns: Usr, Name, Type, Wait,
Trans id, Login time)
Blocked Clients - Displays clients that are waiting for a database-related resource.
Figure 19-14 shows a sample Blocked Clients Status display.
Figure 19-14: Blocked Clients Status display (columns: Usr, Name, Type, Wait,
Trans id, Login time)
Figure 19-15: Status display showing active transactions (columns: Usr, Name, Type,
Login time, Tx Start Time, Trans id, Trans State)
Local Interactive Clients - Displays status information about local interactive client
processes. Figure 19-16 shows a sample Local Interactive Clients Status display.
Figure 19-16: Local Interactive Clients Status display (columns: Usr, Name, Type, Wait,
Trans id, Login time, Pid)
Local Batch Clients - Displays status information about local batch client processes.
Figure 19-17 shows a sample Local Batch Clients Status display.
Figure 19-17: Local Batch Clients Status display (columns: Usr, Name, Type, Wait,
Trans id, Login time, Pid)
Figure 19-18: Status display for remote clients (columns: Usr, Name, Type, Wait,
Trans id, Login time, Serv)

Figure 19-19: Status display for background processes (columns: Usr, Name, Type,
Start time, Pid)
Files
Displays status information about OpenEdge database files. Figure 19-20 shows a sample Files
Status display.
09/28/06        Status: Files
12:15:05

File name                    Size (KB)  Extend (KB)
/dev/dbs/docsample.db              632          512
/dev/dbs/docsample.b1             2168          512
/dev/dbs/docsample.d1             1656          512
/dev/dbs/docsample_7.d1            248          512
/dev/dbs/docsample_8.d1           1144          512
/dev/dbs/docsample_9.d1            248          512
/dev/dbs/docsample_10.d1          2040          512
/dev/dbs/docsample_11.d1          2040          512
/dev/dbs/docsample.a1              504            0
/dev/dbs/docsample.a2              504            0
/dev/dbs/docsample.a3              120          512
/dev/dbs/docsample_1000.d1         632            0
/dev/dbs/docsample_1000.d2         120          512
/dev/dbs/docsample_2000.d1         632            0
/dev/dbs/docsample_2000.d2         120          512
The display contains the following fields:

File name - The name of the file, including the path specification.
Lock Table
Displays status information about the lock table. See Chapter 13, "Managing Performance," for
a description of the lock table and its functions. When you choose this menu option, PROMON
gives you three choices:
Figure 19-21: Lock Table Status display (columns: Usr Name, RECID, Flags, Trans State;
sample rows show users 24 docusr1, 10 docusr2, and 23 dba holding X and S locks on
RECIDs 385 and 12108, with transaction states Begin and Dead)
Usr - The user number of the user who owns the lock entry.
Flag  Description
S     Share lock
X     Exclusive lock
Q     Request is queued, and the user is waiting for a conflicting lock held
      by another user
P     Purged
H     Hold
Buffer Cache
Displays status information about buffers. Figure 19-22 shows a sample Buffer Cache Status
display.
Figure 19-22: Buffer Cache Status display (fields: Total buffers, Hash Table Size,
Used Buffers, Empty Buffers, On lru Chain, On apw queue, On ckp queue,
Modified buffers, Marked for ckp, Last checkpoint number)
On lru Chain - The number of buffers on the least recently used (LRU) chain.
Marked for ckp - The number of buffers currently marked for checkpoint.

Last checkpoint number - The most recent checkpoint number. As checkpoints begin,
they are assigned a sequential number from the start of the session. The number is also the
number of checkpoints that have occurred.
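The LRU chain this display reports on can be sketched with a minimal model. This Python sketch is illustrative, not the engine's buffer manager; the eviction shown is textbook LRU behavior:

```python
# Sketch of an LRU buffer chain: most recently used blocks stay cached,
# and the least recently used block is evicted when the pool is full.
from collections import OrderedDict

class BufferPool:
    def __init__(self, total_buffers):
        self.total = total_buffers
        self.lru = OrderedDict()          # dbkey -> block, oldest first
        self.hits = self.misses = 0

    def fetch(self, dbkey):
        if dbkey in self.lru:
            self.hits += 1
            self.lru.move_to_end(dbkey)   # mark as most recently used
        else:
            self.misses += 1              # would be a disk read
            if len(self.lru) >= self.total:
                self.lru.popitem(last=False)   # evict the LRU victim
            self.lru[dbkey] = f"block {dbkey}"
        return self.lru[dbkey]

pool = BufferPool(total_buffers=3)
for dbkey in [1, 2, 3, 1, 4]:    # fetching 4 evicts block 2, the least recent
    pool.fetch(dbkey)
print(sorted(pool.lru), pool.hits, pool.misses)   # [1, 3, 4] 1 4
```

The hit/miss counters in the sketch correspond to the buffer-hit percentage discussed elsewhere in this chapter: a larger -B value keeps more blocks on the chain and converts misses into hits.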
Logging Summary
Displays status information for the database logging function. Figure 19-23 shows a sample
Logging Summary Status display.
09/25/06
16:22

Crash protection: Yes
Delayed Commit: 3 seconds
Before-image I/O: Reliable
Before-image cluster age time: 0
BI Writer status: Executing
Two-Phase Commit: *** OFF ***
After-image journalling: Yes
After-image I/O: Reliable
AI Writer status: Not executing

Figure 19-23: Logging Summary Status display
Delayed Commit - The current value of the Delayed BI File Write (-Mf) parameter.

Before-image I/O - BUFFERED if buffered I/O is being used for BI writes. Buffered I/O
is not recommended.
Before-image cluster age time - The period of time that must pass before a BI cluster
is reused. This period ensures that database blocks flushed at checkpoint are moved from
the UNIX buffers to disk. When this occurs, the transaction is recorded on disk.
After-image I/O - BUFFERED if buffered I/O is being used for AI writes. Buffered I/O
is not recommended.
BI Log
Displays status information for before-image logging. Figure 19-24 shows a sample BI Log
Status display.
Figure 19-24: BI Log Status display (fields include the before-image cluster age time,
block size, cluster size, log size, and bytes free in the current cluster)
Before-image cluster age time - The period of time that must pass before a BI cluster
is reused. This period ensures that database blocks flushed at checkpoint are moved from
the buffers to disk. When this occurs, the transaction is durably recorded on disk.

Before-image log size (kb) - The size of the BI log file, in kilobytes.

Bytes free in current cluster - The number of free bytes remaining in the current BI
cluster.
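The cluster age rule above can be sketched as follows. This is an illustrative Python model; the age interval, timestamps, and function names are assumptions made for the sketch:

```python
# Sketch of the cluster age-time rule: a BI cluster becomes reusable
# only after the age interval has passed since it was filled. The
# structure and timings are illustrative, not the engine's.

AGE_TIME = 60.0   # seconds, as in the sample display

def reusable_clusters(filled_at, now):
    """Return indexes of clusters old enough to be overwritten."""
    return [i for i, t in enumerate(filled_at)
            if t is not None and now - t >= AGE_TIME]

# Clusters 0 and 1 were filled earlier; cluster 2 is still current.
filled_at = [100.0, 150.0, None]

print(reusable_clusters(filled_at, now=190.0))   # [0]  (190 - 100 >= 60)
print(reusable_clusters(filled_at, now=215.0))   # [0, 1]
```

Until a cluster ages past the interval, the engine must either wait or grow the BI file, which is why a long age time trades space for the durability guarantee the text describes.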
AI Log
Displays status information for after-image logging, if after-imaging is enabled. Figure 19-25
shows a sample AI Log Status display.
Figure 19-25: AI Log Status display (fields include the after-image begin, new, and open
dates, the current after-image extent, the number of AI buffers, the after-image block
size, and the after-image log size)
After-image begin date - The date of the last AIMAGE BEGIN command.

After-image new date - The date of the last AIMAGE NEW command.

After-image open date - The date when the AI log was last opened.

Current after-image extent - The extent number of the current (busy) extent.

Number of AI buffers - The number of buffers in the AI buffer pool. You can change
the number of AI buffers with the After-image Buffers (-aibufs) startup parameter. The
default value is 1.

After-image block size - The size of the after-image block. You can change the AI block
size to reduce I/O rates on disks where AI files are located. See Chapter 13,
"Managing Performance," for more information.
Two-phase Commit
Displays status information about two-phase commit, if enabled. Figure 19-26 shows a sample
Two-Phase Commit Status display.
09/25/03
17:02:19

Coordinator nickname: mydemo
Coordinator priority: 0
After-image journalling: Yes

Figure 19-26: Two-Phase Commit Status display
Coordinator nickname - The nickname used to identify the coordinator database. You
specify a nickname when you enable two-phase commit.

Coordinator priority - The priority for the coordinator database. You specify the
priority when you enable two-phase commit.
Startup Parameters
Displays values of OpenEdge database startup parameters. Figure 19-27 shows a sample
Startup Parameters Status display.
10/13/05
16:23

Maximum clients: 21
Maximum servers: 4
Maximum clients per server: 5
Lock table size: 8192 entries
Database buffers: 168 (672 kb)
APW queue check time: 100 milliseconds
APW scan time: 1 seconds
APW buffers to scan: 1
APW max writes/scan: 25
Spinlock tries before timeout: 6000
Before-image buffers: 5 (5 kb)
After-image buffers: 5 (5 kb)
Max number of JTA transactions: 0

Figure 19-27: Startup Parameters Status display
Maximum clients per server - The value of the Maximum Clients per Server (-Ma)
parameter.

Lock table size - The value of the Lock Table Entries (-L) parameter.

Database buffers - The value of the Blocks in Database Buffers (-B) parameter.

APW queue check time - The value of the Page Writer Queue Delay (-pwqdelay)
parameter.

APW scan time - The value of the Page Writer Scan Delay (-pwsdelay) parameter.

APW buffers to scan - The value of the Page Writer Scan (-pwscan) parameter.

APW max writes/scan - The value of the Page Writer Maximum Buffers (-pwwmax)
parameter.

Spinlock tries before timeout - The number of times a process will spin (-spin) trying
to acquire a resource before waiting.

Max number of JTA transactions - The value of the Maximum JTA Transaction Ids
(-maxxids) parameter.
Shared Resources
Displays status information about shared database resources. Figure 19-28 shows a sample
Shared Resources Status display.
09/25/03
16:24

Active transactions: 1
Lock table entries in use: 8 of 8192
Lock table high water mark: 9
Number of servers: 0 (5 allocated)
Total users: 1 (20 allocated)
Self service: 1
Remote: 0
Batch: 0
Watchdog status: *** Not executing ***
BIW status: Executing
AIW status: *** Not executing ***
Number of page writers: 1
Number of monitors: 1
Number of semaphores allocated: 29
Shared memory allocated: 1408 K (1 segments. The last segment was not locked in memory)

Figure 19-28: Shared Resources Status display
Lock table entries in use - The number of lock table entries that are currently being
used, and the total number of entries available.

Lock table high water mark - The maximum number of lock table entries that were in
use simultaneously.

Total users - The number of users, broken down by the following types: Self service,
Remote, and Batch.

BIW status - Executing, if the BIW is executing. Not executing, if the BIW is not
executing.

AIW status - Executing, if the AIW is executing. Not executing, if the AIW is not
executing.

Shared memory allocated - The amount of shared memory allocated, in kilobytes and
in segments. See Chapter 13, "Managing Performance," for more information on
shared-memory allocation.
09/20/06
13:17:44

Seg  Id     Size       Used       Free
     91242  858972160  849424036  9548124
The display contains the following fields:
Seg - The segment number. Each segment receives a sequential number when it is
created.

Used - The amount of shared memory used from the segment, in bytes.

Free - The amount of shared memory allocated but not yet used in the segment, in bytes.
AI Extents
Displays the status of the database AI extents, if after-imaging is enabled. Figure 19-30 shows
a sample AI Extents status display.
09/15/05        Status: AI Extents
11:42:07

Area  Status  Type  File Number  Size (KBytes)  Extent Name
  13  BUSY    Fix             1            505  /usr1/101A/doc_sample.a1
  14  EMPTY   Fix             0            505  /usr1/101A/doc_sample.a2
  15  EMPTY   Var             0            121  /usr1/101A/doc_sample.a3
Area - The area number of the extent. Each AI extent has a unique area number.

Status - The status of the extent; possible values are: BUSY, EMPTY, FULL, LOCKED,
and ARCHIVED.

Extent Name - The exact file name the operating system uses to reference this extent.
(Sample status display: columns include Rdy, Status, Messages, and Locked by, with
sample statuses BUSYNP and REG)
Total Message Entries - The total number of messages that can be queued.
Table 19-14 lists the possible values for Status.

Table 19-14: Status values

Value   Description
RUN     Running.
ACT     Active.
INACT   Inactive.
BUSY    Busy.
REG     Registered.
PREREG  Preregistered.
UNK     Unknown.
Servers by broker
Displays the number of servers for each broker.
The display uses the same columns as the Servers Status display: Sv No, Pid, Type,
Protocol, Logins, Pend. Users, Cur. Users, Max. Users, and Port Num.

Type - The process type; Table 19-1 lists the possible types.
Activate For All Users - Activates Client Database-Request Statement Caching for all
clients. For more information, see the "Activating database-request statement caching for
all users" section on page 19-48.

Activate For All Future Users - Activates Client Database-Request Statement Caching
for all future client connections to the database. For more information, see the "Activating
database-request statement caching for all future users" section on page 19-49.

Deactivate For All Users - Deactivates Client Database-Request Statement Caching for
all clients.

Specify Directory for Statement Cache Files - Specify the directory for all temporary
database-request statement cache files. For more information, see the "Specifying a
directory for database-request statement caching files" section on page 19-53.
Enter the type of statement caching to activate (1-Single, 2-Stack, 3-One Time
Request, Q-Quit):

Single - Only the current ABL program and line number, or a single SQL statement, is
reported by the ABL client.

Stack - The current ABL program and line number. This option displays up to 31 prior
ABL program names and line numbers, or a single SQL statement.

One Time Request - The ABL or SQL client reports tracing information once. Once
reported, database-request statement caching is turned off.

Quit - Quit this function and return to the Client Database-Request Statement Caching
menu.
If you choose option 2 to activate the Stack type of statement caching for an individual user,
the following menu appears:
Enter the type of statement caching to activate (1-Single, 2-Stack, 3-One Time
Request, Q-Quit): 2
12/18/07
12:23:14

User  Name    Type  Login time      Serv
   5  User-5  SELF  12/18/07 12:23     0
   6  User-6  SELF  12/18/07 12:24     0
   7  User-7  SELF  12/18/07 12:24     0
   8  User-8  SELF  12/18/07 12:24     0
If you select Single, Stack, or One Time Request for the type of statement caching to activate,
the list of database connections displays:
Remote database servers, followed by the remote connections they are servicing
Database-request statement caching is activated if the user number is valid, or if the user
is specified in the list. If the user is a self-serve or remote client, database-request
statement caching is activated for that client. If the user number represents a remote
database server, database-request statement caching is activated for all clients serviced by
that database server.
Note: Once activated, database-request statement caching cannot be cancelled. You must
deactivate it for selected users, all users, or all future users.
The user number entry is terminated when you press ENTER without specifying a valid
numeric character, or when you press ENTER after entering a non-numeric character.
If you enter a valid numeric string, you can continue entering valid user numbers.
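The entry rules above can be sketched as a simple input loop. This Python sketch is illustrative; the function name and input handling are assumptions, not PROMON's own prompt logic:

```python
# Sketch of the entry rule: keep accepting user numbers until ENTER on
# an empty line or a non-numeric entry terminates the list.

def collect_user_numbers(entries):
    """entries simulates successive responses to the prompt."""
    selected = []
    for entry in entries:
        entry = entry.strip()
        if not entry.isdigit():      # empty line or non-numeric: stop
            break
        selected.append(int(entry))  # valid number: keep prompting
    return selected

print(collect_user_numbers(["5", "7", ""]))       # [5, 7]
print(collect_user_numbers(["5", "q", "6"]))      # [5]
```

Note that a non-numeric entry ends the whole list: the "6" after "q" in the second example is never reached, matching the termination rule described above.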
Enter the type of statement caching to activate (1-Single, 2-Stack, 3-One Time
Request, Q-Quit):

Single - Only the current ABL program and line number, or a single SQL statement, is
reported by the ABL client.

Stack - The current ABL program and line number. This option displays up to 31 prior
ABL program names and line numbers, or a single SQL statement.

One Time Request - The ABL or SQL client reports tracing information once. Once
reported, database-request statement caching is turned off.

Quit - Quit this function and return to the Client Database-Request Statement Caching
menu.
If you choose option 2 to activate the Stack type of statement caching for all clients, the
following menu appears:
12/18/07
12:53:58
Single - Only the current ABL program and line number, or a single SQL statement, is
reported by the ABL client.

Stack - The current ABL program and line number. This option displays up to 31 prior
ABL program names and line numbers, or a single SQL statement.

Quit - Quit this function and return to the Client Database-Request Statement Caching
menu.
If you choose option 2 to activate the Stack type of statement caching for all future clients, the
following menu appears:
12/18/07
12:58:12
Once the type of database-request statement caching is selected, tracing is activated for all
future users. An informational message indicates if database-request statement caching was
successful, or if caching failed.
Note:
If you choose option 4 to deactivate database-request statement caching for selected users, the
following menu appears:
12/18/07
13:12:30

User  Name    Type  Login time      Serv
   5  User-5  SELF  12/18/07 13:33     0
   6  User-6  SELF  12/18/07 13:34     0
   7  User-7  SELF  12/18/07 13:34     0
   8  User-8  SELF  12/18/07 13:34     0
In the Deactivate For Selected Users window the list of database connections displays:
Remote database servers, followed by the remote connections they are servicing
Once the user information is displayed, you can deactivate database-request statement caching
for selected users by specifying the corresponding user number. Based on the user number, the
following actions are performed:
Database-request statement caching is deactivated if the user number is valid, and if the
user is specified in the list. If the user is a self-serve or remote client, database-request
statement caching is deactivated for that client. If the user number represents a remote
database server, database-request statement caching is deactivated for all clients serviced
by that database server.
The user number entry is terminated when you press ENTER without specifying a valid
numeric character, or when you press ENTER after entering a non-numeric character.
If you enter a valid numeric string, you can continue entering valid user numbers.
(Sample display: columns User, Name, Remote Address, Login time, Serv, Type,
Cache Update, and IPV#; users 5 through 8 show L1 cache entries updated
12/18/07 13:35)
12/18/07
13:39:02

User number:     6
User name:       User-6
User type:       SELF
Login date/time: 12/18/07 13:34

Statement cache: 102:/bin/read_all_records_again.p_01234_.p
Field  Description
User Number
User Name
User Type
Login date/time
Statement cache
Enter the name of the directory. The directory name cannot exceed 255 bytes in length and
must be created prior to accessing this option.
Note:
If the directory name is not specified, and the database-request statement cache
information is greater than 256 bytes, the information is written to a file in the current
working directory.
Summary
Servers
Buffer Cache
Page Writers
BI Log
AI Log
Lock Table
Space Allocation
Index
Record
Other
Table 19-16 describes the actions PROMON takes, based on the input received.

Table 19-16:

If you type . . .   PROMON will . . .
ENTER
Summary
Displays general information about database activity. Figure 19-31 shows a sample Summary
Activity display.
09/25/03        Activity: Summary
16:27           from 09/25/03 15:31 to 09/25/03 16:04 (2 min. 48 sec.)

Event            Total  Per Sec.   |Event              Total  Per Sec.
Commits           1208       0.7   |DB Reads              57       0.0
Undos                0       0.0   |DB Writes              3       0.0
Record Reads      1334       0.8   |BI Reads              13       0.0
Record Updates    1208       0.7   |BI Writes            131       0.0
Record Creates       0       0.0   |AI Writes              0       0.0
Record Deletes       0       0.0   |Checkpoints            0       0.0
Record Locks      3625       2.3   |Flushed at chkpt       0       0.0
Record Waits         0       0.0   |

BI Buf Waits   0%    Writes by BIW  96%    BI size:      512K
AI Buf Waits   0%    Writes by AIW   0%    AI Size:        0K
Free blocks:   0     RM chain:       1     Active trans:   1

Figure 19-31: PROMON Summary Activity display
The Summary Activity display shows the number of events that have occurred on the system,
including the cumulative total and the number of events per second. The display includes the
following events:
Record Waits - The number of times users have waited to access a locked record.

Flushed at chkpt - The number of database buffers that have been flushed to disk
because they were not written by the time the checkpoint ended.
Rec Lock Waits - The percentage of record accesses that result in record lock waits. A
record lock wait occurs when the database engine must wait to access a locked record.
For optimum performance, try to keep this number as low as possible. You can lower this
number by modifying your application to perform shorter transactions so that record locks
are held for shorter periods of time.
BI Buf Waits - The percentage of BI buffer waits. A BI buffer wait occurs when the
database engine must wait to access a BI buffer.
For optimum performance, try to keep this number as low as possible. To decrease this
number, allocate more BI buffers with the Before-image Buffers (-bibufs) parameter.
AI Buf Waits The percentage of AI buffer waits. An AI buffer wait occurs when the
database engine must wait to access an AI buffer.
For optimum performance, try to keep this number as low as possible. To decrease this
number, allocate more AI buffers with the After-image Buffers (-aibufs) parameter.
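These wait percentages all have the same shape: waits divided by total accesses. A minimal illustrative sketch in Python (the helper name is ours, not part of PROMON):

```python
def wait_pct(waits: int, accesses: int) -> float:
    """Percentage of accesses that had to wait, e.g. record lock waits
    as a fraction of record locks. Returns 0.0 when there were no accesses."""
    if accesses == 0:
        return 0.0
    return 100.0 * waits / accesses

# Using the sample Summary Activity figures: 0 record waits out of 3625 record locks
print(wait_pct(0, 3625))   # 0.0
```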
Writes by APW: The percentage of database blocks written to disk by the APW; this is a percentage of the total number of database blocks written. For optimum performance, try to keep this number as high as possible. To increase this number, start more APWs and increase the cluster size.

Writes by BIW: The percentage of BI blocks written to disk by the BIW; this is a percentage of the total number of BI blocks written to disk. For optimum performance, try to keep this percentage fairly high. You can increase the number of BI buffers with the -bibufs parameter.

Writes by AIW: The percentage of AI blocks written to disk by the AIW; this is a percentage of the total number of AI blocks written. For optimum performance, try to keep this percentage fairly high. You can increase the number of AI buffers with the -aibufs parameter.

Free blocks: The number of blocks on the database's free chain. The free chain is a chain of previously used and then deallocated database blocks.

RM chain: The number of blocks on the database's RM chain. The RM chain is a chain of partially filled database blocks.

Buffer hits: The percentage of buffer hits. A buffer hit occurs when the database engine locates a database record in the buffer pool and does not have to read the record from disk. For optimum performance, keep this number as high as possible. To increase this number, allocate more buffers with the -B parameter. Increase the number of buffers until the number of I/Os per second is reduced to less than half the capacity of the drives.
The last line of the Summary Activity display summarizes the current number of each type of process running against the database at the time you run the Activity option. Table 19-17 defines the process types.

Table 19-17: Process types

Field          Description
Servers
Clients
APWs
Servers
Displays information about server activity. Figure 19-32 shows a sample Servers Activity display. A separate page is displayed for each server.

09/25/03           Activity: Servers
16:28              from 09/25/03 13:56 to 09/25/03 13:57 (19 sec)

Event               Total   Per Min   Per Sec   Per Tx
Messages received     204       131      2.19     0.00
Messages sent         152        98      1.60     0.00
Bytes received      13252      8549    142.49     0.00
Bytes sent          40683     26247    437.45     0.00
Records received        0         0      0.00     0.00
Records sent          122        79      1.31     0.00
Queries received       54        35      0.58     0.00
Time slices            51        32      0.54     0.00
Time slices: The number of query time slice switches. A time slice switch occurs when the server moves to processing the next client without completing the query.
Buffer Cache
Displays activity information about the database buffer cache (also called the buffer pool). Figure 19-33 shows a sample Buffer Cache Activity display.

09/25/03           Activity: Buffer Cache
16:28

Event                   Per Min   Per Sec   Per Tx
Logical Reads                73      1.21     0.00
Logical Writes                0      0.00     0.00
O/S reads                    25      0.42     0.00
O/S writes                    6      0.10     0.00
Checkpoints                   0      0.00     0.00
Marked at checkpoint          0      0.00     0.00
Flushed at checkpoint         0      0.00     0.00
Writes deferred               0      0.00     0.00
LRU skips                     0      0.00     0.00
LRU writes                    0      0.00     0.00
APW Enqueues                  0      0.00     0.00

Hit Ratio: 68 %
Logical Reads: The number of client requests for database block read operations.

Logical Writes: The number of client requests for database block write operations.

Marked at checkpoint: The number of blocks scheduled to be written before the end of a checkpoint.

Flushed at checkpoint: The number of blocks that were not written during the checkpoint and that had to be written all at once at the end of the checkpoint.

Writes deferred: The total number of changes to blocks that occurred before the blocks were written. Each deferred write is potentially an I/O operation saved.
LRU skips: The number of times a buffer on the LRU chain was skipped because it was locked or modified.

LRU writes: The number of blocks written to free a buffer for a read operation.

APW Enqueues: The number of modified buffers placed on the APW queue for writing.

Hit Ratio: The percentage of buffer cache requests that did not require a physical disk I/O operation.
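One common way to approximate the hit ratio from these counters is the share of logical reads that needed no O/S read; a sketch (an approximation, not necessarily the engine's exact formula):

```python
def hit_ratio(logical_reads: int, os_reads: int) -> float:
    """Approximate buffer cache hit ratio: the share of logical reads
    that did not require a physical (O/S) read."""
    if logical_reads == 0:
        return 0.0
    return 100.0 * (logical_reads - os_reads) / logical_reads

# Per-minute figures from the sample display: 73 logical reads, 25 O/S reads
print(round(hit_ratio(73, 25)))   # 66, close to the reported 68 %
```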
Page Writers
Displays information about asynchronous page writer (APW) activity. Figure 19-34 shows a sample Page Writers Activity display.

09/26/03           Activity: Page Writers
16:29

Event                    Total   Per Min   Per Sec   Per Tx
Total DB writes              3         0      0.00     0.00
APW DB writes                0         0      0.00     0.00
  scan writes                8        25      0.42     0.00
  APW queue writes           2         6      0.10     0.00
  ckp queue writes           0         0      0.00     0.00
  scan cycles                0         0      0.00     0.00
  buffers scanned            0         0      0.00     0.00
  bufs checkpointed        173         0      0.11     0.14
Checkpoints               8211         0      5.22     6.79
Marked at checkpoint         0         0      0.00     0.00
Flushed at checkpoint        0         0      0.00     0.00

Number of APWs:
Total DB writes: The total number of database write operations performed by all processes.

APW DB writes: The number of database write operations performed by the APW. This is a subset of the total number of DB writes:

  Scan writes: The number of buffers written during the scan cycle.

  APW queue writes: The number of buffers written to clear the APW queue.

  Ckp queue writes: The number of buffers written from the checkpoint queue.

  Scan cycles: The number of scan cycles. During a scan cycle, the APW scans a portion of the buffer pool to look for modified buffers.
Marked at checkpoint: The number of buffers that were scheduled to be written before the end of the checkpoint.

Flushed at checkpoint: The number of blocks that were not written during the checkpoint and had to be written all at once at the end of the checkpoint.
BI Log
Displays information about BI log activity. Figure 19-35 shows a sample BI Log Activity display.

09/28/06           Activity: BI Log
15:22:38           09/28/06 10:50 to 09/28/06 15:22 (4 hrs 32 min)

Event                  Total   Per Min   Per Sec      Per Tx
Total BI writes         1517         6      0.09      758.50
BIW BI writes            956         4      0.06      478.00
Records written       116270       428      7.13    58135.00
Bytes written        9996664     36768    612.80  4998332.00
Total BI Reads           386         1      0.02      193.00
Records read           26923        99      1.65    13461.50
Bytes read           2330572      8572    142.87  1165286.00
Clusters closed           19         0      0.00        9.50
Busy buffer waits        588         2      0.04      294.00
Empty buffer waits        29         0      0.00       14.50
Log force waits            0         0      0.00        0.00
Log force writes           0         0      0.00        0.00
Partial writes           222         1      0.01      111.00
Input buffer hits        388         1      0.02      194.00
Output buffer hits       135         0      0.01       67.50
Mod buffer hits          431         2      0.03      215.50
BO buffer hits         53169       196      3.26    26584.50
BIW BI writes: The number of writes to the BI file performed by the before-image writer (BIW). For good performance, this number should be high in relation to the total number of BI writes.

Total BI Reads: The number of BI blocks read from the BI file (to undo transactions).

Records read: The number of BI records (notes) read from the BI file.

Bytes read: The amount of data read from the BI file, in bytes.
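The BIW's share of BI writes can be computed directly from the first two counters; a small sketch (our helper, not a PROMON function):

```python
def writer_share_pct(writer_writes: int, total_writes: int) -> float:
    """Share of BI (or AI) writes performed by the background writer.
    A high share means the background writer is keeping up."""
    if total_writes == 0:
        return 0.0
    return 100.0 * writer_writes / total_writes

# Sample BI Log display: 956 BIW writes out of 1517 total BI writes
print(round(writer_share_pct(956, 1517)))   # 63
```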
Clusters closed: The number of BI clusters filled and closed in preparation for reuse.

Busy buffer waits: The number of times a process had to wait for another process to finish before being able to write to a BI file.

Empty buffer waits: The number of times a process had to wait because all buffers were full.

Partial writes: The number of writes to the BI file made before the BI buffer is full. This might happen if:

- The Delayed BI File Write (-Mf) parameter timer expired before the buffer was filled.

- An APW attempts to write a database block whose changes are recorded in a BI buffer that has not been written. Because BI notes must be flushed to disk before the corresponding database block can be written, the BI buffer is written before it is full so that the APW can perform the database write.

- An AIW ran ahead of the BIW. Because BI notes must be flushed before the corresponding AI notes can be written, the AIW writes the BI buffer before it is full, so it can do the AI write.

Input buffer hits: The number of times a note needed to back out a transaction was found in the Input BI buffer. The Input BI buffer is a shared buffer that contains the BI block needed for a transaction backout if the note could not be found in the Output BI buffer or the Mod BI buffers. The Input BI buffer is read back into memory from the BI log.

Output buffer hits: The number of times a note needed to back out a transaction was found in the Output BI buffer. The Output BI buffer is a shared buffer that contains the BI notes currently being written to disk.

Mod buffer hits: The number of times a note needed to back out a transaction was found in the Mod BI buffers. The Mod BI buffers are shared buffers that are being updated with notes for currently executing transactions. The Mod BI buffers have not been written to disk.

BO buffer hits: The number of times a note needed to back out a transaction was found in a private BO (back out) buffer. Back out buffers are private buffers allocated, one per user, after transaction backout processing twice locates the required note in the Input BI buffer. BO buffers reduce contention for the Input BI buffer when more than one user is simultaneously backing out transactions.
AI Log
Displays after-imaging activity. Figure 19-36 shows a sample AI Log Activity display.

09/13/03           Activity: AI Log
11:36:56           from 09/12/03 13:56 to 09/13/03 11:23 (21 hrs 27 min)

Event               Total   Per Min   Per Sec   Per Tx
Total AI writes       131         0      0.08     0.10
AIW AI writes         127         0      0.08     0.10
Records written      3630        25      2.31     3.00
Bytes written      129487         0     82.42   107.19
Busy buffer waits      13         0      0.00     0.01
Buffer not avail        0         0      0.00     0.00
Partial Writes          4         0      0.00     0.00
Log force waits         0         0      0.00     0.00
AIW AI writes: The number of AI writes performed by the after-image writer (AIW). This is a subset of the total AI writes.

Busy buffer waits: The number of times a process had to wait because a buffer was held.

Buffer not avail: The total number of times a process had to wait because a buffer was not available.

Partial writes: The number of writes to the AI file made before the AI buffer is full. This might happen if:

- The Delayed BI File Write (-Mf) parameter timer expired before the buffer was filled

- An APW attempts to write a block whose changes are recorded in an AI buffer that has not been written
Lock Table
Displays information about Lock Table activity. The database engine stores record locks in the lock table. If a user tries to acquire a lock on a record and the lock table overflows, the server aborts.

Figure 19-37 shows a sample Lock Table Activity display.
04/29/04           Activity: Lock Table
15:11:21

                       Total   Per Min   Per Sec   Per Tx
Requests:
  Share                 3715       710     11.83    61.92
  Intent Share             0         0      0.00     0.00
  Exclusive                8         2      0.03     0.13
  Intent Exclusive         0         0      0.00     0.00
  Share Intent Excl        0         0      0.00     0.00
  Upgrade                  0         0      0.00     0.00
  Record Get Lock       3505       670     11.16    58.42
  Table Lock              20         4      0.06     0.33
  Record Lock            198        38      0.63     3.30
Grants:
  Share                 3715       710     11.83    61.92
  Intent Share             0         0      0.00     0.00
  Exclusive                8         2      0.03     0.13
  Intent Exclusive         0         0      0.00     0.00
  Share Intent Excl        0         0      0.00     0.00
  Upgrade                  0         0      0.00     0.00
  Record Get Lock       3505       670     11.16    58.42
  Table Lock              20         4      0.06     0.33
  Record Lock            198        38      0.63     3.30
Waits:
  Share                    0         0      0.00     0.00
  Intent Share             0         0      0.00     0.00
  Exclusive                0         0      0.00     0.00
  Intent Exclusive         0         0      0.00     0.00
  Share Intent Excl        0         0      0.00     0.00
  Upgrade                  0         0      0.00     0.00
  Record Get Lock          0         0      0.00     0.00
  Table Lock               0         0      0.00     0.00
  Record Lock              0         0      0.00     0.00
Requests Cancelled         0         0      0.00     0.00
Downgrades                 0         0      0.00     0.00
Shared Find                0         0      0.00     0.00
Exclusive Find             0         0      0.00     0.00
The display lists the following operations:

Requests:

  Intent Share: The number of user requests for an intent share lock.

  Intent Exclusive: The number of user requests for an intent exclusive lock.

  Share Intent Exclusive: The number of user requests for a share lock with intent exclusive.

  Upgrade: The number of user requests to upgrade a lock from a share lock to an exclusive lock.

  Record Get Lock: The number of user requests for a record get lock.

Grants:

  Share Intent Exclusive: The number of granted share lock with intent exclusive requests.

  Record Get Lock: The number of granted requests for record get locks.

Waits:

  Intent Share: The number of times users waited for an intent share lock.

  Intent Exclusive: The number of times users waited for an intent exclusive lock.

  Share Intent Exclusive: The number of times users waited for a share lock with intent exclusive.

  Record Get Lock: The number of times users waited for a record get lock.

  Table Lock: The number of times users waited for a table lock.
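From the Requests and Grants counters you can derive a grant rate per lock type; a minimal sketch (our helper, not part of PROMON):

```python
def grant_rate_pct(grants: int, requests: int) -> float:
    """Percentage of lock requests of a given type that were granted;
    a low rate indicates contention for that lock type."""
    if requests == 0:
        return 0.0
    return 100.0 * grants / requests

# Sample display: 3715 Share grants out of 3715 Share requests
print(grant_rate_pct(3715, 3715))   # 100.0
```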
09/25/03           Activity: I/O Operations by Type
16:31

Event             Total   Per Min   Per Sec   Per Tx
Database reads       57         0      0.03     0.04
  Index blocks       13         0      0.00     0.01
  Data blocks        44         0      0.02     0.03
BI reads             13         0      0.00     0.00
AI Reads              0         0      0.00     0.05
Total reads          70         0      0.04     0.05
Database writes       3         0      0.00     0.00
  Index blocks        0         0      0.00     0.00
  Data blocks         3         0      0.00     0.00
BI writes           131         0      0.08     0.10
AI writes             0         0      0.00     0.00
Total writes        134         0      0.08     0.11
10/18/06
12:30:13

[Sample display listing Reads, Writes, and Extends (with Total, Per Min, Per Sec, and Per Tx columns) for each database file: /scratch/shannon/101B/sep25.db, /scratch/shannon/101B/sep25.d1, /scratch/shannon/101B/sep25_7.d1, and /scratch/shannon/101B/sep25.b1.]
Space Allocation
Displays information about space allocation. Figure 19-40 shows a sample Space Allocation Activity display.

09/26/03           Activity: Space Allocation
14:52:41

Event                     Total   Per Min   Per Sec   Per Tx
Database extends             57         0      0.03     0.04
Take free block              13         0      0.00     0.01
Return free block            44         0      0.02     0.03
Alloc rm space               13         0      0.00     0.00
Alloc from rm                 0         0      0.00     0.05
Alloc from free              70         0      0.04     0.05
Bytes allocated               3         0      0.00     0.00
rm blocks examined            0         0      0.00     0.00
Remove from rm                3         0      0.00     0.00
Add to rm, front            131         0      0.08     0.10
Add to rm, back               0         0      0.00     0.00
Move rm front to back       134         0      0.08     0.11
Removed locked rm entry       0         0      0.00     0.00
Take free block: The number of times a block was used from the free chain.

Return free block: The number of times a block was returned to the free chain.

Alloc rm space: The number of times space was allocated for a record or record fragment.

Alloc from rm: The number of times space was allocated from the rm chain.

Alloc from free: The number of times space was allocated from the free chain.

rm blocks examined: The number of blocks examined in the rm chain while looking for space for a record fragment.

Add to rm, front: The number of blocks added to the front of the rm chain.

Add to rm, back: The number of blocks added to the back of the rm chain.

Move rm front to back: The number of blocks moved from the front to the back of the rm chain.

Removed locked rm entry: The number of rm chain entries that were removed because they were locked.
Index
Displays information about index activity. Figure 19-41 shows a sample Index Activity display.

09/14/03           Activity: Index
17:09:35           from 09/14/03 17:01 to 09/14/03 17:07 (6 min)

[Sample display listing index operation counters with Total, Per Min, Per Sec, and Per Tx columns.]
Find index entry: The number of times an index entry was looked up.

Remove locked entry: The number of old locks released at transaction end.
Record
Displays information about record activity. Figure 19-42 shows a sample Record Activity display.

09/14/03           Activity: Record
17:04:35           from 09/14/03 9:24 to 09/14/03 14:38 (5 hrs 14 min)

Event                 Total   Per Min   Per Sec     Per Tx
Read record            2374         0      0.13     791.33
Update record            18         0      0.00       6.00
Create record            10         0      0.00       3.33
Delete record             3         0      0.00       1.00
Fragments read         2610         8      0.14     870.00
Fragments created        24         0      0.00       8.00
Fragments deleted        17         0      0.00       5.67
Fragments updated         4         0      0.00       1.33
Bytes read           409795      1307      21.7   136598.33
Bytes created          1813         6      0.10     604.33
Bytes deleted          1310         4      0.07     436.67
Bytes updated          1042         3      0.06     347.33
Other
Displays information about miscellaneous activity. Figure 19-43 shows a sample Other Activity display.

09/14/03           Activity: Other
17:04:35           from 09/14/03 17:01 to 09/14/03 17:01 (10 sec)

Event                Total   Per Min   Per Sec   Per Tx
Commit                   3         0      0.00     1.00
Undo                     0         0      0.00     0.00
Wait on semaphore        0         0      0.00     0.00
Flush master block       3        18      0.30     1.00
Wait on semaphore: The number of times a process had to wait for a resource.

Flush master block: The number of times the database master block was written to disk.
Performance Indicators
Displays activity statistics related to performance. Figure 19-44 shows a sample Performance Indicators display.

09/19/03           Activity: Performance Indicators
17:04:35

Event                   Total   Per Min   Per Sec   Per Tx
Commits                     0         0      0.00     0.00
Undos                       0         0      0.00     0.00
Index Operations            0         0      0.00     0.00
Record Operations           3        18      0.30     0.00
Total o/s i/o              28         0      0.00     0.00
Total o/s reads            23         0      0.00     0.00
Total o/s writes            5         0      0.00     0.00
Background o/s writes       5         0      0.00     0.00
Partial log writes          1         0      0.00     0.00
Database extends            0         0      0.00     0.00
Total waits                 0         0      0.00     0.00
Lock waits                  0         0      0.00     0.00
Resource waits              0         0      0.00     0.00
Latch timeouts              0         0      0.00     0.00
Index Operations: The total of all operations performed on indexes (for example, index additions and deletions).
Record Operations: The total of all operations on records (for example, record additions and deletions).

Total o/s i/o: The total number of read and write operations performed.

Background o/s writes: The total number of writes performed by background writers (APWs, BIW, and AIW); see Chapter 13, Managing Performance, for more information on background writers.

Partial log writes: Writes to the BI file made before the BI buffer is full. This might happen if:

- The Delayed BI File Write (-Mf) parameter timer expired before the buffer was filled.

- An APW attempts to write a database block whose changes are recorded in a BI buffer that has not been written. Because BI notes must be flushed to disk before the corresponding database block can be written, the BI buffer is written before it is full so that the APW can perform the database write.

- An AIW ran ahead of the BIW. Because BI notes must be flushed before the corresponding AI notes can be written, the AIW writes the BI buffer before it is full, so it can do the AI write.

Database extends: The total number of times the database was made larger by allocating operating system blocks.

Lock waits: The number of times the database engine waited for a lock to be released.

Resource waits: The number of times the database engine waited for a resource, such as a row lock, a buffered lock in shared memory, or a transaction end lock, to become available.

Latch timeouts: The number of times a spinlock expired, causing the process to nap, when attempting to acquire a latch.

Buffer pool hit rate: The percentage of times that the database engine found a record in the buffer pool and did not have to read the record from disk.
09/25/03
16:19

[Sample display listing, for each user process (Usr, Name; for example, user 0 SYSTEM and user 5 rgw), the number of Database Access, Read, and Write operations, and the BI and AI Read and Write operations.]
Database Access: The number of database access operations performed by the process.

Database Read: The number of database read operations performed by the process.

Database Write: The number of database write operations performed by the process.
09/25/03
16:19

[Sample display listing record, transaction, and schema lock and wait statistics for each user (Usr Num, User Name).]
Record Waits: The number of times the user had to wait for a record lock.

Trans Locks: The number of times a deleted record is protected until transaction end.

Trans Waits: The number of times a user had to wait for a record marked for deletion by an active transaction.

Schema Waits: The number of times the user had to wait for a schema lock.
Checkpoints
Displays information about checkpoints. Figure 19-47 shows a sample Checkpoints display.

10/18/06
14:50:57

[Sample display listing, for each checkpoint (Ckpt No. 25 down to 18, with times from 14:50:51 back to 14:48:20), its Len, Freq, and Database Writes counts (CPT Q, Scan, APW Q, Flushes).]
Database Writes CPT Q: The number of blocks written from the checkpoint queue by the APWs.

Database Writes Scan: The number of blocks written by the APWs during the scan cycle.

Database Writes APW Q: The number of blocks written from the APW queue and replaced on the least recently used (LRU) chain by APWs.

Database Writes Flushes: The total number of blocks not written during the checkpoint that had to be written all at once at the end of the checkpoint.
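A simple per-checkpoint average of the Flushes column helps spot whether the APWs are keeping up; a sketch (our helper, with the interpretation that a rising value suggests adding APWs or increasing the BI cluster size):

```python
def avg_flushes_per_checkpoint(total_flushes: int, checkpoints: int) -> float:
    """Average number of buffers flushed at checkpoint end. Zero (or near
    zero) means the APWs wrote the scheduled buffers in time."""
    if checkpoints == 0:
        return 0.0
    return total_flushes / checkpoints

print(avg_flushes_per_checkpoint(6, 3))   # 2.0
```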
[Sample display listing the user's Reads, Updates, Creates, and Deletes for each table; all values are zero in this sample.]

Enter <return> for more, A, L, R, S, U, Z, P, T, or X (? for help):
Reads: The number of reads the user has performed on the table.

Updates: The number of updates the user has performed on the table.

Creates: The number of creates the user has performed on the table.

Deletes: The number of deletes the user has performed on the table.
[Sample display listing, for each index (numbered 1 through 32), the Reads, Creates, Deletes, Splits, and Block Dels performed by the user; all values are zero in this sample.]
This display shows the following information:
Reads: The number of times read access has occurred to the index by the user.

Creates: The number of times create access has occurred to the index by the user.

Deletes: The number of times delete access has occurred to the index by the user.

Splits: The number of split operations that have occurred to the index by the user.

Block dels: The number of block deletes that have occurred to the index by the user.

By default, the display shows the user activity for up to 100 indexes. The monitored indexes are selected by the startup parameters -baseindex and -indexrange. See Chapter 18, Database Startup Parameters, for more information on these parameters.
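The -baseindex/-indexrange pair selects a contiguous block of index numbers; a sketch of that selection (our helper, assuming the range is simply base through base + range - 1):

```python
def monitored_indexes(base_index: int, index_range: int) -> range:
    """Index numbers covered by -baseindex/-indexrange, assuming a
    contiguous block of index_range indexes starting at base_index."""
    return range(base_index, base_index + index_range)

print(list(monitored_indexes(1, 5)))   # [1, 2, 3, 4, 5]
```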
09/25/03
16:19:34

Usr   Name   Trans id   Trans State   Crd Nam   Crd Tx Id
  8   rjw         178   Active        -                 0
Trans State: The transaction's state. Table 19-4 lists possible transaction states.
Note: To commit transactions on a database that is shut down, you must use the 2PHASE
RECOVER qualifier of PROUTIL.
01/25/02
16:34:27

[Sample display for adjusting page writer options, showing the current APW settings (for example, APW queue check time: 100 milliseconds; APW buffers scan time: 1 second).]
APW queue check time: Lets you dynamically adjust the value of the Page Writer Queue Delay (-pwqdelay) parameter.

Note: Though you can set the value of -pwqdelay, it is a self-tuning parameter. When the database encounters heavy loads, -pwqdelay decreases its value so that the APW writes more often. When the demand on the database lessens, -pwqdelay increases its value so that the APW writes less often. The default value, in milliseconds, for -pwqdelay is 100.

APW buffers scan time: Lets you dynamically adjust the value of the Page Writer Scan Delay (-pwsdelay) parameter. The default value, in seconds, for -pwsdelay is 1.
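The self-tuning behavior described in the note can be pictured as follows. This is an illustrative sketch only, not the engine's actual algorithm; the threshold and step values are invented:

```python
def adjust_pwqdelay(delay_ms: int, queue_depth: int,
                    busy_threshold: int = 100,
                    min_ms: int = 10, max_ms: int = 1000) -> int:
    """Shrink the APW queue delay under heavy load (deep APW queue) so the
    APW writes more often; grow it when the queue is shallow. Illustrative
    only; the real -pwqdelay tuning logic is internal to the engine."""
    if queue_depth > busy_threshold:
        return max(min_ms, delay_ms // 2)   # heavy load: write more often
    return min(max_ms, delay_ms * 2)        # light load: write less often

print(adjust_pwqdelay(100, 500))   # 50
print(adjust_pwqdelay(100, 5))     # 200
```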
APW buffers per scan: Lets you dynamically adjust the value of the Page Writer Scan (-pwscan) parameter.

APW writes per scan: Lets you dynamically adjust the value of the Page Writer Maximum Buffers (-pwwmax) parameter.
Restricted Options
This option can be used only under the supervision of Progress Technical Support. Selecting this option prompts you for a restricted option key, which Technical Support will provide if necessary.
Terminate a Server
Lets you select a server for termination. Figure 19-54 shows the Terminate a Server display.

01/20/05
15:38:45

[Sample display listing, for each server, its Sv No, Pid, Type (Login or Inactive), Protocol (TCP), Logins, Pend. Users, Cur. Users, Max. Users, and Port Num.]
Memory overwrite checking is performed on record and index write operations only.

Index and record block operations are checked prior to before-image note writing and the subsequent database block write. The length of the intended operation is checked to prevent operations that exceed the database block size.

Index block splits are checked at strategic points in an index block split operation.

Index insert operations are checked at strategic points during the insertion of a key element.
12/10/07
10:19:51

-MemCheck:    disabled
-DbCheck:     disabled
-AreaCheck:   disabled
-IndexCheck:  disabled
-TableCheck:  disabled
-DbCheck: Enables the database consistency check. When this option is enabled, the consistency check is applied to all index blocks and record blocks (except BLOB blocks) in the entire database.

-AreaCheck: Enables the area consistency check. When this option is enabled, the consistency check is applied to all index blocks and record blocks (except BLOB blocks) in the specified area.

-IndexCheck: Enables the index consistency check. When this option is enabled, the consistency check is applied to all index blocks of the specified index. This check is for index blocks only.

-TableCheck: Enables the table consistency check. When this option is enabled, the consistency check is applied to all record blocks (except BLOB blocks) of the specified table. This check is for record blocks only.
The following precedence rules apply when enabling multiple parameters at the same time:

- Each option can be enabled only once. For example, the following combination is invalid: -AreaCheck "Customer Area" -AreaCheck "Order Area".

- If the -DbCheck option is used with any other options, -DbCheck takes precedence and the consistency check is applied to the whole database.

- -AreaCheck <area name>, -IndexCheck <index name>, and -TableCheck <table name> can be used at the same time. The consistency check is enabled for the superset of these options.
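The precedence rules above can be summarized in a short sketch (our helper; only the option names come from the parameters described above):

```python
def effective_checks(dbcheck=False, areas=(), indexes=(), tables=()):
    """Resolve which consistency checks apply: -DbCheck overrides all other
    options; otherwise area/index/table checks combine as a superset."""
    if dbcheck:
        # -DbCheck wins: the whole database is checked, other options ignored
        return {"database": True, "areas": (), "indexes": (), "tables": ()}
    return {"database": False, "areas": tuple(areas),
            "indexes": tuple(indexes), "tables": tuple(tables)}

print(effective_checks(dbcheck=True, areas=("Customer Area",))["database"])  # True
```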
01/25/02
16:19:34

Page size:                     24 lines
Clear screen for first page:   Yes
Monitor sampling interval:     10 seconds
Pause between displays:        10 seconds
Pause between screens:         5 seconds
Number of auto repeats:        10
Change working area:           All areas
Clear screen for first page: If Yes, the system clears the screen before the main menu and before every list.

Monitor sampling interval: The interval at which PROMON samples data, in seconds.

Pause between displays: The delay between pages of output when in continuous monitoring mode, in seconds. (Type u for the Continue uninterrupted option.)

Pause between screens: The delay after the last page of output when in continuous monitoring mode, in seconds.

Number of auto repeats: The number of times the display is repeated when auto repeat is selected on the activity displays.

Change working area: Displays the area being monitored. By default, PROMON monitors all areas.

To change a monitor option value, enter the option number, and then enter the new value as prompted.
Transaction Control:

Usr  Name    Trans   Login      Time    R-comm?  Limbo?  Crd?  Coord    Crd-task
  2  paul      757   07/25/01   10:15   yes      yes     no    sports1       760

Figure 19-57: Sample output for PROMON 2PC Transaction Control option
Note: If you run PROMON against a database where no limbo transaction has occurred, PROMON does not display any field information on the Transaction Control screen.
Usr: The transaction number on the machine where you are running PROMON.

Login: The date the user running the distributed transaction began the session.

Time: The time the user running the distributed transaction began the session.

R-comm?: The state of the transaction. If this field is yes, the transaction is in a ready-to-commit state. This means that the first phase of two-phase commit is complete.

Limbo?: Whether a transaction is a limbo transaction. If one or more limbo transactions occur, the Limbo field displays a yes for each transaction listed. A yes in this field means that you must resolve transactions.
Crd?: Whether the database you are running PROMON against is the coordinator.

Coord: The name of the coordinator database.
Figure 19-58: Sample output for PROMON Resolve 2PC Limbo Transactions option
Abort a Limbo Transaction

When you choose Abort a Limbo Transaction, PROMON prompts you to enter the user number of the transaction you want to abort. (PROMON displays the user number in the Usr column of the 2PC Transaction Control screen.) Enter the user number, then press RETURN.

Commit a Limbo Transaction

When you choose Commit a Limbo Transaction, PROMON prompts you to enter the user number of the transaction you want to commit. (PROMON displays the user number in the Usr column of the 2PC Transaction Control screen.) Enter the user number, then press RETURN. PROMON displays a message similar to the following:
Note: To commit transactions on a database that is shut down, you must use the 2PHASE RECOVER qualifier of PROUTIL.
If the coordinator database is shut down and you cannot run PROMON against it, you
must use the 2PHASE COMMIT qualifier of PROUTIL to determine whether it
committed the transaction.
When you choose Display all JTA Transactions, PROMON displays the following
information about all the active JTA transactions: internal transaction id, user id, JTA
state, and external JTA transaction id.
Rollback a JTA Transaction

When you choose Rollback a JTA Transaction, PROMON prompts you for the id of the transaction to roll back and requires you to confirm your request before rolling back the transaction.

Caution: Rolling back a JTA transaction independent of the external transaction manager can compromise referential integrity. Choose this option only when your JTA transaction manager has encountered a non-recoverable failure.
Commit a JTA Transaction

When you choose Commit a JTA Transaction, PROMON prompts you for the id of the transaction to commit and requires you to confirm your request before committing the transaction.

Caution: Committing a JTA transaction independent of the external transaction manager can compromise referential integrity. Choose this option only when your JTA transaction manager has encountered a non-recoverable failure.
Page size:                     24
Clear screen for first page:   Yes
Short pause after each page:   4
Long pause after last page:    8
Monitor Sampling Interval:     30 sec.
APW queue delay:               500 ms.
APW queue start:               1
APW scan delay:                3 sec.
APW scan count:                9
APW write limit:               5
BIW scan delay:                0
Group commit delay:            10
Clear screen for first page: If Yes, the system clears the screen before the main menu and before every list.

Short pause after each page: The delay in seconds between pages of output when in continuous monitoring mode. (Enter u for the Continue Uninterrupted option.)

Long pause after last page: The delay after the last page of output when in continuous monitoring mode.

APW queue delay: The value of the Page Writer Queue Delay (-pwqdelay) startup parameter. -pwqdelay is a self-tuning parameter. When the database encounters heavy loads, -pwqdelay decreases its value so that the APW writes more often. When the demand on the database lessens, -pwqdelay increases its value so that the APW writes less often. The default value, in milliseconds, for -pwqdelay is 100.

APW queue start: The value of the Page Writer Queue Minimum (-pwqmin) startup parameter. Do not change this value unless directed to do so by Technical Support.

APW scan delay: The value of the Page Writer Scan Delay (-pwsdelay) startup parameter. The default value, in seconds, of -pwsdelay is 1.
APW scan count: The value of the Page Writer Scan (-pwscan) parameter. Do not change this value unless directed to do so by Technical Support.

APW write limit: The value of the Page Writer Maximum Buffers (-pwwmax) parameter. Do not change this value unless directed to do so by Technical Support.

BIW scan delay: The value of the BI Writer Scan Delay startup parameter (-bwdelay). The default value, in seconds, is 0.

Group commit delay: The value of the Group Delay startup parameter (-groupdelay). The default value, in seconds, is 10.
20 PROUTIL Utility

This chapter describes the OpenEdge database administration utility PROUTIL.
Parameters

db-name: Specifies the database you are using.

qualifier: Specifies the qualifier that you want to use. You can supply the qualifiers described in Table 20-1.

Note: PROUTIL and its qualifiers support the use of internationalization startup parameters such as -cpinternal codepage and -cpstream codepage. See Chapter 18, Database Startup Parameters, for a description of each database-related internationalization startup parameter.
Table 20-1 lists all of the PROUTIL qualifiers. See the appropriate section for each
qualifier for more detailed information.
Table 20-1: PROUTIL utility qualifiers (1 of 4)

Qualifier  Description
2PHASE BEGIN
2PHASE COMMIT
2PHASE END
2PHASE MODIFY
2PHASE RECOVER
AUDITARCHIVE
AUDITLOAD
BIGROW
BULKLOAD
BUSY
CHANALYS
CODEPAGE- COMPILER
CONV910
CONVCHAR
CONVFILE
DBANALYS
DBAUTHKEY
DBIPCS
DESCRIBE
DISABLEAUDITING
DISABLEJTA
DISABLEKEYEVENTS
DISPTOSSCREATELIMITS  Display the block toss and create limits for all tables and
BLOBs in a specified area
DUMP
DUMPSPECIFIED
ENABLEAUDITING
ENABLEJTA
ENABLEKEYEVENTS
ENABLELARGEFILES
ENABLELARGEKEYS
ENABLESEQ64
ENABLESTOREDPROC
HOLDER
IDXACTIVATE
IDXANALYS
IDXCOMPACT
IDXMOVE
IDXBUILD
IDXCHECK
IDXFIX
INCREASETO
IOSTATS
LOAD
MVSCH
RCODEKEY
REVERT
SETAREACREATELIMIT
SETAREATOSSLIMIT
SETBLOBCREATELIMIT
SETBLOBTOSSLIMIT
SETTABLECREATELIMIT
SETTABLETOSSLIMIT
TABANALYS
TABLEMOVE
TRUNCATE AREA
TRUNCATE BI
UPDATESCHEMA
UPDATEVST
WBREAK- COMPILER
WORD-RULES
Parameters

db-name

-crd

Specifies that the database receive top priority when PROUTIL assigns a coordinator
database. If you do not specify a database, PROUTIL randomly assigns a coordinator
database from the available databases.
Specifying a single coordinator database can greatly simplify the process of resolving
limbo transactions because PROUTIL does not have to consult several different
coordinating databases.
-tp nickname
Identifies a unique nickname or alternate name that PROUTIL uses to identify the
coordinator database. If you do not specify a nickname, PROUTIL automatically assigns
the name of the database without the .db extension as the nickname.
Specifying a nickname simplifies resolving limbo transactions when two databases have
the same pathname but are on different machines.
Notes
You cannot use the PROUTIL 2PHASE BEGIN qualifier against a database that is online.
A TL (Transaction Log) area is required to hold the transaction log data generated when
two-phase commit is in use. You must create and maintain a TL area for your database in
order to use two-phase commit.
Parameters

db-name

transaction-number

Specifies the number of the transaction where you want to check the commit status.
If the coordinator committed the transaction, PROUTIL displays a message similar to the
following:
Once you determine that the coordinator database committed the transaction, you must also
commit the transaction on the database where the limbo transaction occurred. If the coordinator
did not commit the transaction, you must terminate the transaction on the database where the
limbo transaction occurred.
Parameters
db-name
Parameters

db-name

-crd

Determines whether or not the database can serve as a coordinator database. If you specify
-crd against a database that is a candidate for coordinator database, it is no longer a
candidate. If you specify -crd against a database that is not a candidate, it becomes one.
-tp nickname
Identifies a new nickname for the coordinator database. Specifying a nickname simplifies
resolving limbo transactions when two databases have the same pathname but are on
different machines.
Parameters
db-name
If you type y, PROUTIL commits the transaction. If you type n, PROUTIL aborts the
transaction.
Syntax

proutil db-name -C auditarchive [ date-range ] [ -recs num-recs ] [ -nodelete ] [ -checkseal ]
[ -directory directory-name ] [ -userid user-id [ -password passwd ] ]

UNIX
Windows

Parameters

db-name
date-range

When specified, date-range limits the archived records as described in the following
table (condensed; the original table lists entries for Blank and One date ranges):

-nodelete

Specifies that AUDITARCHIVE copy the audit data records to the archive file, leaving
the audit data in the database. If not specified, AUDITARCHIVE deletes the audit data
records as they are archived.
-checkseal

Specifies that AUDITARCHIVE verify that the seal of each audit data record matches the
database MAC key prior to writing it to the audit archive file. This option requires that the
audit policies in effect when writing the audit data records have specified a Data Security
Level of DB Passkey. For more information on creating audit policies and data security,
see OpenEdge Getting Started: Core Business Services.
-directory directory-name
Specifies the directory where the audit archive file is written. If you do not specify a value
for directory-name, the file is written to your current working directory. If an audit data
archive file for your database already exists in the output directory, you are prompted to
confirm if you wish to proceed and overwrite the existing file, or exit the utility.
If you are purging your audit data and do not want the data written to the archive file, you
must explicitly specify the bit bucket (/dev/null) as the output destination in
directory-name.
-userid username
Specifies the user name of the user privileged to execute the archive. If you are using the
_User table, username must be defined there, and you must include the -userid parameter
to identify the user. If you are not using the _User table, use of -userid is optional; in this
scenario, username specifies a local operating system login ID. Regardless of the source,
username must have the Audit Data Archiver privilege in order to execute
AUDITARCHIVE.
-password passwd
Specifies the password for username. If you are using the _User table, passwd is the
password defined for the user there. The value of passwd can be clear text or encrypted. If
you are not using the _User table, and have specified a local operating system ID with
-userid, passwd must be the encrypted value of the DB Pass key. See OpenEdge Getting
Started: Core Business Services or the Data Administration online Help for information
on the DB Pass Key. For information on encrypting a password, see the description of the
genpassword utility in OpenEdge Getting Started: Installation and Configuration.
If you do not supply a password corresponding to username, the utility will prompt for it.
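The purge-to-/dev/null case described under -directory can be sketched as follows. The database path is an assumption, and the command only actually runs where proutil is installed:

```shell
# Build the AUDITARCHIVE purge command: sending the archive to /dev/null
# deletes the audit data without keeping an archive copy.
db=/dev/dbs/mydb   # assumed database path
cmd="proutil $db -C auditarchive -directory /dev/null"
echo "$cmd"
# Run it only where proutil is actually installed:
command -v proutil >/dev/null 2>&1 && $cmd || true
```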
AUDITARCHIVE archives audit data from the _aud-audit-data and
_aud-audit-data-value tables. It also copies data from the _db-detail, _client-session,
and _aud-event tables. The data archived is written to one file, named db-name.abd, so that the
data from all the tables is treated as a unit. The file is sealed with a time stamp and the MAC
seal currently stored in the database.
Archiving audit data is an auditable event. If your active audit policy is configured to audit the
execution of AUDITARCHIVE, then every time it runs an audit event will be generated. The
audit data record that audits the archive event will not be deleted from the database when
AUDITARCHIVE executes.
If an archive file already exists in the specified output directory, AUDITARCHIVE does
not overwrite it automatically. AUDITARCHIVE issues a warning message and provides
you with the choice of overwriting the existing file or exiting.

If your archive file reaches the maximum file size for your operating system, a second file,
db-name2.abd, will be written.
Syntax

proutil db-name -C auditload archive-file-name [ -checkseal ] [ -userid username [ -password passwd ] ]

UNIX
Windows

Parameters

db-name

-checkseal

Specifies that AUDITLOAD verify that the seal of each audit data record matches the
database MAC key prior to loading it into the archive database. This option requires that
the audit policies in effect when the audit data records were generated, specified a Data
Security Level of DB Passkey. For more information on creating audit policies and data
security, see OpenEdge Getting Started: Core Business Services. During the
AUDITLOAD, the _db-detail record containing the database MAC key is loaded into
the archive database before any of the audit data records, making it available for the
verification process.
-userid username
Specifies the user name of the user privileged to execute the load of the audit archive. If
you are using the _User table, username must be defined there, and you must include the
-userid parameter to identify the user. If you are not using the _User table, use of -userid
is optional; in this scenario, username specifies a local operating system login ID.
Regardless of the source, username must have the Audit Data Archiver privilege in order
to execute AUDITLOAD.
-password passwd

Specifies the password for username. If you are using the _User table, passwd is the
password defined for the user there. The value of passwd can be clear text or encrypted. If
you are not using the _User table and have specified a local operating system ID with
-userid, passwd must be the encrypted value of the DB Pass key. See OpenEdge Getting
Started: Core Business Services or the Data Administration online Help for information
on the DB Pass Key. For information on encrypting a password, see the description of the
genpassword utility in OpenEdge Getting Started: Installation and Configuration.
If you do not supply a password corresponding to username, the utility will prompt for it.
Prior to beginning the load, AUDITLOAD verifies the data seal of the archive file. If
AUDITLOAD encounters a duplicate record, it is ignored without error.
Loading archived audit data is an auditable event. If the active audit policy on the archive
database is configured to audit AUDITLOAD, then every time it runs, an audit event will be
generated.
Notes
Multiple instances of AUDITLOAD can simultaneously run against the same database.
AUDITLOAD generates and updates index information for each record at the time it is
loaded.
Parameters
-r
db-name
You can calculate the number of BI clusters for a database by dividing the BI file physical
size by the BI cluster size. For example, a database BI file with a BI cluster size of 128K
and a physical size of 917,504 has 7 BI clusters.
By default, four BI clusters are created at startup. Any BI clusters specified by n are added
to the original four.
See Chapter 13, Managing Performance, for information about using the BIGROW
qualifier.
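The cluster-count calculation above can be checked with simple integer arithmetic; a minimal sketch using the example's own numbers:

```shell
# BI clusters = BI file physical size / BI cluster size
bi_size=917504                  # physical size of the BI file, in bytes
cluster_size=$((128 * 1024))    # 128K BI cluster size
clusters=$((bi_size / cluster_size))
echo "BI clusters: $clusters"   # 917504 / 131072 = 7
```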
The bulk loader works only with OpenEdge databases. Some non-OpenEdge databases
offer a similar bulk-loading tool. If such a tool is not available, use the standard load
option in the Data Administration tool or Data Dictionary.
Syntax

proutil db-name [ -yy n ] -C BULKLOAD fd-file [ -B n ]

Parameters

db-name

-yy n

Century Year Offset startup parameter; n specifies a four-digit year (1900, for example)
that determines the start of a 100-year period in which any two-digit DATE value is defined.
The default is 1950, but the -yy n value must match the -yy n value used when the data
was dumped.
fd-file
You must create the bulk loader description file and load the data definitions (.df) files for
the database before you can run the Bulk Loader utility. See Chapter 15, Dumping and
Loading, for information about creating the bulk loader description file and loading data
definitions files.
You cannot use BULKLOAD to load protected audit data. For more information on
auditing and utility security and restrictions, see the Auditing impact on database
utilities section on page 916.
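How the -yy value defines the 100-year window can be sketched as follows. This mapping logic is an illustration of the rule, not OpenEdge code:

```shell
# With -yy 1950, two-digit years map into the window 1950-2049.
yy=1950          # Century Year Offset
two_digit=49     # a two-digit DATE year from the dumped data
century_base=$(( yy / 100 * 100 ))
year=$(( century_base + two_digit ))
# If the candidate falls before the window start, push it up a century.
[ "$year" -lt "$yy" ] && year=$(( year + 100 ))
echo "$year"     # 49 falls in the window as 2049; 50 would map to 1950
```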
Parameters
db-name
Return code  Description
64           Database is in use

Example

This example shows how you might use the BUSY qualifier on UNIX in a script that tests
whether the database is busy:
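A minimal sketch of such a script (the database path is an assumption, and where proutil is not installed the return code is simulated):

```shell
#!/bin/sh
# Test whether the database is busy before doing maintenance work.
DB=${DB:-/usr/wrk/sports2000}   # assumed database path

if command -v proutil >/dev/null 2>&1; then
    proutil "$DB" -C busy >/dev/null 2>&1
    rc=$?
else
    rc=64   # simulate the "Database is in use" return code
fi

if [ "$rc" -eq 64 ]; then
    msg="database in use: try again later"
else
    msg="database not busy: proceeding"
fi
echo "$msg"
```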
Parameters
db-name
Parameters
inputfile
Specifies the name of the text conversion map file. This file must follow a specific format.
outputfile
A conversion map file is a code page description file that contains various tables of information
and uses a specific format. For more information about conversion map files, see OpenEdge
Development: Internationalizing Applications.
Parameters
db-name
Disable after-imaging.
There is always a chance that your schema could become corrupt during conversion. If the
conversion fails, your database cannot be recovered. If this happens, revert to the backup
copy of the Version 9 database and begin the conversion again.
Syntax

proutil db-name -C convchar [ analyze | charscan | convert ] codepage [ character-list ]

Parameters

db-name

analyze

Scans the database and displays the fields that would be converted using the convert
function.
charscan
Searches every character field for the occurrence of any character from the provided
character list and reports the table name, field name, record ID of a match, and the total
number of occurrences. In addition, PROUTIL CONVCHAR CHARSCAN performs the
same analysis that PROUTIL CONVCHAR ANALYZE performs.
If invalid data is entered with the charscan option, PROUTIL CONVCHAR generates an
error message and continues with the scan.
convert
Converts a database's character data to the target code page and labels the database.
Note: PROUTIL CONVCHAR CONVERT does not convert DBCODEPAGE CLOBs
(character large objects) to the new code page; instead, it changes DBCODEPAGE
CLOBs to COLUMN-CODEPAGE CLOBs with their existing settings.
codepage
Specifies the value of any single-byte, double-byte, or Unicode code page. Possible values
are undefined or any (target) code page name for a conversion table in your conversion
map file (by default, OpenEdge-install-dir/convmap.cp). If you specify a code page
name, the PROUTIL CONVCHAR utility must find the appropriate conversion table in
the conversion map file. If you specify undefined, no conversions take place and the
database's code page name is changed to undefined.
Note also that hex and decimal values can be mixed. If a range contains valid and invalid
characters, PROUTIL CONVCHAR ignores the invalid characters. For example:

proutil sports2000 -C convchar charscan 1253 "49854 - 50050"
If charscan is selected but no character list is specified, the analyze function runs instead.
When converting a database's character set, PROUTIL CONVCHAR converts all of the textual
data in the database. For more information about character set processing, see OpenEdge
Development: Internationalizing Applications.
Syntax

proutil -C convfile file-name { convert using table-name | analyze }

Parameters

file-name

Specifies the name of the text file to convert or analyze.

table-name

Specifies the name of the file containing the conversion table. This file requires the
following format:
Lines beginning with the # character are comments, and cells can carry a range
annotation, for example: /*240-255*/ 240

The following conversion tables are supplied: 1250-852.dat, 1250-il2.dat,
1254-857.dat, 852-1250.dat, 852-il2.dat, 857-1254.dat, cn850.dat, cn8859-1.dat,
cnnull.dat (performs no conversion; you might want to use this file as a template for
creating new conversion tables), il2-1250.dat, and il2-852.dat.
If you create your own conversion table, you must understand how PROUTIL
CONVFILE uses the table. For each character in file-name, PROUTIL CONVFILE uses
the character's numeric value to index it against the table. When it locates the
corresponding cell, PROUTIL CONVFILE replaces the character in the text file with the
character value in that cell. Therefore, you must make a copy of the text file before you
run the PROUTIL CONVFILE utility, because if the utility does not completely convert
the file, the data in it is corrupted.
analyze
Displays the number of occurrences of each character in file-name. You might be able to
determine the character set of file-name by comparing the output of this option against
different code pages.
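The table-driven, per-character replacement described above behaves much like the standard tr utility on single-byte data; a small illustration (file paths are arbitrary):

```shell
# Map each 'a' to 'A', the way a one-cell conversion table would
# replace character 0x61 with 0x41.
printf 'abc' > /tmp/convfile_in.txt
tr 'a' 'A' < /tmp/convfile_in.txt > /tmp/convfile_out.txt
cat /tmp/convfile_out.txt   # Abc
```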
Parameters
db-name
(Sample CHANALYS output, condensed: the combined report lists each chain's size and
its percent of the total, for example 42.8K at about 1%, 269.4K at about 11%, and 87.6K
at about 3%. Sizes are keyed as B = bytes, K = kilobytes, M = megabytes, G = gigabytes.)
Parameters
db-name
You use the DBAUTHKEY qualifier to set, change, and remove authorization keys. When
you compile source code, the value of this key is included in your r-code. CRC-based
r-code that does not include the correct authorization key will not run against your
database. The following table lists the values you must enter for each task:
Task             old-key value  new-key value
Set the key      +              Authorization key
Change the key   Current key    New authorization key
Remove the key   Current key    +
Once you set the authorization key, do not forget it. You cannot change or remove the
authorization key without knowing its current value.
After you set, change, or remove the authorization key for a database, you can use the
RCODEKEY qualifier to update your r-code without recompiling it.
Parameters
db-name
(Sample PROUTIL DESCRIBE output, condensed: the display shows the database
/dev/dbs/source, database version 150.0, block size 8192, cluster size 64, and creation
and last-open timestamps from July 2006, followed by the Database Features table
shown in Figure 20-1.)

Figure 20-1: Sample PROUTIL DESCRIBE output

Database Features

ID   Feature                Active   Details
---  --------------------   ------   ---------------
1    OpenEdge Replication   Yes      Source database
9    64 Bit DBKEYS          Yes
10   Large Keys             Yes
11   64 Bit Sequences       Yes
     Failover Clusters      Yes
The largest cluster size, in blocks, for all of the database data areas.
Create Date
Block Size (in units of 16K)
Last OpenDate: The date and time the before-image log was last opened
Block Size
Begin Date
Last AIMAGE NEW: The date and time the last RFUTIL AIMAGE NEW command was executed
Nickname
Coordinator Priority
Backup Information
Database Features
ID
Feature
Active: Is the feature active? Yes/No. Features, such as Auditing, can be enabled,
but not active.
Details: Provides special information pertinent to the feature. For example, for
OpenEdge Replication, the database type is listed.
Database features

(Table condensed: ID and Feature columns. Features listed include OpenEdge
Replication, Database Auditing, JTA, AI Management, 64-bit DB Keys, 64-bit
Sequences (ID 11), and Failover Clusters (ID 12).)
Example
ID
Indicates the shared-memory segment number. One database can own more than one
segment.
InUse
Specifies whether the segment is in use. Yes or No values are displayed only if the segment
is number zero (0). All other segments show a hyphen (-). To determine whether a set of
segments is in use, check the InUse value of segment 0 for the relevant database.
Database
The PROMON Activity screen shows the amount of shared memory used by OpenEdge
processes and the number of shared-memory segments.
On UNIX, use ipcs -m to determine the size and number of shared-memory segments in
use. Subtract from SHMALL to see how many shared-memory segments are left. If the
broker dies or is killed by means other than the database shutdown command, it does not
release attached shared-memory segments. Use PROUTIL DBIPCS to verify shared
memory is frozen; use the UNIX command ipcrm -m to free it.
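A sketch of the cleanup sequence described in this note (the segment ID is a made-up example; always confirm with PROUTIL DBIPCS before removing anything):

```shell
# After PROUTIL DBIPCS confirms a segment is orphaned, remove it with ipcrm.
seg_id=4521                  # hypothetical shared-memory segment ID from ipcs -m
cmd="ipcrm -m $seg_id"
echo "Would run: $cmd"
# Uncomment to actually remove the segment (requires ownership or root):
# $cmd
```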
Parameters
[ -userid username [ -password passwd ] ]

db-name

-userid username

Identifies a user with the Audit Administrator privilege required to execute this utility.

-password passwd

Specifies the password for username.

If the audit data tables are not empty, the following messages are displayed:
Auditing was not fully disabled because auditing data tables are not empty.
(13647)
Auditing has been deactivated, no additional auditing records will be
recorded. (13649)
When the audit data tables are empty, and auditing is successfully disabled, the following
message is displayed:
For more information on auditing states, see the Auditing states section on page 96.
Notes
Only users with the Audit Administrator privilege are allowed to execute this utility.
Disabling auditing does not remove any recorded audit data or the auditing tables.
Access to the audit data remains restricted to authorized users when auditing is disabled.
Parameters
db-name
PROUTIL DISABLEKEYEVENTS
Disables the storage of key events for the database.
Syntax
proutil db-name -C disablekeyevents
Parameters
db-name
Events in the _KeyEvt table are not deleted when PROUTIL DISABLEKEYEVENTS is
executed.
Parameters
db-name
The number of the area to have toss and create limits displayed.
Note
The database must be offline when displaying create and toss limits.
Syntax

proutil db-name -C dump [owner-name.]table-name directory [ -index num ]
[ -thread n [ -threadnum nthreads ] [ -dumplist dumpfile ] ]

Parameters

db-name
Specifies the database where the dump will occur. If the database is not within the current
working directory, you need to define the complete path.
owner-name
Specifies the owner of the table containing the data you want to dump. You must specify
an owner name unless the table's name is unique within the database, or the table is owned
by PUB. By default, ABL tables are owned by PUB.
table-name
Specifies the name of the table containing the data you want to dump.
directory
Specifies the name of the target directory where the data will be dumped.
-index num
Specifies the index to use to dump the table's contents. Note that word indexes are not
allowed. If you choose not to use this option, the command uses the primary index to dump
the table.
-thread n
For databases with an Enterprise license, indicates whether an online dump is threaded.
Specify zero (0) for a single-threaded dump, or one (1) for a threaded dump.
-threadnum nthreads
For a threaded dump, specify the maximum number of threads to create. The default value
is the number of system CPUs. The actual number of threads created may be less than
nthreads. PROUTIL DUMP determines the number of threads to create based on the
complexity of the index the DUMP follows.
-dumplist dumpfile
For a threaded dump, create a file, dumpfile, that lists all the files created by this utility.
The file, dumpfile, can be used as an input parameter to binary load (PROUTIL LOAD).
PROUTIL DUMP writes data from a table to a dump file or files. When the procedure finishes,
it reports the number of records written to the dump file. For a threaded dump, the number of
records written by each thread is also reported.
See Chapter 15, Dumping and Loading, for more information about the DUMP
qualifier.
The PROUTIL DUMP and LOAD utilities use cyclic redundancy check (CRC) values to
establish the criteria for loading.
The OpenEdge database provides a flexible storage architecture and the ability to relocate
objects, such as tables and indexes, while the database remains online. As a result, when
you perform a binary load operation, the table numbers in a binary dump file might not
match the table numbers in the target database. Therefore, when you perform a binary load
operation, the criteria for loading tables is based solely on cyclic redundancy check (CRC)
values, and not table numbers.
For example, when you dump a table, the PROUTIL utility calculates a CRC value for the
table and stores it in the header of the binary dump file. When you load the table,
PROUTIL matches the CRC value stored in the header with the CRC value of the target
table. The values must match or the load is rejected.
You can load a binary dump file created with a previous version of the PROUTIL DUMP
utility, because the current version of PROUTIL LOAD uses the CRC value established
when the file was originally dumped. Consequently, the database maintains backwards
compatibility.
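The CRC matching rule can be illustrated with the standard cksum utility: identical definitions yield identical checksums, so the load is accepted, while any difference changes the checksum and the load is rejected. A sketch, with file contents standing in for table metadata:

```shell
# Two identical "definitions" produce the same CRC -> load accepted.
printf 'customer-table-def' > /tmp/dump_hdr.txt
printf 'customer-table-def' > /tmp/target_def.txt
crc_dump=$(cksum /tmp/dump_hdr.txt | awk '{print $1}')
crc_target=$(cksum /tmp/target_def.txt | awk '{print $1}')
if [ "$crc_dump" = "$crc_target" ]; then
    echo "CRC match: load accepted"
else
    echo "CRC mismatch: load rejected"
fi
```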
PROUTIL DUMP writes data from a table to a dump file or files. The name of the resulting
dump files depends on the owner of the table. By default, ABL tables are owned by PUB.
When tables owned by PUB are dumped to a file by an offline or single-threaded online
dump, the filename is the table name with .bd appended. For example, tablename.bd.
However, when tables owned by anyone other than PUB are dumped to a file, the resulting
filename contains the owner name and table name. For example,
ownername_tablename.bd
On systems that have a 2GB-file-size limitation, PROUTIL DUMP creates multiple files
when you dump a table larger than 2GB. For example, when you dump data from a table
with the name customer that is 6.4GB, PROUTIL DUMP creates four binary dump files:
customer.bd, customer.bd2, and customer.bd3, each of which is approximately 2GB,
and customer.bd4, which is approximately 0.4GB. The PROUTIL DUMP procedure
adds header blocks to the binary dump files. As a result, the total size of the binary dump
files is slightly larger than the table itself.
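The file-name convention above can be sketched as a small calculation: a 6.4GB table on a system with a 2GB limit splits into four files named as described (sizes are expressed in tenths of a GB to stay within integer shell arithmetic):

```shell
table=customer
size_tenths=64    # 6.4GB, in tenths of a GB
limit_tenths=20   # 2GB file-size limit, in tenths of a GB
files=$(( (size_tenths + limit_tenths - 1) / limit_tenths ))   # ceiling division
names="$table.bd"
i=2
while [ "$i" -le "$files" ]; do
    names="$names $table.bd$i"
    i=$(( i + 1 ))
done
echo "$names"   # customer.bd customer.bd2 customer.bd3 customer.bd4
```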
On systems without a 2GB file-size limitation, PROUTIL DUMP, running single-threaded,
creates only one binary dump file regardless of the size of the table.
For all threaded dumps, PROUTIL DUMP creates multiple files, one for each thread. The
first thread creates the file tablename.bd; the second thread creates the file
tablename.bd2; each additional thread creates a file with an incremental number,
tablename.bdn. Use the -dumplist option to generate a list of the files created by the
threaded dump.
If the file specified by the -dumplist option exists, PROUTIL DUMP will overwrite the
existing file.
PROUTIL DUMP does not record the code page of the data it writes to the dump file.
Dumped data can only be loaded to a database with the same code page as the source
database. Use the ASCII dump and load utilities to perform a code page conversion with
your dump and load. For more information on the ASCII utilities, see the Dumping user
table contents with a Data tool section on page 1516.
You cannot use DUMP to dump protected audit data. For more information on auditing
and utility security and restrictions, see the Auditing impact on database utilities section
on page 916.
Syntax

proutil db-name -C dumpspecified [owner-name.]table-name.field-name operator1 low-value
[ AND operator2 high-value ] directory [ -preferidx index-name ]

Parameters

db-name
Specifies the database where the dump will occur. You must completely define the path.
owner-name
Specifies the owner of the table containing the data you want to dump. You must specify
an owner name unless the table's name is unique within the database, or the table is owned
by PUB. By default, ABL tables are owned by PUB.
table-name
Specifies the name of the table containing the data you want to dump.
field-name
Specifies the name of the index field to be used to select the data you want to dump.
operator1
Specifies the first operator in the range: EQ (equal to), GT (greater than), LT (less than), GE
(greater than or equal to), or LE (less than or equal to).
low-value
Specifies the first value against which the field-name value will be compared.
AND
Specifies a bracketed range for the binary dump. The first value and operator, combined
with the second value and operator, define a subset. You cannot use AND to dump two
non-contiguous result sets.
operator2
Specifies the second operator in the range: LT (less than), or LE (less than or equal to).
high-value
Specifies the second value against which the field-name value will be compared.
directory
Specifies the name of the target directory where the data will be dumped. A directory must
be specified.
-preferidx index-name

By default, the dump uses the primary index to sort rows. Specifying -preferidx orders
the rows by the specified index.

index-name

Specifies the name of the index used to order the dumped rows.
Notes
If a field needs to be compared to a string containing characters other than alphabetic
characters (including spaces or numbers), single-quote the string. For example:
'John Smith', '11-14-2001', or '25 Main St.'
If specifying a monetary value, do not include the money unit. For example, instead of
$5.50, enter 5.5.
If specifying a negative number, escape the minus sign with a backslash (\). For example:
proutil testdb -C dumpspecified PUB.Item.Item-NUM LT \-90.
The value entered for field-name must be the name of an index field.
You cannot use DUMPSPECIFIED to dump protected audit data. For more information
on auditing and utility security and restrictions, see the Auditing impact on database
utilities section on page 916.
Syntax

proutil db-name -C enableauditing area Area-Name [ indexarea Index-Area-Name ] [ deactivateidx ]

Parameters

db-name

area Area-Name
Name of the area where the auditing data tables will be located. If Area-Name contains
spaces, you must enclose the name in quotation marks.
indexarea Index-Area-Name
Name of the area where the indexes for the auditing data tables will be located. If
Index-Area-Name contains spaces, you must enclose the name in quotation marks.
deactivateidx
If you attempt to enable auditing when the audit data tables are not empty, the following
error message is displayed:
Auditing can not be enabled because auditing data tables are not empty.
(13650)
If your database has auditing deactivated, successfully enabling the database results in the
following message:
For more information on auditing states, see the Auditing states section on page 96.
When auditing is first enabled for a database, the meta-schema to support auditing is added
to the database, and the event table is populated.
While not required, it is recommended that the auditing tables and indexes be placed in
their own area.
See the Auditing tables section on page 910 for brief descriptions of the meta-schema
tables created when auditing is enabled and the indexes that can be deactivated. For
detailed information, see OpenEdge Getting Started: Core Business Services.
Parameters
db-name
Databases enabled for JTA transactions must perform crash recovery in multi-user mode.
Enabling the database for JTA transactions disables after-imaging. You must re-enable
after-imaging after enabling the database for JTA transactions.
Parameters
db-name
Name of the area where the key events table is located. If area-name is not specified, the
key events table, _KeyEvt, is created in the Schema Area.
Notes
Parameters
db-name
Parameters
db-name
Specifies the name of database to enable for large key entry support.
Large key entries increase the amount of user data in an index from approximately 200 bytes to
approximately 1970 bytes.
Notes
Large key support can cause run time problems for older clients. A client from 10.1A or
earlier cannot update or delete any row or field that is referenced by a large index key.
PROUTIL ENABLELARGEKEYS issues a warning and requires confirmation before
executing, as shown in the following example:
If the database is online when enabling large keys, only newly connected clients will be
able to perform operations with large keys.
A database with large keys enabled cannot be reverted to Release 10.1A format with
PROUTIL REVERT.
Databases with 4K and 8K block sizes, created with Release 10.1C, have large key support
by default.
Parameters
db-name
A database with 64-bit sequence support enabled cannot be reverted to Release 10.1A
format with PROUTIL REVERT.
Existing sequences with the upper limit specified as the Unknown value (?) were
previously bounded by the maximum of a signed 32-bit integer. They are now bounded by
the maximum of a signed 64-bit integer.
Parameters
db-name
Name of the database to be enabled for 64-bit stored procedures and triggers.
Notes
The database may be online or offline when enabling stored procedures and triggers.
Databases created with Release 10.1C have 64-bit stored procedures and triggers by
default.
Once the 64-bit stored procedures and triggers tables have been added to the database, they
cannot be removed. However, the existence of the stored procedures and triggers tables
and views is harmless.
Parameters
db-name
When the HOLDER qualifier completes, it sets a return code that you can test in a UNIX
script or Windows batch file. This information is useful to obtain before performing a
database backup or shutting down a database. The return codes for the HOLDER qualifier
on UNIX are shown in the following table:
Return code
Description
14
16
Any other nonzero return code indicates that an error has occurred.
Return codes can be added or changed from one release to the next. Use scripts that depend
on a specific return code with caution.
This example shows how you might use the HOLDER qualifier on UNIX in a script that
performs a database backup:
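A minimal sketch of such a backup script (paths are assumptions, and the proutil call is simulated where the utility is not installed):

```shell
#!/bin/sh
# Back up the database only when no process holds it.
DB=${DB:-/usr/wrk/mydb}   # assumed database path

if command -v proutil >/dev/null 2>&1; then
    proutil "$DB" -C holder >/dev/null 2>&1
    rc=$?
else
    rc=0   # simulate "no holder" where proutil is unavailable
fi

if [ "$rc" -eq 0 ]; then
    msg="no holder: starting backup"
    # probkup "$DB" /backups/mydb.bck   # assumed backup command and path
else
    msg="database held (code $rc): backup skipped"
fi
echo "$msg"
```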
Syntax

proutil db-name -C idxactivate [owner-name.]table-name.index-name
[ useindex table-name.index-name ] [ recs n ] [ refresh t ]

Parameters

db-name

[owner-name.]table-name.index-name
Specifies the index being activated. You must specify the table and index names. Specify
the owner name if tables in your database have more than one owner.
useindex table-name.index-name
Specifies an index to use while accessing the records in your table to build the index
being activated. This index must be active. If you omit this parameter, the primary index
will be used.
recs n
Specifies the number of records to process in one transaction. If you omit this parameter,
IDXACTIVATE will process 100 records per transaction by default.
refresh t
Specifies the frequency in seconds to update the display of clients that are blocking the
index activation. The default refresh rate is 60 seconds. You can set it as high as 300
seconds. Connected clients with a schema timestamp earlier than the index's schema
timestamp will prevent activation of the index.
Notes
If IDXACTIVATE detects clients with an earlier schema timestamp than the index, you
may wait for those clients to disconnect or forcibly disconnect them with PROSHUT. The
index cannot be activated until those clients are disconnected from the database.
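For illustration, the following sketch only composes an IDXACTIVATE command line; the database, table, and index names are hypothetical:

```shell
#!/bin/sh
# Hedged sketch: compose (not run) an IDXACTIVATE command line.
# mydb, PUB.customer.comments-ix, and customer.custnum are placeholders.
DB=mydb
CMD="proutil $DB -C idxactivate PUB.customer.comments-ix useindex customer.custnum recs 500 refresh 30"
echo "$CMD"
```

Run the echoed line only after substituting names from your own schema.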
Parameters
db-name
[Sample output: per-index % Util and Factor values]
db-name
all
Specifies that you want to rebuild all your indexes. PROUTIL automatically rebuilds all
your indexes without asking about disk space requirements.
table
[owner-name.]table-name
Specifies that you want to rebuild the indexes defined for the named table. When the table
option is used on the command line, PROUTIL automatically rebuilds the indexes defined
for the table without asking about disk space requirements.
area area-name
Specifies that you want to rebuild all the indexes defined in the named area. When the area
option is used on the command line, PROUTIL automatically rebuilds the indexes defined
for the area without asking about disk space requirements.
schema schema-owner
Specifies that you want to rebuild all the indexes owned by the named schema-owner.
When the schema option is used on the command line, PROUTIL automatically rebuilds
the indexes owned by the schema-owner without asking about disk space requirements.
activeindexes
Specifies that you want all currently inactive indexes rebuilt. When an inactive index is
rebuilt, it becomes active.
-T dir-name
Specifies the name of the directory where the temporary files are stored. If you do not use
this parameter, PROUTIL places temporary files in the current working directory.
-SS sort-file-directory-specification
Identifies the location of a multi-volume sort file specification. If you use the Sort
Directory Specification (-SS) parameter, PROUTIL does not use the Temporary Directory
(-T) parameter.
-TB n
Specifies that the index rebuild will be performed using Speed Sort. n indicates the
allocated block size, in kilobytes.
-TM n
Specifies the merge number. n indicates the number of blocks or streams to be merged
during the sort process.
-SG n
Specifies that the index rebuild will use index grouping. n indicates the number of index
groups used by IDXBUILD and must be a value between 8 and 64. The default value is
48. Note that a temporary file is created for each index group.
A large -SG value requires more memory allocation and more file handles. To determine
the amount of memory (in kilobytes) needed for each index group, add 1 to the merge
number (the value of -TM) and multiply the sum by the speed sort block size (the value of
-TB). Memory consumption for each index group equals (-TM + 1) * -TB.
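For example, hypothetical settings of -TM 5 and -TB 31 with the default 48 index groups work out as follows:

```shell
#!/bin/sh
# Memory per index group is (-TM + 1) * -TB kilobytes; the parameter
# values below are hypothetical.
TM=5     # merge number (-TM)
TB=31    # speed sort block size in KB (-TB)
SG=48    # number of index groups (-SG); 48 is the default
PER_GROUP=$(( (TM + 1) * TB ))
TOTAL=$(( PER_GROUP * SG ))
echo "$PER_GROUP KB per group, $TOTAL KB across $SG groups"
```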
-pfactor n
Specifies that the index rebuild will use a packing factor. n indicates the maximum
percentage of space used in an index block and must be a value between 60 and 100. The
default value is 100.
Select one of the following:
(a/A) - Rebuild all the indexes
(s/S) - Rebuild only some of the indexes
(r/R) - Rebuild indexes in selected areas
(c/C) - Rebuild indexes by schema owners
(t/T) - Rebuild indexes in selected tables
(v/V) - Rebuild selected active or inactive indexes
All
Prompts you to verify whether you have enough disk space for index sorting.
Some
Prompts you for the indexes you want to rebuild by first entering the table
name, and then the index name. IDXBUILD then prompts you to verify
whether you have enough disk space for index sorting.
By Area
Prompts you for the area containing the indexes you want to rebuild, then
prompts you for the indexes in the area, then prompts you to verify whether
you have enough disk space for index sorting.
By Schema
Prompts you for the schema owner of the indexes you want to rebuild, then
prompts you for the indexes, then prompts you to verify whether you have
enough disk space for index sorting.
By Table
Prompts you for the table containing the indexes you want to rebuild, then
prompts you for the indexes, then prompts you to verify whether you have
enough disk space for index sorting.
By Activation
Prompts you to choose active or inactive indexes, then prompts you for the
indexes, then prompts you to verify whether you have enough disk space for
index sorting.
Quit
Repairs corrupted indexes in the database. (Index corruption is typically signaled by error
messages.)
Use the Temporary Directory (-T) startup parameter to identify or redirect temporary files
created by the PROUTIL utility to a specified directory when sorting and handling space
issues.
Use the Speed Sort (-TB), Merge Number (-TM), Sort Grouping (-SG) and Blocks in
Database Buffers (-B) startup parameters to improve index rebuild performance.
Use the following formulas to determine whether you have enough free space to sort the
indexes:
If you rebuild all the indexes in your database, sorting the indexes requires free space
that can be as much as 75 percent of the total database size.
If you rebuild an individual index, sorting that index requires free space that can be
as large as three times the size of one index entry times the number of records in the
file.
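These formulas can be turned into a quick pre-check; the sizes below are made-up figures, not measurements:

```shell
#!/bin/sh
# Hedged sketch: rough sort-space estimates per the formulas above.
# All figures are hypothetical.
DB_KB=4000000        # total database size in KB
ENTRY_BYTES=20       # size of one index entry
RECORDS=100000       # number of records in the file

ALL_KB=$(( DB_KB * 75 / 100 ))              # all indexes: up to 75% of database size
ONE_BYTES=$(( 3 * ENTRY_BYTES * RECORDS ))  # one index: 3 * entry size * record count
echo "all indexes: up to $ALL_KB KB; one index: up to $ONE_BYTES bytes"
```

Compare the results against free space in the sort-file location before answering the disk-space prompts.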
The Index Rebuild utility rebuilds an index or set of indexes in three phases:
1. The utility scans the database by area, clearing all index blocks that belong to the
indexes being rebuilt, and adds those blocks to the free-block list.
2. The utility scans the database by area and rebuilds all the index entries for every data
record. If you chose to sort the index, the utility writes the index entries to the sort
file. Otherwise, the utility writes the index entries to the appropriate index at this
point.
3. If you indicated that you have enough space to sort the indexes, the utility compacts
the index by sorting the index entries in the sort file into groups and entering those
entries in one index at a time.
See Chapter 13, Managing Performance, for more information about using the
IDXBUILD qualifier.
The IDXCHECK utility has been extended to validate the physical consistency of
index blocks while the database is online. For more information see Using the
IDXCHECK qualifier online.
Syntax
proutil db-name -C idxcheck
  [all | table [owner-name.]table-name | area area-name | schema schema-owner]
Parameters
db-name
all
Specifies that you want to check all the indexes in the database.
[owner-name.]table-name
Specifies that you want to check the indexes defined for the named table.
area area-name
Specifies that you want to check all the indexes defined in the named area.
schema schema-owner
Specifies that you want to check all the indexes owned by the named schema-owner.
If you do not specify all, table, area, or schema, the following menu appears:
PROUTIL Utility
Table 20-4 describes the options.
Table 20-4:
Option
Action
All
By Area
Prompts you for the area containing the indexes you want to check.
By Schema
Prompts you for the schema owner of the indexes you want to check.
By Table
Prompts you for the table containing the indexes you want to check.
Some
Validation
Quit
PROUTIL IDXCHECK lets you know whether you have to perform an index rebuild before
actually running PROUTIL IDXBUILD. IDXCHECK performs the following operations for
each index it checks:
Reads the contents of the index and the contents of the file, verifies that all the values in
the records are indexed, and verifies that each value in the index is in the associated record
Performs various checks on the data structures in the index to verify that there is no
corruption
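A sketch composing an IDXCHECK command for a single table; mydb and PUB.customer are placeholder names:

```shell
#!/bin/sh
# Hedged sketch: compose an IDXCHECK command for one table.
# mydb and PUB.customer are placeholder names.
DB=mydb
CMD="proutil $DB -C idxcheck table PUB.customer"
echo "$CMD"
```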
If PROUTIL IDXCHECK completes successfully, it ensures that all FIND, CAN-FIND, GET,
FOR EACH, and PRESELECT statements that use those indexes work correctly. If errors result, the
index must be rebuilt with PROUTIL IDXBUILD.
When PROUTIL IDXCHECK finds any corruption, it displays error messages. If error
messages appear, save the database and the output messages for analysis by Technical
Support, back up your database, and then run PROUTIL IDXBUILD.
IDXCHECK displays error and warning messages on the screen and logs them in the log
file. It also displays and logs a success or failure indication, along with the number of
errors and warnings issued.
IDXCHECK might also display warning messages. Although these messages indicate that
some condition is a problem, the index is still valid. Check the log file for details.
To execute PROUTIL IDXCHECK while the database is online, use the Validation
option. See Table 20-4.
Note:
See Chapter 14, Maintaining Database Structure, for a description of how to monitor
the progress of this utility using the _UserStatus virtual system table (VST).
Parameters
[owner-name.]table-name.index-name
db-name
owner-name
Specifies the owner of the table containing the index. You must specify an owner name
unless the table's name is unique within the database, or the table is owned by PUB. By
default, ABL tables are owned by PUB.
n
Specifies the degree of index compaction. You can specify an integer >=50 and <=100.
The default value is 80. If you do not specify n, 80 is used.
Notes
Performing index compaction reduces the number of blocks in the B-tree and possibly the
number of B-tree levels, which improves query performance.
Phase 1: If the index is a unique index, the delete chain is scanned and the index
blocks are cleaned up by removing deleted entries.
Phase 2: The nonleaf levels of the B-tree are compacted starting at the root
working toward the leaf level.
In addition to compacting an index, this utility clears dead entries left after entries have
been deleted from unique indexes.
Because index compacting is performed online, other users can use the index
simultaneously for read or write operations with no restrictions. Index compacting locks
only one to three index blocks at a time, for a short time. This allows full concurrency.
No other administrative operation on the index is allowed during the compacting process.
In rare cases where the required percentage of compaction is very high, the compacting
percentage might not be reached. Repeating the compacting process a second time might
obtain better results.
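A sketch composing an IDXCOMPACT command that targets a hypothetical index at 90 percent packing:

```shell
#!/bin/sh
# Hedged sketch: compose an IDXCOMPACT command; names are placeholders.
DB=mydb
DEGREE=90   # between 50 and 100; 80 is the default
CMD="proutil $DB -C idxcompact PUB.customer.custnum $DEGREE"
echo "$CMD"
```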
See Chapter 14, Maintaining Database Structure, for a description of how to monitor the
progress of this utility using the _UserStatus virtual system table (VST).
The IDXFIX utility has been extended to validate the physical consistency of index
blocks. It can also be run online. When run online, it performs physical consistency
validation of index blocks only, no other validations are performed. To run a complete
check, execute IDXFIX utility in offline mode. The syntax has not changed.
Parameters
[-userid username [-password passwd]]
db-name
-NL
Indicates that repetitive messages are not sent to the screen or the log file.
-userid username
Identifies a user privileged to access protected audit data when executing this utility.
-password passwd
Option
Action
Scans the index for corrupt index entries. You can specify whether to scan all
indexes or a specific set of indexes.
Prompts you for the table and indexes for which you want to run a
cross-reference check.
For the specified indexes the following processing takes place:
For each index entry, fetch the associated record by ROWID and validate
that the fields in the record match the fields used to build the index key.
Allows you to rebuild multiple indexes based on one known index. At the
prompts, enter the following:
At the Table name: prompt, enter the name of the table for the indexes.
At the first Index name: prompt, enter the name of the source index. This
is the good index to be used for locating the rows of the table.
The utility asks you to verify your entries, and then rebuilds the specified
indexes.
6
Prompts you to specify the RECID of the record you want to delete.
Deletes one record and all its indexes from the database. Use this option when
a record has damaged indexes.
Note: You can specify either the area name or area number as the area. If an
area is invalid, IDXFIX displays a message declaring the area invalid, then
halts the action.
7
Notes
PROUTIL IDXFIX performs the following operations for each index it checks:
Reads the contents of the index and the contents of the file, verifies that all the values
in the records are indexed, and verifies that each value in the index is in the
associated record
Performs various checks on the data structures in the index to verify that there is no
corruption
IDXFIX displays error messages on the screen and logs them in the log file. It also displays
and logs a success or failure indication, along with the number of errors and warnings
issued. IDXFIX might also display warning messages. Although these messages indicate
that some condition is a problem, the index is still valid. Check the log file for details.
Index Fix does not provide a comparison of the index scan and the database scan when you
run them online because the database can change during operation.
Index Fix is designed to wait if a record is in the process of being updated, thus ensuring
that it does not incorrectly change a user action in process. However, because changes can
occur during the scans, the reported data might not exactly match the database at
completion. Index Fix displays this warning when you run it online.
You can run IDXFIX online or offline. However, to run Index Fix online, it must be run
as a self-service session.
Index Fix does not delete or disable indexes, but when you run a full database scan with
fix offline and it is successful, it enables a disabled index if no errors are found.
Enabling indexes online is not advisable because it is not possible to detect changes that
are being made to indexes while the process is running.
While IDXFIX can ensure that an index is complete and correct, it cannot make any
improvement to the index's utilization level.
See Chapter 14, Maintaining Database Structure, for a description of how to monitor the
progress of this utility using the _UserStatus Virtual System Table (VST).
IDXFIX requires additional security when auditing is enabled for your database. Only a
user with Audit Archiver privilege can execute this utility. Depending on how user
identification is established for your database, you may need to specify the -userid and
-password parameters to execute this utility. For more information on auditing and utility
security, see the Auditing impact on database utilities section on page 9-16.
If an index and table reside in different areas, and if an area number (representing the
location of the area table) is provided for RECID validation, index blocks residing in
another area will not be verified.
Select one of the following:
(a/A) - Fix all the indexes
(s/S) - Fix only some of the indexes
(r/R) - Fix indexes in selected areas
(c/C) - Fix indexes by schema owners
(t/T) - Fix indexes in selected tables
(v/V) - Fix selected active or inactive indexes
Option
Action
All
Some
Prompts you for the indexes you want to fix by first entering the table
name, and then the index name; IDXFIX then prompts you to verify the
action
By Area
Prompts you for the area containing the indexes you want to fix, and then
prompts you for the indexes in the area
By Schema
Prompts you for the schema owner of the indexes you want to fix, then
prompts you for the indexes, then prompts you to verify the action
By Table
Prompts you for the table containing the indexes you want to fix, then
prompts you for the indexes, then prompts you to verify whether you have
enough disk space for index sorting
By Activation
Prompts you to choose active or inactive indexes, then prompts you for the
indexes, then prompts you to verify the action
Quit
Notes
Repairs corrupted indexes in the database. (Index corruption is typically signaled by error
messages.)
Parameters
[owner-name.]table-name.index-name area-name
db-name
owner-name
Specifies the owner of the table containing the index. You must specify an owner name
unless the table's name is unique within the database, or the table is owned by PUB. By
default, ABL tables are owned by PUB.
area-name
Specifies the area name of the target application data area into which the index is to be
moved. Area names that contain spaces must be quoted, for example, "Area Name".
The PROUTIL IDXMOVE utility operates in two phases:
Notes
Phase 1: The new index is being constructed in the new area. The old index remains in
the old area, and all users can continue to use the index for read operations.
Phase 2: The old index is being killed, and all the blocks of the old index are returned
to the free-block chain. For a large index, this phase might take a significant
amount of time. During this phase all operations on the index are blocked until the new
index is available; users accessing the index might experience a freeze in their
applications.
While you can move indexes online, no writes to the table or its indexes are allowed during
the move. The IDXMOVE utility acquires a SHARE lock on the table, which blocks all
attempts to modify records in the table. Run the utility during a period when the system is
relatively idle or when users are doing work that does not access the table.
No other administrative operation on the index is allowed while the move is in progress;
such operations are blocked. For example, you cannot run the index move utility and at the
same time run the index fix or index compacting utilities on the same index.
Because the index move utility needs to acquire a share lock on the table, there is a
possibility that it will have to wait before it can acquire the lock and start operating.
You might be able to improve performance by moving indexes that are heavily used to an
application data area on a faster disk.
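A sketch composing an IDXMOVE command; the index and target area names are hypothetical:

```shell
#!/bin/sh
# Hedged sketch: compose an IDXMOVE command; the index and target
# area names are placeholders. Note the quoted area name.
DB=mydb
CMD="proutil $DB -C idxmove PUB.customer.custnum \"Index Area\""
echo "$CMD"
```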
The _UserStatus virtual system table displays the utility's progress. See Chapter 14,
Maintaining Database Structure, for more information.
Parameters
db-name
-B
Use Blocks in Database Buffers (-B) to increase the number of blocks in the database
buffers.
-L
Use Lock Table Entries (-L) to increase the limit of the record locking table.
-bibufs
You can specify any number of these parameters in a single command. For example:
proutil db-name -C increaseto -B 10240 -L 20480 -bibufs 360 -aibufs 500
Only one utility with the -C increaseto qualifier can connect to an online database at a
time.
Parameters specified using the increaseto qualifier are restricted by currently supported
limits. For example, the maximum increase for the number of blocks in the database buffer
is limited to 1,000,000,000 for 64-bit platforms.
Note: For more information on limitations of -B, -L, -bibufs, and -aibufs, see
Chapter 18, Database Startup Parameters.
You must have security privileges on the directly connected database to use the
increaseto qualifier.
Parameters
db-name
Specifies the active database where you are running database I/O statistics.
The statistics provided by PROUTIL IOSTAT include buffered, unbuffered, and logical I/O
database operations. The statistics are cumulative from database startup.
Example
FILE
Displays the name of the file for which the statistics are reported. The file types can include:
database files (.db extension), before-image files (.bi extension), after-image files
(.ai extension), and application data extents (.dn extension).
BUFFERED
Displays the number of buffered reads and writes to the database file for the associated
row.
UNBUFFERED
Displays the number of unbuffered reads and writes to the database file for the associated
row.
LOGICAL
Displays the number of client requests for database read and write operations for the
associated row.
Notes
IOSTATS provides a useful alternative to PROMON when you are only interested in
statistics on your database extents.
You can use IOSTATS to determine if your files are opened in buffered or unbuffered
mode.
[-dumplist dumpfile] [-T dir-name]
Parameters
db-name
Specifies the database where you want to load the data. You must completely define the
path.
filename
Specifies a single binary dump file that you want to load. You must completely define the
path.
-dumplist dumpfile
Specifies a file containing a list of fully qualified binary dump files to load. Use the file
created by PROUTIL DUMP, or create one. For more information on dump files, see the
Loading table contents in binary format with PROUTIL section on page 15-20.
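A sketch of preparing a dump list by hand; the dump-file paths are hypothetical, and the actual load command is left commented out:

```shell
#!/bin/sh
# Hedged sketch: write a dump list by hand; the .bd paths are
# hypothetical. The load itself is left commented out.
LIST=${TMPDIR:-/tmp}/mydb.dmp_lst
cat > "$LIST" <<EOF
/data/dumps/customer.bd
/data/dumps/order.bd
EOF
# proutil mydb -C load -dumplist "$LIST"
echo "listed $(grep -c '\.bd$' "$LIST") dump files"
```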
build indexes
Indicates that PROUTIL LOAD will simultaneously build the indexes and perform the
load.
-TB blocksize
Specifies that the index rebuild will be performed using Speed Sort. blocksize indicates
the allocated block size, in kilobytes.
-TM n
Specifies the merge number. n indicates the number of blocks or streams to be merged
during the sort process.
-T dir-name
Specifies the name of the directory in which the temporary files are stored. If you do not
use this parameter, PROUTIL places temporary files in the current working directory.
-SS sort-file-directory-specification
Identifies the location of a multi-volume sort file specification. If you use the Sort
Directory Specification (-SS) parameter, PROUTIL does not use the Temporary Directory
(-T) parameter.
Notes
See Chapter 15, Dumping and Loading, for more information about the LOAD
qualifier.
The PROUTIL DUMP and LOAD utilities use cyclic redundancy check (CRC) values to
establish the criteria for loading.
The OpenEdge database provides a flexible storage architecture and the ability to relocate
objects, such as tables and indexes, while the database remains online. As a result, when
you perform a binary load operation, the table numbers in a binary dump file might not
match the table numbers in the target database. Therefore, when you perform a binary load
operation, the criteria for loading tables is based solely on cyclic redundancy check (CRC)
values, and not table numbers.
For example, when you dump a table, the PROUTIL utility calculates a CRC value for the
table and stores it in the header of the binary dump file. When you load the table,
PROUTIL matches the CRC value stored in the header with the CRC value of the target
table. The values must match or the load is rejected.
You can load binary dump files created with a previous version of the PROUTIL DUMP
utility, because the current version of PROUTIL LOAD uses the CRC value established
when the file was originally dumped. Consequently, the OpenEdge database maintains
backwards compatibility.
However, you cannot use PROUTIL LOAD from Version 8.3 or earlier to load a binary
dump file created using the Version 9.0 or later of the PROUTIL DUMP utility. The earlier
versions of PROUTIL DUMP and LOAD did not use CRC values to establish the criteria
for loading, but instead used other mechanisms, such as:
Looking up table RECIDs in a target database using the table number stored in the
header of the binary dump file
Matching table numbers in the header of the binary dump file with table numbers in
a target database
Comparing the number of fields in the binary dump file with the number of fields in
the target database
When using PROUTIL LOAD with the Build Indexes option, PROUTIL marks the
existing indexes as inactive. Once PROUTIL successfully creates the indexes, it marks the
indexes active. This means that if the binary load is aborted for any reason, PROUTIL
leaves the table with no active indexes.
You cannot use LOAD to load protected audit data. For more information on auditing and
utility security and restrictions, see the Auditing impact on database utilities section on
page 9-16.
Parameters
db-name
After conversion, it is possible to move data into new areas by using PROUTIL dump and
load or bulkload qualifiers, the Database Administration tool, the Database Dictionary, or
ABL code. However, areas will continue to hold disk space after the removal of data,
because the schema remains. Once the area's schema is moved by using PROUTIL
MVSCH, the area can be truncated.
You must truncate the database's BI file before using PROUTIL MVSCH.
Caution: PROUTIL with the MVSCH qualifier is a nonrecoverable utility. If the execution
fails, you cannot connect to the database.
Parameters
...
db-name
When you use the DBAUTHKEY qualifier to set, change, and remove authorization keys,
you can use the RCODEKEY qualifier to update your r-code without compiling it.
The following table lists the values you must enter for each task:

Task                          old-key value              new-key value          Files
Set the authorization key     +                          Authorization key      R-code files
Change the authorization key  Current authorization key  New authorization key  R-code files
Remove the authorization key  Current authorization key  +                      R-code files
Once you set the authorization key, do not forget it. You cannot change or remove the
authorization key without knowing its current value.
Parameters
db-name
The name of the database to revert. You cannot revert a database created with Release
10.1C. You can only revert a database migrated from a prior release.
Caution: PROUTIL REVERT is a non-recoverable utility. If PROUTIL REVERT terminates
abnormally, you must restore your database from backup.
The large database features of Release 10.1C require at least one manual step (running this
utility) to revert a database to the 10.1A format. PROUTIL REVERT will analyze the database
and determine if it can be reverted with this utility. If PROUTIL REVERT cannot revert the
database, a series of manual steps, including dumping and loading the data, is required to revert
the database to a 10.1A format.
PROUTIL REVERT cannot revert a database if any of the following conditions exist:
The database has enabled support for large key entries for indexes.
The database has a Type II area with a high water mark utilizing 64-bit DB Keys.
The database has a LOB with segments utilizing 64-bit block values.
PROUTIL REVERT executes as follows:
1. Determines that the user has sufficient privilege to execute the command. The privilege
check is limited to file system access to the database.
2. Analyzes the features of the database to determine if the database can be reverted by the
utility. If not, the utility issues messages indicating why the database cannot be reverted,
and exits.
The following sample output is from an attempt to revert a database that does not meet the
reversion requirements:

Revert Utility
---------------------------------
Feature                Enabled    Active
--------------------   -------    ------
64 Bit DBKEYS          Yes        Yes
Large Keys             Yes        Yes
64 Bit Sequences       Yes        Yes

Revert: The database ...
Revert: The database ...
Revert: The database ...
Revert: The database ...
3. Prompts the user to confirm that the database has been backed up.
4. Performs the physical fixes necessary to revert the database. Fixes include the following:
5.
6.
Revert Utility
---------------------------------
Feature                Enabled    Active
--------------------   -------    ------
Database Auditing      Yes        Yes
64 Bit DBKEYS          Yes        Yes
When PROUTIL REVERT completes successfully, the database is in 10.1A format. You
should perform a backup with 10.1A; previous backups in 10.1C format are incompatible with
the reverted database.
Notes
PROUTIL REVERT expects to find the 10.1A database utilities directory, 101dbutils, in
your installation bin directory. If the 10.1A utility directory is not found, PROUTIL
REVERT cannot rebuild the 10.1A VSTs, and issues a message. You must then rebuild
the VSTs with the 10.1A version of PROUTIL UPDATEVST.
Parameters
db-name
Specifies the database where you are setting the create limit.
area-number
Specifies the area where you are setting the create limit.
create-limit
Specifies the new create limit. The create limit is the amount of free space that must remain
in a block when you are adding a record to the block.
Notes
SETAREACREATELIMIT will set the limit for all tables and BLOBs in the specified
area.
The create limit value must be greater than 32 and less than the block size minus 128 bytes.
For databases with a 1K block size, the default create limit is 75. For all other database
block sizes, the default create limit is 150.
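The range rule above can be checked directly; an 8K block size is used as an example:

```shell
#!/bin/sh
# Valid create-limit range: greater than 32 and less than
# (block size - 128). An 8K block size is assumed as the example.
BLOCK_SIZE=8192
MAX_LIMIT=$(( BLOCK_SIZE - 128 ))
echo "create limit must be > 32 and < $MAX_LIMIT"
```

The same arithmetic applies to the toss-limit sections that follow, with their own lower bound.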
For more information on setting create limits, see the Database fragmentation section on
page 13-32.
Parameters
db-name
Specifies the database where you are setting the toss limit.
area-number
Specifies the area where you are setting the toss limit.
toss-limit
Specifies the new toss limit. The toss limit is the minimum amount of free space that must
exist in a block for the block to remain on the free chain as a candidate for holding
additional records.
Notes
SETAREATOSSLIMIT will set the limit for all tables and BLOBs in the specified area.
The toss limit value must be greater than zero (0) and less than the block size minus 128
bytes.
For databases with a 1K block size, the default toss limit is 150. For all other database
block sizes, the default toss limit is 300.
For more information on setting toss limits, see the Database fragmentation section on
page 13-32.
Parameters
db-name
Specifies the database where you are setting the create limit.
blob-id
Specifies the BLOB object for which you are setting the create limit.
create-limit
For a Type II area, SETBLOBCREATELIMIT sets the BLOB create limit for the
specified BLOB. For a Type I area, SETBLOBCREATELIMIT issues a warning that all
tables and BLOBs in an area must have the same create limit, and prompts you to confirm
setting the create limit for all tables and BLOBs in the area containing the specified BLOB.
The create limit value must be greater than 32 and less than the block size minus 128 bytes.
For databases with a 1K block size, the default create limit is 75. For all other database
block sizes, the default create limit is 150.
For more information on setting create limits, see the Database fragmentation section on
page 13-32.
Parameters
db-name
Specifies the database where you are setting the toss limit.
blob-id
Specifies the BLOB object for which you are setting the toss limit.
toss-limit
For a Type II area, SETBLOBTOSSLIMIT sets the BLOB toss limit for the specified
BLOB. For a Type I area, SETBLOBTOSSLIMIT issues a warning that all tables and
BLOBs in an area must have the same toss limit, and prompts you to confirm setting the
toss limit for all tables and BLOBs in the area containing the specified BLOB.
The toss limit value must be greater than zero (0) and less than the block size minus 128
bytes.
For databases with a 1K block size, the default toss limit is 150. For all other database
block sizes, the default toss limit is 300.
For more information on setting toss limits, see the Database fragmentation section on
page 13-32.
Parameters
db-name
Specifies the database where you are setting the create limit.
table-name
Specifies the table where you are setting the create limit.
create-limit
Specifies the new create limit. The create limit is the amount of free space that must remain
in a block when you are adding a record to the block.
Notes
For a Type II area, SETTABLECREATELIMIT sets the table create limit for the specified
table. For a Type I area, SETTABLECREATELIMIT issues a warning that all tables and
BLOBs in an area must have the same create limit, and prompts you to confirm setting the
create limit for all tables and BLOBs in the area containing the specified table.
The create limit value must be greater than 32 and less than the block size minus 128 bytes.
For databases with a 1K block size, the default create limit is 75. For all other database
block sizes, the default create limit is 150.
For more information on setting create limits, see the Database fragmentation section on
page 13-32.
Parameters
db-name
Specifies the database where you are setting the toss limit.
table-name
Specifies the table where you are setting the toss limit.
toss-limit
For a Type II area, SETTABLETOSSLIMIT sets the table toss limit for the specified table.
For a Type I area, SETTABLETOSSLIMIT issues a warning that all tables and
BLOBs in an area must have the same toss limit, and prompts you to confirm setting the
toss limit for all tables and BLOBs in the area containing the specified table.
The toss limit value must be greater than zero (0) and less than the block size minus 128
bytes.
For databases with a 1K block size, the default toss limit is 150. For all other database
block sizes, the default toss limit is 300.
For more information on setting toss limits, see the "Database fragmentation" section on page 13-32.
Parameters
db-name
Table
Fragments
Total number of record fragments found in the database for the table.
Fragmentation Factor
Degree of record fragmentation for the table. This value is determined by the number of
fragments divided by the ideal number of fragments (for example, if the data is freshly
loaded). A value of 1.0 is ideal. A value of 2.0 indicates that there are twice as many
fragments as there would be if the data were freshly loaded.
Use this value to determine whether to dump and reload your data to reduce
fragmentation. If the value is 2.0 or greater, dumping and reloading will improve
performance. If the value is less than 1.5, dumping and reloading is not warranted.
Scatter Factor
Degree of distance between the records in the table.
See Chapter 13, Managing Performance, for more information about database
fragmentation.
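The ratio described above can be worked through in a short sketch. POSIX shell has no floating point, so the invented helper below scales the factor by 10:

```shell
# Invented helper: fragmentation factor scaled by 10
# (number of fragments divided by the ideal, freshly loaded count).
frag_factor_x10() {
  fragments=$1
  ideal=$2
  echo $(( fragments * 10 / ideal ))
}

frag_factor_x10 2000 1000   # 20, i.e. a factor of 2.0: dump and reload helps
frag_factor_x10 1200 1000   # 12, i.e. 1.2: below 1.5, not warranted
```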
Syntax
proutil db-name -C tablemove [owner-name.]table-name table-area [index-area]
Parameters
db-name
Specifies the database containing the table you want to move.
owner-name
Specifies the owner of the table containing the data you want to move. You must specify
an owner name unless the table's name is unique within the database, or the table is owned
by PUB. By default, ABL tables are owned by PUB.
table-name
Specifies the name of the table you want to move.
table-area
Specifies the area name of the target application data area into which the table is to be
moved. Area names with spaces in the name must be quoted, for example, "Area Name".
index-area
Optionally, specifies the name of the target index area. If the target index area is supplied,
the indexes are moved to that area; otherwise, they are left in their existing location.
You can move indexes to an area other than the area to which the table is being moved.
Area names with spaces in the name must be quoted, for example, "Area Name".
Notes
If you omit the index-area parameter, the indexes associated with the table are not moved.
Moving the records of a table from one area to another invalidates all the ROWIDs and
indexes of the table. Therefore, the indexes are rebuilt automatically by the utility whether
you move them or not. You can move the indexes to an application data area other than
the one to which you are moving the table. If you want to move only the indexes of a table
to a separate application data area, use the PROUTIL IDXMOVE utility.
Moving a table's indexes with the TABLEMOVE qualifier is more efficient than moving
a table separately and then moving the indexes with the IDXMOVE utility. Moving a table
separately from its indexes wastes more disk space and causes the indexes to be rebuilt
twice, which also takes longer.
Phase 1: The records are moved to the new area and a new primary key is built.
Phase 4: All the old indexes are killed and the _StorageObject records of the
indexes and the table are updated.
Although PROUTIL TABLEMOVE operates in phases, it moves a table and its indexes
in a single transaction. To allow a full recovery to occur when a transaction is interrupted,
every move and delete of each individual record is logged. As a result, moving a table
requires the BI Recovery Area to be several times larger than the combined size of the
table and its indexes. Therefore, before moving your table, determine if your available disk
capacity is sufficient to support a variable BI extent that might grow to more than three
times the size of the table and its indexes.
While you can move tables online, accessing the table or its indexes during the move is
not recommended. The utility acquires an EXCLUSIVE lock on the table while it is moving.
An application that reads the table with an explicit NO-LOCK might be able to read
records, but in some cases might get the wrong results, since the table move operation
makes many changes to the indexes. Run the utility during a period when the system is
relatively idle, or when users are doing work that does not access the table.
No other administrative operation on any index of the moved table is allowed during the
table move.
There is a possibility that the utility will have to wait for all the necessary locks to be
available before it can start, which may take some time.
The _UserStatus virtual system table displays the utility's progress. See Chapter 14,
Maintaining Database Structure, for more information.
Syntax
proutil db-name -C truncate area [area-name] [-userid username [-password passwd]]
Parameters
db-name
Specifies the database that contains the application data storage areas that you want to
truncate.
area-name
Specifies the name of the storage area you want to truncate. When you specify the area
name, PROUTIL truncates the area even if it contains storage objects. If no area name is
specified, PROUTIL truncates all areas not containing objects.
-userid username
Identifies a user privileged to access protected audit data when executing this utility.
-password passwd
Identifies the password of the privileged user.
Use of this qualifier is an important step in removing application data storage areas and
extents from a database.
Deleting the contents of storage areas with this feature also allows for rapid dumping and
loading. Use PROUTIL TRUNCATE AREA after dumping data, but before initiating the
load.
TRUNCATE AREA requires additional security when auditing is enabled for your
database. Only a user with Audit Archiver privilege can execute this utility on an area
containing audit data. Depending on how user identification is established for your
database, you may need to specify the -userid and -password parameters to execute this
utility. For more information on auditing and utility security, see the "Auditing impact on
database utilities" section on page 9-16.
To use this command, the database must be offline and after-imaging must be disabled.
If the storage area does not contain any storage objects, the command simply resets
the high-water mark. If the storage area does contain tables or indexes, their names are
listed and you must confirm truncation of the storage area.
Indexes in other storage areas that are on tables in the storage area being truncated are
marked as inactive.
Empty index root blocks for indexes in the area being truncated are recreated.
PROUTIL TRUNCATE AREA recreates any template records in the new area.
Uses the information in the before-image (BI) files to bring the database and after-image
(AI) files up to date, waits to verify that the information has been successfully written to
the disk, then truncates the before-image file to its original length.
Sets the BI cluster size using the Before-image Cluster Size (-bi) parameter.
Sets the BI block size using the Before-image Block Size (-biblocksize) parameter.
Syntax
proutil db-name -C truncate bi {[-G n] | -bi size | -biblocksize size}
Parameters
db-name
Specifies the database whose BI file you want to truncate.
-G n
Specifies the number of seconds the TRUNCATE BI qualifier waits after bringing the
database and AI files up to date and before truncating the BI file. The default wait period
is 60 seconds. You might specify a shorter period for practice or test purposes only.
Caution: Do not decrease the value for any significant database, because a system crash
could damage the database if the BI file is truncated before the writes to the
database and AI files are flushed from the operating system buffer cache.
-bi size
Specifies the size of the cluster in kilobytes. The number must be a multiple of 16 ranging
from 16 to 262,128 (16K to 256MB). The default cluster size is 524K. If you use a value
that is not a multiple of 16, PROUTIL rounds the value up to the next multiple of 16.
-biblocksize size
Specifies the size of the BI blocks in each buffer in kilobytes. The valid values are 1, 2, 4,
8, and 16. The default -biblocksize is 8K. A value of zero (0) tells PROUTIL to use the
default block size. The block size cannot be smaller than the database block size.
Notes
The Before-image Block Size (-biblocksize) parameter changes the BI block size so that
the database engine reads and writes the blocks as one block.
Use the PROSTRCT STATISTICS qualifier to display the block size for a database.
If you change the BI block size or cluster size before backing up a database, when you
restore the database, the blocks and clusters will be the size you specified before the
backup.
PROUTIL reads all the BI blocks according to the size of the first block it reads. For
example, if the first BI block is 8K, PROUTIL reads and writes each block on an 8K
boundary.
For performance reasons, you might want to run PROUTIL BIGROW to increase the
number of BI clusters available to your database before starting your server.
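The rounding rule for the -bi parameter described above (values that are not a multiple of 16 are rounded up to the next multiple) can be sketched as follows; the helper name is invented for illustration:

```shell
# Invented helper: round a requested -bi cluster size (in KB) up to the
# next multiple of 16, as PROUTIL does for values in the 16-262,128 range.
round_bi_size() {
  echo $(( ($1 + 15) / 16 * 16 ))
}

round_bi_size 100    # 112
round_bi_size 512    # 512 (already a multiple of 16)
```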
Parameters
db-name
PROUTIL UPDATESCHEMA updates an existing database from a prior release with the
newest meta-schema for the current release. All new tables, fields, and indexes are added.
Parameters
db-name
The PROUTIL utility with UPDATEVST qualifier deletes all existing VST schema
information, then recreates all the VSTs from the most current information.
See Chapter 25, Virtual System Tables, for a list of the virtual system tables that
OpenEdge provides.
Parameters
src-file
Specifies the word-break table source file to compile.
rule-num
Specifies an integer between 1 and 255 (inclusive) that will uniquely identify the compiled
word-break table. PROUTIL WORD-BREAK COMPILER names the compiled version
of the word-break table proword.rule-num. For example, if rule-num is 34, the name of
the compiled word-break table file is proword.34.
Notes
See OpenEdge Development: ABL Handbook for more information about word indexes
and word-break table syntax.
To apply the word-break rules to a database, use the WORD-RULES qualifier with the
PROUTIL utility.
21
PROSTRCT Utility
This chapter describes the OpenEdge database administration utility PROSTRCT.
PROSTRCT utility
Creates and maintains an OpenEdge database. For example, you can use PROSTRCT and its
qualifiers to perform the following tasks:
Display storage usage statistics including information about storage areas in a database
Syntax
prostrct qualifier db-name [structure-description-file]
Parameters
qualifier
Specifies the qualifier that you want to use. You can supply the following qualifiers:
ADD
ADDONLINE
BUILDDB
CLUSTER CLEAR
CREATE
LIST
REMOVE
REORDER AI
REPAIR
STATISTICS
UNLOCK
Details of the qualifiers are found in the PROSTRCT subsections listed in the following
pages.
db-name
Notes
You must define your database structure in a .st file before you create a database.
PROSTRCT CREATE creates a database control (DB) area from the information in the
.st file.
See Chapter 1, Creating and Deleting Databases, for a complete description of structure
description files and storage areas.
Syntax
prostrct add db-name [structure-description-file] [-validate]
Parameters
db-name
structure-description-file
-validate
Parses the contents of the .st file for syntax errors. When -validate is specified, the
database is not created. Each line of the structure file is read and evaluated. If errors are
detected, the type of error is reported along with the line number. For a discussion of the
syntax and rule checking -validate provides, see the "Validating structure description
files" section on page 14-22.
Use PROSTRCT ADD to add new storage areas and extents to new or existing storage areas.
Notes
You can use PROSTRCT ADD to add areas and extents only when the database is offline.
Use PROSTRCT ADDONLINE to add areas and extents to an online database. For details,
see the "PROSTRCT ADDONLINE qualifier" section on page 21-5.
The new structure description file cannot identify existing extent files. It can only contain
the definitions for new extent files. See Chapter 1, Creating and Deleting Databases, for
a complete description of structure description files and storage areas.
Run PROSTRCT LIST after updating the database with PROSTRCT ADD. PROSTRCT
LIST verifies that the .st file contains the updated information.
Syntax
prostrct addonline db-name [structure-description-file] [-validate]
Parameters
db-name
structure-description-file
-validate
Parses the contents of the .st file for syntax errors. When -validate is specified, the
database is not created. Each line of the structure file is read and evaluated. If errors are
detected, the type of error is reported along with the line number. For a discussion of the
syntax and rule checking -validate provides, see the "Validating structure description
files" section on page 14-22.
Use PROSTRCT ADDONLINE to add new storage areas and extents to new or existing storage
areas to an online database.
Notes
You cannot have more than one instance of PROSTRCT ADDONLINE executing
at a time.
If currently connected users will not have sufficient privileges to access the new
extents, you may proceed with the online add, but an under-privileged user will be
disconnected from the database the first time that user tries to open one of the new extents.
This open may occur at an unpredictable time, such as when the user needs to find
space in the buffer pool. PROSTRCT ADDONLINE verifies the status of current
users before adding new extents, and prompts you to confirm that you wish to
proceed.
If your database has after-imaging enabled, you must update your target database to match
the source database. Failure to update the target database will cause RFUTIL ROLL
FORWARD to fail when applying after-image extents. PROSTRCT ADDONLINE issues
a warning, and prompts you to continue as follows:
Adding a new area online with ai enabled will cause the roll forward
utility to fail unless the corresponding areas are also added to the
target database. (13705)
Do you want to continue (y/n)? (13706)
y
Parameters
structure-description-file
db-name
Parameters
db-name
Use of the PROSTRCT command to clear the cluster setting is for emergency use only.
Under normal circumstances, the PROCLUSTER command should be used to disable a
cluster-enabled database. See the "Disabling a cluster-enabled database" section on page 11-12.
Clearing the cluster setting does not clean up cluster-specific objects associated with the
database. You must manually remove these objects.
Syntax
prostrct create db-name [structure-description-file] [-blocksize blocksize] [-validate]
Parameters
db-name
Specifies the database you want to create.
structure-description-file
Specifies the .st file you want PROSTRCT to access for file information. The default .st
file is db-name.st. If you have a structure description file that has the same name as the
database you are creating, you do not have to specify structure-description-file.
PROSTRCT automatically creates the database from the structure description file that has
the same name as your database with the extension .st.
-blocksize blocksize
Specifies the database block size in bytes (for example -blocksize 1024). The maximum
number of indexes allowed in a database is based on the database block size. See
Chapter 2, OpenEdge RDBMS Limits, for more information on database limits.
-validate
Parses the contents of the .st file for syntax errors. When -validate is specified, the
database is not created. Each line of the structure file is read and evaluated. If errors are
detected, the type of error is reported along with the line number. For a discussion of the
allowed syntax in a description file, see the "Creating a structure description file" section
on page 1-3.
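A minimal sketch of the kind of per-line check -validate performs follows. The token set shown (b, d, a, t for area lines, plus * for comments) is an assumption for illustration, not the full .st grammar:

```shell
# Sketch only: classify the leading token of a .st line. The real -validate
# checks full line syntax; this covers just the area-type letter.
valid_st_token() {
  case $1 in
    b|d|a|t) return 0 ;;   # BI, schema/data, AI, and TL area lines
    \**) return 0 ;;       # comment lines
    *) return 1 ;;
  esac
}

valid_st_token b && echo "ok"
valid_st_token x || echo "error: unknown area type"
```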
The PROSTRCT CREATE utility allows you to specify the minimum amount of information
necessary to create a database. You must specify the area type and extent location. If the extent
is fixed length, you must also specify the size. You need not provide specific filename or file
extensions. The utility will generate filename and file extensions for all database files according
to the following naming convention:
The control area (.db) and the log file (.lg) are placed in the directory specified by the
command line db-name parameter.
If a relative pathname is provided, including using common . (dot) notation, the relative
pathname will be expanded to an absolute pathname. Relative paths begin in your current
working directory.
For BI extents, the filename is the database name with a .bn extension, where n represents
the number of extents created.
For AI extents, the filename is the database name with a .an extension, where n represents
the logical order in which the AI areas will be accessed during normal processing.
For TL extents, the filename is the database name, with a .tn extension.
For Schema area extents, the filename is the database name with a .dn extension, where n
represents the order extents were created and will be used.
For application data extents, the filename is the database name and an area number. The
area number is a unique identifier that differentiates between different areas. User extent
filenames also have a .dn extension, where n represents the order extents were created and
will be used.
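The naming convention above can be sketched as follows; the helper is illustrative only, not an OpenEdge tool:

```shell
# Illustrative only: the extension PROSTRCT CREATE generates for the nth
# extent of each area type (database name plus .bn, .an, .tn, or .dn).
extent_name() {
  dbname=$1; areatype=$2; n=$3
  case $areatype in
    bi)          echo "$dbname.b$n" ;;
    ai)          echo "$dbname.a$n" ;;
    tl)          echo "$dbname.t$n" ;;
    schema|data) echo "$dbname.d$n" ;;
  esac
}

extent_name mydb bi 1     # mydb.b1
extent_name mydb ai 2     # mydb.a2
extent_name mydb schema 1 # mydb.d1
```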
The newly created database does not contain any metaschema information. Rather, it consists
of the database control (DB) area and whatever primary recovery (BI), after-image (AI),
two-phase commit transaction log (TL), and application data (.dn) areas you defined in the .st
file.
After you create a void database, you must add metaschema information. The OpenEdge
RDBMS provides empty databases, each the size of a supported database block size. The empty
database and the database you want to copy it to must have the same block size.
Caution: Never use operating system file commands to copy an OpenEdge database. Instead,
use the PROCOPY or PRODB utilities.
Notes
You cannot create a database if one already exists with the same name.
See Chapter 14, Maintaining Database Structure, for more information about the
CREATE qualifier with the PROSTRCT utility.
Syntax
prostrct list db-name [structure-description-file]
Parameters
db-name
Specifies the multi-volume database whose structure description file you want to update.
structure-description-file
Specifies the structure description file PROSTRCT creates. If you do not specify the
structure description file, PROSTRCT uses the base name of the database and appends a
.st extension. It replaces an existing file of the same name.
It provides storage area, transaction log, and records per block information in the .st file it
produces. Also, PROSTRCT LIST displays storage area names and extent information
including the extent type, size, number, and name.
Notes
Use PROSTRCT LIST any time you make changes to the structure of the database to
verify that the change was successful.
See Chapter 14, Maintaining Database Structure, for more information about the LIST
qualifier with the PROSTRCT utility.
Syntax
prostrct remove db-name extent-token storage-area
Parameters
db-name
Specifies the database where you want to remove a storage area or extent.
extent-token
Specifies the type of extent to remove. Supply one of the following tokens:
d removes a data extent
bi removes a before-image extent
ai removes an after-image extent
tl removes a transaction log extent
storage-area
Specifies the name of the storage area.
Before you remove extents from the TL area, you must disable two-phase commit.
You can verify that the deletion occurred and update the structure description file using
the LIST qualifier with the PROSTRCT utility.
See Chapter 14, Maintaining Database Structure, for more information about the
REMOVE qualifier with the PROSTRCT utility.
Parameters
db-name
Reorder AI extents only if you cannot empty extents, perhaps because of OpenEdge
Replication locking an extent.
REORDER AI will move empty AI extents to follow the busy extent in switch order.
REORDER AI will rename AI extents and associate them with different extent areas. Use
PROSTRCT LIST to generate a new .st file for your database immediately after
reordering. For more information on generating a new .st file, see the "PROSTRCT LIST
qualifier" section on page 21-10.
See the "Managing after-imaging files" section on page 7-9 for more information about
the REORDER AI qualifier.
Syntax
prostrct repair db-name [structure-description-file]
Parameters
db-name
Specifies the database whose control information you want to update.
structure-description-file
Specifies the .st file containing the updated extent information. If you omit the
structure-description-file, PROSTRCT REPAIR uses the db-name.st file to update
the control information.
Notes
You cannot use the REPAIR qualifier to add or remove extents. You can only use it to
change the location of existing extents.
You must manually move the .db file or the data extent. PROSTRCT REPAIR simply
updates the file list of the .db file to reflect the new locations of database extents.
Parameters
db-name
Notes
Database name
Primary database block size and the before-image and after-image block sizes
Total number of all blocks (active blocks, active free blocks, and empty blocks)
When PROSTRCT STATISTICS is run against a database that is in use, values may be
changing as the display is output.
Do not run this utility against a database that has crashed. You must recover the database
first.
See Chapter 14, Maintaining Database Structure, for more information about the
STATISTICS qualifier with the PROSTRCT utility.
Syntax
prostrct unlock db-name [-extents]
Parameters
db-name
Specifies the database you want to unlock.
-extents
Replaces missing extents with empty extents if any database files are missing.
Notes
When the database finds an inconsistency among the data and recovery log, it generates
an error message and stops any attempt to open the database. Typically, inconsistencies
between files are a result of accidental misuse of operating system copy utilities, deletion
mistakes, or incorrectly administered backup and restore procedures.
If the first data file (.d1) is missing, the database cannot open because of the missing
master block. PROSTRCT UNLOCK with the -extents parameter, however, creates an
empty file with the same name and location as the missing file that allows the database to
open. This function helps enable access to a severely damaged database.
PROSTRCT UNLOCK does not repair damaged databases. It opens databases with
inconsistencies in dates and missing extents, but these databases still need to be repaired
before they can be used. For information on repairing databases, see the "Restoring a
database" section on page 5-20.
22
RFUTIL Utility
This chapter describes the OpenEdge database administration utility RFUTIL.
Syntax
rfutil db-name -C qualifier
Parameters
db-name
Specifies the database you are using.
qualifier
Specifies a particular utility or function when you use the rfutil command. You can
supply the qualifiers listed in Table 22-1.
Note:
RFUTIL and its qualifiers support the use of internationalization startup parameters
such as -cpinternal codepage and -cpstream codepage. See Chapter 18, Database
Startup Parameters, for a description of each database-related internationalization
startup parameter.
Table 22-1: RFUTIL utility qualifiers (1 of 2)
Qualifier
Description
AIARCHIVER DISABLE
AIARCHIVER ENABLE
AIARCHIVER END
AIARCHIVER SETDIR
AIARCHIVER SETINTERVAL
AIARCHIVE EXTENT
AIMAGE AIOFF
AIMAGE BEGIN
AIMAGE END
AIMAGE EXTRACT
AIMAGE NEW
AIMAGE QUERY
AIMAGE SCAN
AIMAGE TRUNCATE
AIVERIFY PARTIAL
AIVERIFY FULL
MARK BACKEDUP
ROLL FORWARD
SEQUENCE
Parameters
db-name
This command disables only AI File Management; it does not disable after-imaging.
If the database is online and the AI File Management daemon is running, the daemon is
first shut down, then AI File Management is disabled.
Parameters
db-name
The database broker will automatically start the AI File Management daemon when
the broker starts.
Clients running earlier versions of OpenEdge cannot connect directly to your database
when AI File Management is enabled. They can still connect with a network connection.
Parameters
db-name
Use this command to stop the AI File Management Utility daemon without shutting down the
database server. This command does not stop the after-image process. The AI extents will continue
to fill as part of normal database activity.
Parameters
[-aiarcdircreate ]
db-name
Directs the daemon to create the directories specified by the -aiarcdir parameter, if they
do not currently exist.
Note
Use this command against a database that is online with an active AI File Management Utility
daemon.
Parameters
db-name
Specifies the operation mode of the daemon process and the time the daemon will sleep
between mandatory extent switches in timed mode.
When n is a value between 120 and 86,400, the daemon executes in timed mode, and n
indicates the length of time, in seconds, that the daemon will wait between mandatory
extent switches.
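The timed-mode range above can be expressed as a quick check; the helper is hypothetical, not part of RFUTIL:

```shell
# Hypothetical helper: the daemon runs in timed mode only when n is
# between 120 and 86,400 seconds (two minutes to twenty-four hours).
timed_mode() {
  [ "$1" -ge 120 ] && [ "$1" -le 86400 ]
}

timed_mode 3600 && echo "timed mode: hourly mandatory switches"
timed_mode 60 || echo "below the 120-second minimum"
```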
Note
Use this command against a database that is online with an active AI File Management Utility
daemon.
Parameters
db-name
Specifies the output file for the archived extent. The file must not already exist. If a file
exists with the specified name, AIARCHIVE EXTENT will fail.
Notes
If the database is online and AI File Management is enabled, the daemon process cannot
be running when this command is executed.
If AI File Management is enabled, the extent archive is logged to the AI File Management
log file (database-name.archival.log). If AI File Management is not enabled, the extent
is still archived, but the execution is not logged.
This command implicitly empties the AI extent that is archived, marking it EMPTY.
Parameters
db-name
Use this qualifier to prevent a database crash caused by a pending lack of disk space.
Instead of switching AI extents, you can disable after-imaging.
Caution: If you disable after-imaging and the database crashes, you cannot roll forward.
Notes
If the database is online, re-enable after-imaging with PROBKUP. See the "PROBKUP
utility" section on page 23-13 for details.
Parameters
db-name
You have not marked the database as backed up since the last time the database was
modified.
Parameters
db-name
When you use the RFUTIL AIMAGE END qualifier on a database with AI areas, RFUTIL
marks all of the AI areas as empty. This means that you cannot access any information that
you did not back up for roll-forward recovery.
See the "PROBKUP utility" section on page 23-13 for enabling after-imaging when
the database is online.
See the "RFUTIL AIMAGE BEGIN qualifier" section on page 22-11 for enabling
after-imaging when the database is offline.
Parameters
extent-number
extent-path
db-name
If you do not specify either an extent number or an extent path, RFUTIL marks the oldest
full extent as empty.
If the extent being marked empty is a variable length extent, RFUTIL will truncate the
extent.
This command cannot be executed against a database that has AI File Management
enabled.
Parameters
db-name
Parameters
db-name
Note
Extent Type: Type of each file, either fixed length or variable length.
Extent Status: Status of each file, either empty, full, or busy.
Start Date/Time: Time each file began logging AI notes. This is not applicable to
empty files.
The status of a file might be full even though there is space left over in the file. This can happen
after an online backup because file switch-over occurs at the time of the backup, whether or not
the current file is full. RFUTIL still marks the file as full because, like a full file, it must be
archived and marked empty before the database can reuse it.
Parameters
db-name
-a ai-name
Identifies the AI file containing the blocks to be extracted. The extent must be marked as
full or locked.
-o output-file
Extracting blocks from an AI extent is only beneficial for fixed length extents. There will
be minimal savings of disk space when extracting blocks from a variable length extent.
See the "Managing after-imaging files" section on page 7-9 for more information on
extracting AI blocks from an extent.
Parameters
db-name
You can use this qualifier whether the database is offline or online.
Use this qualifier only when after-imaging is enabled and you have just backed up the AI
area.
Parameters
db-name
Specifies the information you want to gather about the AI extent. You can supply one of
the options from Table 22-2.
Table 22-2:
Query option
Value returned
EXTNUM
STATUS
TYPE
SIZE
USED
NAME
SEQUENCE
STARTDATE
ALL
search-option
Specifies how you are identifying the AI extent to query. Supply one of the options from
column one of Table 22-3.
search-value
Specifies the match criteria for the search-option. Supply the value indicated in column
two of Table 22-3 that matches your search-option.
Table 22-3:
Search-option
Search-value
EXTNUM
NAME
SEQUENCE
Parameters
db-name
verbose
Provides more information from the AI area, including the transaction number, the date
and time the transaction began or ended, and the user ID of the user who initiated the
transaction. You might want to try this on a small test AI area before running it on the
after-image file associated with your database.
-a ai-name
Identifies the AI area to scan.
The specified database does not have to be the database that corresponds to the AI area.
You can use a dummy database to use this command with an AI area for an online
database.
Parameters
db-name
-aiblocksize size
Specifies the size of the AI blocks in each buffer, in kilobytes. The valid values are 1, 2,
4, 8, and 16. The block size cannot be smaller than the database block size.
Notes
After executing this command to change the AI block size, you must perform a full backup
of the database before you can re-enable after-imaging. If you change the BI block size or
cluster size before backing up the database, the block size of the backup will overwrite the
changed block size when the backup is restored.
Increasing the AI block size allows larger AI reads and writes. This can reduce I/O rates
on disks where the AI areas are located. If your operating system can benefit from larger
writes, this option can improve performance. Larger AI block size might also improve
performance for roll-forward recovery processing.
When you execute this command, after-imaging and two-phase commit must be disabled,
and the database must be offline; otherwise, RFUTIL returns an error message.
After you change the AI block size, RFUTIL uses the new block size in all database
operations.
Use the PROSTRCT STATISTICS qualifier to display the block sizes for a database.
Typically, if you change the AI block size, you should also change the before-image (BI)
block and cluster size; otherwise, the increased AI performance will cause a BI bottleneck.
See Chapter 13, Managing Performance, for more information about using the RFUTIL
AIMAGE TRUNCATE utility.
Parameters
db-name
Parameters
db-name
Parameters
db-name
Syntax
rfutil db-name -C roll forward [verbose] [endtime yyyy:mm:dd:hh:mm:ss | endtrans transaction-number] [-B n] [-r] -a ai-name
Parameters
db-name
endtime yyyy:mm:dd:hh:mm:ss
Specifies to roll forward to a certain time. You must specify the ending time as a string of
digits and separate the date and time components with a colon. Transactions are included
in the partial roll forward only if they end before the specified time. For example, to roll
forward to 5:10 PM on July 18, 2002, type 2002:07:18:17:10:00. For RFUTIL to include
a transaction in this partial roll forward, the transaction must have ended on or before
2002:07:18:17:09:59.
endtrans
Specifies to roll forward up to but not including the transaction beginning that contains the
transaction-number. For example, if you specify endtrans 1000, RFUTIL rolls forward
the AI area to transaction 999. If you want to include transaction 1000, you must specify
endtrans 1001.
-B n
Specifies the number of database buffers. The single-user default value is 20.
-r
Directs RFUTIL to use non-reliable (buffered) I/O.
The start and end dates of the AI area being applied to the database
The number of transactions that were active after all AI notes were applied
The ROLL FORWARD qualifier fails if:
Notes
The ROLL FORWARD qualifier always disables after-imaging for the database before
beginning the roll-forward operation. After the roll-forward has completed, you must
re-enable it with the AIMAGE BEGIN qualifier if you want continued AI protection.
You must apply all AI extents associated with the database in the same sequence they were
generated before you can use the database.
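The endtime and endtrans cutoffs described above can be sketched as follows; the helper names are invented for illustration:

```shell
# Invented helpers: build the endtime string (yyyy:mm:dd:hh:mm:ss) and
# compute the last transaction included by endtrans N (transactions are
# rolled forward through N-1).
make_endtime() {
  printf '%04d:%02d:%02d:%02d:%02d:%02d\n' "$1" "$2" "$3" "$4" "$5" "$6"
}
last_included_trans() {
  echo $(( $1 - 1 ))
}

make_endtime 2002 7 18 17 10 0   # 2002:07:18:17:10:00
last_included_trans 1000         # 999
```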
Parameters
db-name
Syntax
rfutil db-name -C roll forward retry [verbose] [endtime yyyy:mm:dd:hh:mm:ss | endtrans transaction-number] [-B n] [-r] -a ai-area
Parameters
db-name
endtime yyyy:mm:dd:hh:mm:ss
Specifies to roll forward to a certain point in time. You must specify the ending time as a
string of digits and separate the date and time components with a colon. Transactions are
included in the partial roll forward only if they end before the specified time. For example,
to roll forward to 5:10 PM on July 18, 2002, type 2002:07:18:17:10:00. For RFUTIL to
include a transaction in this partial roll forward, the transaction must have ended on or
before 2002:07:18:17:09:59.
endtrans
Specifies to roll forward up to but not including the transaction beginning that contains the
transaction-number. For example, if you specify endtrans 1000, RFUTIL rolls forward
the AI area to transaction 999. If you want to include transaction 1000, you must specify
endtrans 1001.
-B n
Specifies the number of database buffers. The single-user default value is 20.
-r

Directs RFUTIL to perform buffered (non-reliable) before-image writes during the roll forward.
Roll forward might encounter a two-phase begin note in a BI area that will signal roll forward
to enable transaction commit logging to the transaction log. If the database does not contain a
TL area, roll forward will abort. To recover from this situation, you should first add a TL area
to your database and then run ROLL FORWARD RETRY.
RFUTIL SEQUENCE is only needed when an AI extent switch occurs after the source database
was backed up, and before a copy is made.
23
Other Database Administration Utilities
This chapter describes miscellaneous OpenEdge database administration utilities, in
alphabetical order. It discusses the purpose, syntax, and primary parameters for each utility,
specifically:
DBMAN utility
DBTOOL utility
PROADSV utility
PROBKUP utility
PROCLUSTER utility
PROCOPY utility
PRODB utility
PRODEL utility
PROLOG utility
PROREST utility
DBMAN utility
Starts, stops, or queries a database. Before you can use the DBMAN command-line utility, you
must use the Progress Explorer Database Configuration Tool to create the database
configuration and store it in the conmgr.properties file.
Syntax

dbman [ -host host-name -port port-number|service-name [ -user user-name ] ] -database db-name { -config config-name -start | -stop | -query }

Parameters
-database db-name
Specifies the name of the database you want to start. It must match the name of a database
in the conmgr.properties file.
-config config-name
Specifies the name of the configuration with which you want to start the database.
-start

Starts the database db-name.

-stop

Stops the database db-name.

-query

Queries the Connection Manager for the status of the database db-name.
-host host-name
Identifies the host machine where the AdminServer is running. The default is the local
host. If your AdminServer is running on a remote host, you must use the -host host-name
parameter to identify the host where the remote AdminServer is running.
-port port-number|service-name
Identifies the port that the AdminServer is listening on. If your AdminServer is running on
a remote host, you must use the -port port-number parameter to identify the port on
which the remote AdminServer is listening. The default port number is 20931.
-user user-name
If your AdminServer is running on a remote host, you must use the -user user-name
parameter to supply a valid user name for that host. You will be prompted for the
password.
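Putting these parameters together, a status query against a remote AdminServer might look like the following; the host, user, and database names are assumptions for illustration:

```
dbman -host dbhost -port 20931 -user admin -database sports2000 -query
```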
Notes
When you specify a user name with the -user parameter, Windows supports three
different formats:
A user name as a simple text string, such as mary, implies a local user whose user
account is defined on the local server machine, which is the same machine that runs
the AdminServer.
A user name as an explicit local user name, in which the user account is defined on the same machine that runs the AdminServer, except the user name explicitly references the local machine domain, for example, .\mary.
A user name as a user account on a specific Windows domain. The general format is
Domain\User, in which the User is a valid user account defined within the domain
and the Domain is any valid Windows server, including the one where the
AdminServer is running.
Do not edit the conmgr.properties file directly. Instead, use the Progress Explorer
Database Configuration Tool.
DBMAN supports the use of internationalization startup parameters such as -cpinternal
codepage and -cpstream codepage. See Chapter 18, Database Startup Parameters, for
a description of each database-related internationalization startup parameter.
The conmgr.properties file stores the database, configuration, and server group
properties. For example:
#
# Connection Manager Properties File
#
%% Properties File
%% version 1.1
%% Oct 31, 2005 1:59:37 PM
#
# The following are optional configuration properties and their default
# values. The legacy option, if applicable, is listed after the second
# comment. Property values set at this level become the default values
# for all configuration subgroups.
#
[configuration]
#
afterimagebuffers=5
# -aibufs
#
afterimageprocess=false
# n/a
#
afterimagestall=true
# -aistall
#
asynchronouspagewriters=1
# n/a
#
beforeimagebufferedwrites=false # -r
#
beforeimagebuffers=5
# -bibufs
#
beforeimageclusterage=60
# -G
#
beforeimagedelaywrites=3
# -Mf
#
beforeimageprocess=true
# n/a
#
blocksindatabasebuffers=0
# -B (calculated as 8*(-n))
#
casetablename=basic
# -cpcase
#
collationtable=basic
# -cpcoll
#
conversionmap=convmap.cp
# -convmap
#
crashprotection=true
# -i
#
databasecodepage=basic
# -cpdb
#
directio=false
# -directio
#
hashtableentries=0
# -hash (calculated as (-B)/4)
#
internalcodepage=iso8859-1
# -cpinternal
#
locktableentries=10000
# -L
#
logcharacterset=iso8859-1
# -cplog
#
maxservers=4
# -Mn
#
maxusers=20
# -n
#
nap=1
# -nap
#
napmax=1
# -napmax
#
pagewritermaxbuffers=25
# -pwwmax
#
pagewriterqueuedelay=100
# -pwqdelay
#
pagewriterqueuemin=1
# -pwqmin
#
pagewriterscan=1
# -pwscan
#
pagewriterscandelay=1
# -pwsdelay
#
semaphoresets=1
# -semsets
#
sharedmemoryoverflowsize=0
# -Mxs
#
spinlockretries=0
# -spin
#
sqlyearoffset=1950
# -yy
#
watchdogprocess=true
# n/a
#
# The following are optional database properties and their default
# values.
# Property values set at this level become the default values for all
# database subgroups.
#
[database]
#
autostart=false
# autostart the defaultconfiguration?
#
databasename=demo
# absolute or relative path + database name
# The following are optional server group properties and their default
# values. The legacy option, if applicable, is listed after the second
# comment. Property values set at this level become the default values
# for all servergroup subgroups.
#
[servergroup]
#
host=localhost
# -H
#
initialservers=0
# n/a
#
maxclientsperserver=0
# -Ma (calculated value)
#
maxdynamicport=5000
# -maxport (5000 for NT; 2000 for UNIX)
#
messagebuffersize=350
# -Mm (4gl only)
#
minclientsperserver=1
# -Mi
#
mindynamicport=3000
# -minport (3000 for NT; 1025 for UNIX)
#
networkclientsupport=true
# false for self-service
#
numberofservers=0
# -Mpb
#
port=0
# -S ; Must be non-zero
#
# when networkclientsupport=true
#
prosqltrc=nnnnnnnnnnn
# turn on various levels of SQL tracing
#
reportinginterval=1
# -rpint (4gl only)
#
serverexe=<4gl server location> # _mprosrv (4gl only)
#
type=both
# n/a
[configuration.sports2000.defaultconfiguration]
database=sports2000
displayname=defaultConfiguration
servergroups=sports2000.defaultconfiguration.defaultservergroup
[database.sports2000]
autostart=true
configurations=sports2000.defaultconfiguration
databasename=d:\work\database\101a\AuditEnabled\sports2000
defaultconfiguration=sports2000.defaultconfiguration
displayname=sports2000
[environment]
[servergroup.sports2000.defaultconfiguration.defaultservergroup]
configuration=sports2000.defaultconfiguration
displayname=defaultServerGroup
port=14000
DBTOOL utility
Diagnostic tool that identifies possible record issues and fixes SQL Width violations.
Syntax
dbtool db-name
Parameters
db-name
Figure 23–1: DBTOOL main menu
SQL Width & Date Scan w/Report Option. This option reports on the following SQL values:
_Field._Width
year_value<0
year_value>9999
In the generated report, three asterisks (***) in the Error column indicate SQL width or
date violations.
SQL Width Scan w/Fix Option. This option scans for width violations over the
specified percentage of the current maximum width, and increases SQL width when
necessary. You are prompted to enter a number indicating the percentage above the current
maximum width to allocate for record growth.
For example, if the current maximum width of a field is 100, and you specify 10 percent
for growth, DBTOOL checks the SQL Width of the field, and if it is less than 110,
increases it to 110. If SQL Width is larger than the field's current maximum width plus the
percentage for growth, SQL Width is not changed.
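The width rule above reduces to simple integer arithmetic; this is a minimal sketch with assumed sample values, not DBTOOL's actual implementation:

```shell
# Sketch of the SQL-width fix rule: grow SQL Width to the current
# maximum width plus a growth percentage, but never shrink it.
current_max=100   # current maximum width of the field's data
growth_pct=10     # percentage entered at the DBTOOL prompt
sql_width=95      # field's current SQL Width (_Field._Width)

target=$(( current_max + current_max * growth_pct / 100 ))
if [ "$sql_width" -lt "$target" ]; then
  sql_width=$target   # widen to current max plus growth allowance
fi
echo "$sql_width"
```

With these sample numbers the threshold is 110, so a width of 95 is raised to 110, while any width already at or above 110 would be left unchanged.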
Record Validation. This option compares the physical storage of the record to the
schema definition and reports discrepancies.
Record Version Validation. This option performs record validation before and after
upgrading the schema version of a record. The first check compares the record to the
current schema definition in the record header. After that check completes, if the table has
a newer definition of the schema than the record, the record schema is updated, and then
the physical record is compared to the new definition.
Read or validate database block(s). This option validates the information in the
database block header. There are three levels of validation:
This option can be invoked by choosing to validate either all record blocks in one area or
all record blocks in all areas.
Note:
The validation process will report the first error in a block then proceed to the next
record block. Details of the errors encountered will be recorded in the database .lg
file.
Record fixup. This option scans records for indications of possible corruption.
Schema Validation. This option checks for inconsistencies in the database schema.
This option currently identifies errors in schema records for word indexes. If an error is
detected, this option reports the index number and recommends rebuilding that index.
Enable/Disable File Logging. This option toggles the redirection of the tool output to
a file named dbtool.out. DBTOOL writes this file to your current working directory.
For menu options 1-6, you must know whether your database has a server running. You will be prompted to connect to the database as shown in the following procedure.
1. Enter your choice at the main menu prompt.

2. Enter the connection code that matches how the database is being accessed. An invalid choice fails with an error such as:

dsmUserConnect failed rc = -1

After you enter a valid connection code, the prompts specific to the functionality of your selection appear.

3. Continue through the remaining prompts for your option. Common prompts include the following:
Display. Enter the verbose level. The verbose level defines the amount of output displayed, with zero being the least verbose. The following prompt appears for display:
Notes
DBTOOL is a multi-threaded utility. The connection type you specify when running the
utility determines the number of threads DBTOOL will run. Because each thread uses a
database connection, be aware of what the broker startup parameter (-n) is set to.
Specifying 1.5*(number of CPUs) as the number of threads to run DBTOOL with is
usually adequate.
DBTOOL sends output to stdout. To redirect output to a file, enter the following syntax
when starting DBTOOL:
Syntax
dbtool db-name 2 > filename.out
If you decide to redirect output after entering DBTOOL, select Enable/Disable File
Logging from the DBTOOL menu. This option redirects the output to a file named
dbtool.out. DBTOOL writes this file to your current working directory.
DBTOOL does not fix date errors, it only identifies them. To fix date errors, use the Data
Dictionary or Data Administration tool to modify the fields that contain incompatible date
values.
Parameters

[ -t | -l | -l2 ]
db-name1

-t

Tight compare. Checks all fields, including backup counters and master block fields.
-l
Loose compare. Does not report differences in the master block last modification date or
the backup update counter.
-l2
Second loose compare. Does not report differences in the incremental backup field of the
database block headers.
Note
PROADSV utility
Starts, stops, or queries the current installation of an AdminServer on UNIX.
Syntax

proadsv { -start | -stop | -query } [ -port port-number ] [ -adminport port-number ] [ -cluster -hostname host-name ] [ -help ]

Parameters

-start

Starts the AdminServer.

-stop

Stops the AdminServer.

-query

Queries the AdminServer for its status.

-port port-number
Specifies the listening port number for online command utilities, such as DBMAN. If a
port number is not specified, it defaults to 20931.
-adminport port-number
Specifies the listening port number for communication between a server group and an
AdminServer. The default port number is 7832.
-cluster -hostname host-name

Specifies that the AdminServer should bind the RMI registry to the host host-name. In a clustered environment, use the cluster alias as the value for host-name. In the event of a failover, using the cluster alias guarantees availability of the RMI registry for the AdminServer.
-help

Displays command-line help for the utility.
Notes
HKEY_LOCAL_MACHINE\SOFTWARE\PSC\AdminService\version\StartupCmd
or
HKEY_LOCAL_MACHINE\SOFTWARE\PSC\AdminService\version\ShutdownCmd
To change the default port, add -port or -adminport and the port number to the end of
the value. If you add both -port and -adminport, be sure not to use the same port number.
Be sure to leave a space between -port or -adminport and the port number. For example:
To run more than one AdminServer on a single system, specify a unique -adminport
and -port port-number for each AdminServer. Failure to do so can result
in communication errors between AdminServer and server groups.
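For example, a second AdminServer might be started on non-default ports like this; the port numbers are arbitrary assumptions:

```
proadsv -start -port 20932 -adminport 7833
```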
See Chapter 3, Starting Up and Shutting Down, for additional information describing
the Progress Explorer Framework, including an AdminServer.
PROBKUP utility
Backs up an OpenEdge RDBMS, including the database, before-image files, and transaction log
(TL) extents, and optionally enables after-imaging and AI File Management during an online
backup.
Syntax

probkup [ online ] db-name [ incremental ] device-name [ enableai ] [ enableaiarchiver -aiarcdir dirlist [ -aiarcdircreate ] [ -aiarcinterval n ] ] [ parameters ]

Parameters

online

Specifies that the backup is performed while the database remains available to users.

db-name

Specifies the database you want to back up.

incremental

Specifies an incremental backup, which copies only the blocks that have changed since the previous backup.

device-name

Identifies a special device (for example, a tape drive) or a standard file. If device-name identifies a special device, PROBKUP assumes the device has removable media, such as a tape or a floppy diskette. For Windows, use \\.\tape0 for the device name if you are backing up to a tape drive.
enableai
Directs PROBKUP to enable after-imaging as part of the online backup process. The
backup must be full and online to enable after-imaging. See Chapter 7, After-imaging,
for more information on enabling after-imaging online.
enableaiarchiver -aiarcdir dirlist
Directs PROBKUP to enable the AI File Management Utility as part of the online backup
process. You must supply dirlist, a comma separated list of directories where archived
after-image files are written by the AI File Management Utility. The directory names can
not have any embedded spaces. In Windows, if you specify more than one directory, place
dirlist in quotes. The directories must exist, unless you also specify -aiarcdircreate
to direct the utility to create the directories. For more information, see the AI File
Management utility section on page 716.
-aiarcdircreate
Directs the AI File Management utility to create the directories specified by -aiarchdir.
-aiarcinterval n
Include this parameter to specify timed mode operation of AI File Management. Omit this
parameter for on-demand mode, and no mandatory extent switches other than when an
extent is full. n sets the elapsed time in seconds between forced AI extent switches. The
minimum time is 2 minutes; the maximum is 24 hours.
-estimate
Indicates that the backup will give a media estimate only. Use the Scan parameter when
using the Incremental or Compression parameters to get an accurate estimate.
Use -estimate for offline backups only. PROBKUP does not perform a backup when the
-estimate parameter is used.
-vs n
Indicates the volume size in database blocks that can be written to each removable volume.
Before PROBKUP writes each volume, it displays a message that tells you to prepare the
next volume. After writing each volume, a message tells you to remove the volume.
If you use the Volume Size parameter, the value must be greater than the value of the
Blocking Factor parameter.
If you do not use -vs, PROBKUP assumes there is no limit and writes the backup until
completion or until the volume is full. When the volume is full, PROBKUP prompts you
for the next volume.
-bf n
Indicates the blocking factor for blocking data output to the backup device. The blocking
factor specifies how many blocks of data are buffered before being transferred to the
backup device. The primary use for this parameter is to improve the transfer speed to
tape-backup devices by specifying that the data is transferred in amounts optimal for the
particular backup device. The default for the blocking factor parameter is 34. The
maximum value is 1024.
-verbose
Directs the PROBKUP utility to display information during the backup. If you specify the
Verbose parameter, PROBKUP displays Backed up n blocks in hh:mm:ss every 10
seconds. If you do not specify the Verbose parameter, the message appears only once
when the backup is complete.
-scan
Directs PROBKUP to perform an initial scan of the database and to display the number of
blocks that will be backed up and the amount of media the backup requires. You cannot
use the -scan parameter for online backups.
For full backups, if you specify -scan as well as -com, PROBKUP scans the database and
computes the backup media requirements after the data is compressed.
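For instance, a compressed full offline backup with an upfront media scan might be invoked as follows; the database and device names are assumptions for illustration:

```
probkup mydb /backups/mydb.bck -com -scan -verbose
```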
-io i
Specifies an incremental overlap factor. The incremental overlap factor determines the
redundancy among incremental backups. An incremental overlap factor of one (1) on
every backup allows for the loss of one incremental backup in a backup series, as long as
the immediate predecessor of that backup is not also lost. An overlap factor of two (2)
allows for losing the two immediate predecessors. The default overlap factor is zero (0).
-com
Indicates that the data should be compressed prior to writing it on the backup media. The
unused portion of index and record blocks is compressed to a 3-byte compression string.
Free blocks are compressed to the length of their header, 16 bytes.
-red i
Sets the amount of redundancy to add to the backup for error correction. The value i is a
positive integer that indicates the number of blocks for every error correction block.
PROBKUP creates an error correction block for every i blocks and writes it to the backup
media. You can use error correction blocks to recover corrupted backup blocks. See
Chapter 5, Backing Up a Database, for more information about error correction blocks
and data recovery.
The lower the redundancy factor, the more error correction blocks are created. If you
specify a redundancy of one (1), you completely duplicate the backup, block for block.
Because of the amount of time and media required to create the error correction blocks,
use this parameter only if your backup media is unreliable. The default for the redundancy
parameter is zero (0) indicating no redundancy.
-norecover
Do not perform crash recovery before backing up the database, but back up the BI files.
Notes
When restoring a backup, the target database must contain the same physical structure as
the backup version. For example, it must have the same number of storage areas, records,
blocks, and blocksize.
You cannot perform an online backup of:

A database that was started with the No Shared Memory (-noshm) parameter

A database that was started with the No Crash Protection (-i) parameter
If you run the PROBKUP utility at the same time another process is accessing the same
backup device, you might receive a sharing violation error.
If you use the Compression parameter, you reduce the size of your backup by 10 percent
to 40 percent, depending on your database.
If the BI file is not truncated before you perform a backup, the database engine performs
database recovery.
PROCLUSTER utility
The PROCLUSTER command-line interface provides a user-friendly interface to Clusters.
With PROCLUSTER, you can:
Syntax

procluster db-name { enable [ -pf params-file ] [ AI ] [ BI ] | disable | start | stop | terminate | isalive | looksalive }

Parameters

db-name

Specifies the fully qualified path of the database.

enable

Enables a database so that it will fail over properly. Before the database can be enabled, it must be created and reside on a shared disk. The database must not be in use when it is enabled as a cluster resource.

When you enable a database as a resource using PROCLUSTER enable, Clusters performs the following:
-pf params-file
Specifies the file containing any parameters that the database requires when started.
The parameter file is required to:
Be named db-name.pf
AI
Directs PROCLUSTER to enable the after-image files, and an AI writer as resources with
a dependency on the database. After-imaging must have previously been enabled on the
database. See Chapter 7, After-imaging, for information on after-imaging.
BI

Directs PROCLUSTER to enable the before-image files, and a BI writer, as resources with a dependency on the database.

disable

Removes the identified database resource if the database has been previously enabled for failover. Specifying the database name automatically disables any other optional dependencies specified when the database was enabled.
When you remove a database resource using PROCLUSTER disable, Clusters does the
following:
Deletes the resource from the cluster manager software once it is in an offline state
Deletes the group from the cluster manager software if the resource is the last resource in
the resource group.
start
Starts a cluster-enabled database. The database must have been previously enabled as a cluster resource.
Note: PROCLUSTER will append -pf db-name.pf to the proserve command that is
generated to start the database. The start command will fail if this parameter file is
not found.
stop
Stops a cluster-protected database. The database must be a cluster resource. When you
stop the database with PROCLUSTER stop, Clusters does the following:
Notifies the cluster that the resource should be stopped without fail over.
terminate

looksalive

The only state of the database that returns a value of Looks Alive is when the database is enabled and started. All other states will return the following:

isalive
Determines if a resource is actually operational by querying the active system for the
status of the specified database.
The isalive exit status returns the following text if the database returns a successful
query:
Note
PROCLUSTER ENABLE registers the database as a cluster resource even if there are errors in the command for the helper processes. To correct the errors, you must first use PROCLUSTER DISABLE to unregister the database, and then use PROCLUSTER ENABLE to re-register it without errors.
PROCOPY utility
Copies an existing database.
Syntax
procopy source-db-name target-db-name [ -newinstance ] [ -silent ]

Parameters
source-db-name
Specifies the database you want to copy. You cannot copy the database if you have a server
running on it.
target-db-name
Specifies the structure file or the new database. If you specify a directory without a
filename, PROCOPY returns an error.
The value you specify can be any combination of letters and numbers, starting with a letter.
Do not use ABL keywords or special characters, such as commas or semicolons. The
maximum length of target-db-name varies depending on the underlying operating
system. For specific limits, see your operating system documentation.
-newinstance

Directs PROCOPY to create a new GUID for the target database.
Notes
If you do not supply the .db extension for databases, PROCOPY automatically appends it.
A target database must contain the same physical structure as the source. For example, it
must have the same number of storage areas, records, blocks, blocksize, and cluster size.
Databases from releases prior to 10.1A do not have the schema support for a database
GUID. When copying one of these older databases, the GUID field will not be added to
the database. If the -newinstance qualifier is used, it is silently ignored. Use PROUTIL
UPDATESCHEMA to determine if your schema is up to date. For more information, see
the PROUTIL UPDATESCHEMA qualifier section on page 2094.
Databases supplied in the install directory for this release contain the field for a database
GUID, but the field contains the Unknown value (?) rather than a valid GUID. When
PROCOPY is used to copy one of these databases, the GUID field of the target database
is automatically set.
PRODB utility
Creates a new OpenEdge database.
Syntax
prodb [ new-db-name ] [ empty | demo | sports | isports | sports2000 | old-db-name ] [ -newinstance ]

Parameters
new-db-name
Specifies the name of the database you are creating. If you specify a directory without a
filename, PRODB returns an error.
The value you specify can be any combination of letters and numbers, starting with a letter.
Do not use ABL keywords or special characters, such as commas or semicolons. The
maximum length of new-db-name varies, depending on the underlying operating system.
See Chapter 2, OpenEdge RDBMS Limits, for details on specific limits.
empty
Specifies that the new database is a copy of the empty database located in the OpenEdge
install directory. PRODB knows where to locate the empty database, so you do not need
to provide a pathname to it.
In addition to the default empty database, PRODB allows you to create other empty
database structures with different block sizes:
empty (default).
To create these empty database structures, however, you must specify the pathname to
where OpenEdge is installed, or use the DLC environment variable. For example, use the
following command:
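For example, assuming the DLC environment variable points at the OpenEdge install directory, a copy of the 4K-block empty database might be created like this (the empty4 structure name is assumed from the standard install layout):

```
prodb mydb $DLC/empty4
```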
sports
Specifies that the new database is a copy of the international Sports database.
sports2000

Specifies that the new database is a copy of the Sports2000 database.

Notes

You can also create a new database from the Data Dictionary.
When you use the PRODB utility and give the copy a new name, you cannot run the original r-code against the new database, because the database name is saved with the r-code. To run the r-code against the new database, use the Logical Database Name (-ld) startup parameter and specify the original database name.
A new database must contain the same physical structure as the source. For example, it
must have the same number of storage areas, records, blocks, and blocksize.
Databases from releases prior to 10.1A do not have the schema support for a database
GUID. When copying one of these older databases, the GUID field will not be added to
the database. If the -newinstance qualifier is used, it is silently ignored. Use PROUTIL
UPDATESCHEMA to determine if your schema is up to date. For more information, see
the PROUTIL UPDATESCHEMA qualifier section on page 2094.
Databases supplied in the install directory for this release contain the field for a database
GUID, but the field contains the Unknown value (?) rather than a valid GUID. When
PRODB is used to copy one of these databases, the GUID field of the target database is
automatically set.
When issuing the PRODB command, specify the target and source database names (they
are positional) before specifying an internationalization startup parameter such as
-cpinternal.
PRODB also supports the parameter file (-pf) startup parameter that contains a list of
valid startup parameters.
PRODEL utility
Deletes an OpenEdge database.
Syntax
prodel db-name
Parameters
db-name
When you delete a database, PRODEL displays a message to notify you that it is deleting
the database, log, BI, and AI areas for that database.
When you delete a database, PRODEL deletes all associated areas (database, log,
before-image (BI) and after-image (AI) files) that were created using the structure
description file.
If an AI area exists in a database, a warning appears. Back up the AI area before deleting the database.
PRODEL does not delete the database's .st file. Because the .st file remains, you can recover the database after it has been deleted.
Connect to an AdminServer
For instructions on using Progress Explorer to complete these tasks, see the Progress Explorer
online help.
To launch Progress Explorer from the Start menu, choose Programs→ OpenEdge→ Progress Explorer Tool.
Notes
Progress Explorer is a snap-in to the Microsoft Management Console (MMC), the system
and network administration tools framework. For more information about MMC, see the
MMC online help.
PROLOG utility
Truncates the log file.
Syntax

prolog db-name [ -online ]

Parameters
db-name
Specifies the database whose log file you want to truncate. Do not include the .db suffix.
-online
Specifies that the database whose log is being truncated is currently online.
Notes
If you want to save the log file, use operating system utilities to back it up before using
PROLOG.
See Chapter 16, Logged Data, for more information about log files.
PROREST utility
Verifies the integrity of a database backup or restores a full or incremental backup of a database.
Syntax

prorest db-name device-name [ -list | -vp | -vf ]

Parameters

db-name

Specifies the database you want to restore to.

device-name

Identifies the directory pathname of the input device (for example, a tape drive) or a
standard file from which you are restoring the data. If device-name identifies a block or
character special device, PROREST assumes the device has removable media, such as a
tape or a floppy diskette.
-vp
Specifies that the restore utility reads the backup volumes and computes and compares the
backup block cyclic redundancy checks (CRCs) with those in the block headers. To
recover any data from a bad block, you must have specified a redundancy factor when you
performed the database backup. See Chapter 5, Backing Up a Database, for more
information about error correction blocks and data recovery.
-vf
Specifies that the restore utility only compares the backup to the database block for block.
-list
Provides a description of all application data storage areas contained within a database
backup. Use the information to create a new structure description file and database so you
can restore the backup.
Notes
When restoring a backup, the target database must contain the same physical structure as
the backup version. For example, it must have the same number of storage areas, records
per block, and block size.
Before you restore a database, you might want to verify that your backup does not contain
any corrupted blocks. You can use the PROREST utility to verify the integrity of a full or
incremental backup of a database by using the Partial Verify or Full Verify parameters.
The Partial Verify or Full Verify parameters do not restore or alter the database. You must
use the PROREST utility separately to restore the database.
You can use the Partial Verify parameter with both online and offline backups.
Use the Full Verify parameter immediately after performing an offline backup to verify
that the backup is correct.
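For example, a partial verify pass over a backup volume before restoring might be run like this; the database and backup file names are assumptions:

```
prorest mydb /backups/mydb.bck -vp
```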
24
SQL Utilities
This chapter describes the OpenEdge database administration utilities for SQL. The utilities
include:
SQLDUMP utility
SQLLOAD utility
SQLSCHEMA utility
SQLDUMP utility
A command-line utility that dumps application data from SQL tables into one or more files.
Syntax
sqldump -u user_name [ -a password ] [ -C code_page_name ] -t [ owner_name.]table_name1 [ ,[ owner_name.]table_name2,... ] db_name
Parameters
-u user_name
Specifies the user ID SQLDUMP uses to connect to the database. If you omit the
user_name and password parameter values, SQLDUMP prompts you for the values. If you
omit user_name and supply a password, SQLDUMP uses the value defined in the USER
environment variable as the user_name value.
-a password

Specifies the password used to connect to the database.

-C code_page_name

A case-insensitive character string that specifies the name of the dump file's code page. If the -C parameter specifies a code page name that is not valid, the utility reports a run-time error. If the -C parameter does not appear at all, the code page name defaults to the client's internal code page:

The value of the SQL_CLIENT_CHARSET environment variable, if it is set

If not set, the name of the code page of the client's locale
For example, you might use the -C parameter to have a Windows client using the MS1250
code page produce a dump file using the ISO8859-2 code page (to read later on a UNIX
machine, perhaps). Although you can accomplish this by setting the clients
SQL_CLIENT_CHARSET environment variable, using the -C parameter might be easier.
-t owner_name.table_name
Specifies a list of one or more tables to dump to a file. This parameter is required. Pattern
matching is supported in both owner_name and table_name, using a percent sign (%) for
one or more characters and an underscore (_) for a single character. The pattern matching
follows the standard defined by the LIKE predicate in SQL.
You can dump a single table, a set of tables, or all tables. If you omit the optional
owner_name qualifier, SQLDUMP uses the name specified by the -u parameter.
db_name
Specifies the database where you are dumping tables. You can dump tables from one
database each time you invoke SQLDUMP. There is no option flag preceding the db_name.
This parameter is required and must be the last parameter specified. The database name is
specified in the following way: progress:T:localhost:demosv:jo.
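The LIKE-style patterns accepted by the -t parameter can be sketched by translating % and _ into a regular expression. The table names below are hypothetical, chosen to mirror the examples later in this section; this is an illustration of the matching rule, not SQLDUMP's actual implementation.

```python
import re

def like_to_regex(pattern: str) -> str:
    """Translate a SQL LIKE-style pattern (% = any run of characters,
    _ = exactly one character) into an anchored regular expression."""
    parts = []
    for ch in pattern:
        if ch == "%":
            parts.append(".*")
        elif ch == "_":
            parts.append(".")
        else:
            parts.append(re.escape(ch))
    return "^" + "".join(parts) + "$"

def matches(pattern: str, name: str) -> bool:
    return re.match(like_to_regex(pattern), name) is not None

# Hypothetical owner_name.table_name values; select tables whose names
# begin with cust, invent, or sales, under any owner:
tables = ["martin.customers", "tucker.inventory", "tucker.salesreps", "martin.orders"]
selected = [t for t in tables
            if any(matches(p, t) for p in ("%.cust%", "%.invent%", "%.sales%"))]
print(selected)  # ['martin.customers', 'tucker.inventory', 'tucker.salesreps']
```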
SQLDUMP dumps application data from SQL tables into one or more files. You can load the
data from the files into another database with the SQLLOAD utility. The SQLDUMP utility
does not dump data from ABL tables.
The SQLDUMP utility writes user data in row order into ASCII records with variable-length
format. The column order in the files is identical to the column order in the tables. The utility
writes both format and content header records to the dump file. You can dump multiple tables
in a single execution by specifying multiple table names, separated by commas. Make sure there
are no spaces before or after commas in the table list.
Data for one table always goes to a single dump file. Each dump file corresponds to one database
table. For example, if you specify 200 tables in the SQLDUMP command, you will create 200
dump files. The SQLDUMP utility assigns the filenames that correspond to the owner_name and
table_name in the database, with the file extension .dsql. If a dump file for a specified table
already exists, it will be overwritten and replaced. Dump files are created in the current working
directory.
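The filename rule described above (one owner_name.table_name.dsql file per table, in the current working directory) can be sketched as follows; the owner and table names here are hypothetical examples.

```python
# Each table given to -t produces exactly one dump file named
# owner_name.table_name.dsql in the current working directory.
def dump_filename(owner: str, table: str) -> str:
    return f"{owner}.{table}.dsql"

tables = [("martin", "customers"), ("martin", "products")]
files = [dump_filename(owner, table) for owner, table in tables]
print(files)  # ['martin.customers.dsql', 'martin.products.dsql']
```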
The format of the records in a dump file is similar to the ABL .d file format. Field values
can contain any embedded characters except NULL values, so commas, newlines, and
other control characters are allowed in the data.
Any error is a fatal error, and SQLDUMP halts the dumping process so that data integrity will
not be compromised. SQLDUMP reports errors to standard output.
After successful processing, SQLDUMP writes a summary report to standard output. For each
table SQLDUMP processes, the report shows:
Table name
Dump filename
Examples
This example directs the SQLDUMP utility to write the data from two tables to two dump files.
The user_name and password for connecting to the database are tucker and sulky. The tucker
account must have the authority to access the customers and products tables in database
salesdb with owner_name martin, as shown:

sqldump -u tucker -a sulky -t martin.customers,martin.products progress:T:thunder:4077:salesdb
This example directs the SQLDUMP utility to write the data from all tables in the salesdb
database that begin with any of these strings: cust, invent, and sales, and having any owner
name that the user tucker has authority to access. The user_name and password for connecting
to the database are tucker and sulky, as shown:

sqldump -u tucker -a sulky -t %.cust%,%.invent%,%.sales% progress:T:thunder:4077:salesdb
This example directs the SQLDUMP utility to write the data from all tables for all owner names
in the salesdb database:

sqldump -t %.% progress:T:thunder:4077:salesdb
Notes
Before you can execute SQLDUMP against a database server, the server must be
configured to accept SQL connections and must be running.
Each dump file records character set information in the identifier section of each file. For
example:
A^B^CProgress
sqlschema
v1.0
Quote fmt
A^B^CTimestamp
1999-10-19
19:06:49:0000
A^B^CDatabase
dumpdb.db
A^B^CProgress Character Set: iso8859-1
A^B^CJava Character Set: Unicode UTF-8
A^B^CDate Format: MM/DD/YYYY
The character set recorded in the dump file is the client character set. The default character
set for all non-JDBC clients is taken from the local operating system through the operating
system APIs. JDBC clients use the Unicode UTF-8 character set.
To use a character set different than that used by the operating system, set the
SQL_CLIENT_CHARSET environment variable to the name of the preferred character
set. You can define any OpenEdge supported character set name. The name is not case
sensitive.
Backslash (\)
SQLDUMP supports schema names that contain special characters such as, a blank space,
a hyphen (-), or pound sign (#). These names must be used as delimited identifiers.
Therefore, when specifying names with special characters on a UNIX command line,
follow these rules:
So that the command line does not strip the quotes, use a backslash (\) to escape the
double quotes used for delimited identifiers.
Use double quotes to enclose any names with embedded spaces, commas, or
characters special to a command shell (such as the Bourne shell). This use of quotes
is in addition to quoting delimited identifiers.
For example, to dump the table Yearly Profits, use the following UNIX command-line
syntax:
Syntax
sqldump -t "\"Yearly Profits\"" -u xxx -a yyy db_name
In Windows, the command interpreter rules for the use of double quotation marks vary
from those on UNIX.
By default, SQLDUMP displays promsgs messages using the code page corresponding to
code_page_name. That is, if you are dumping a Russian database, and code_page_name
specifies the name of a Russian code page, the client displays promsgs messages using the
Russian code page (unless you specify a different code page by setting the client's
SQL_CLIENT_CHARSET_PROMSGS environment variable).
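The backslash-escaping rule for delimited identifiers can be checked with Python's shlex module, which implements POSIX shell word splitting; this is a sketch of how a Bourne-style shell reduces the quoting before SQLDUMP ever sees its arguments.

```python
import shlex

# The backslash-escaped inner quotes survive shell processing, so the
# utility receives the delimited identifier with its quotes intact:
argv = shlex.split(r'sqldump -t "\"Yearly Profits\"" -u xxx -a yyy db_name')
print(argv[2])  # "Yearly Profits"
```

The -t argument that reaches the utility still carries the double quotes, which is what marks Yearly Profits as a delimited identifier.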
SQLLOAD utility
A command-line utility that loads user data from a formatted file into an SQL database.
Syntax
sqlload -u user_name [ -a password ] [ -C code_page_name ]
-t [ owner_name.]table_name1 [ [,owner_name.]table_name2, ... ]
[ -l log_file_name ] [ -b badfile_name ] [ -e max_errors ]
[ -s skipcount ] [ -m maxrows ] [ -F comma | quote ]
db_name
Parameters
-u user_name
Specifies the user ID SQLLOAD uses to connect to the database. If you omit the user_name
and password, SQLLOAD prompts you for these parameter values. If you omit the
user_name and supply a password, SQLLOAD uses the value defined in the USER
environment variable.
-a password
Specifies the password used to connect to the database.
-C code_page_name
A case-insensitive character string that specifies the name of the dump file's code page. If
the -C parameter specifies a code page name that is not valid, a run-time error is reported.
If the -C parameter does not appear at all, the code page name defaults to the client's
internal code page:
The value of the SQL_CLIENT_CHARSET environment variable, if it is set
If not set, the name of the code page of the client's locale.
For example, you might use the -C parameter to load a dump file whose code page is
ISO8859-2, using a Windows client whose code page is MS1250. Although you can
accomplish this by setting the client's SQL_CLIENT_CHARSET environment variable,
using the -C parameter might be easier.
-t owner_name.table_name
Specifies a list of one or more tables to load into a database. This parameter is required.
Pattern matching is supported, using a percent sign (%) for multiple characters and an
underscore (_) for a single character. The pattern matching follows the standard for the
LIKE predicate in SQL. You can load a single table, a set of tables, or all tables. If you omit
the optional owner_name table qualifier, SQLLOAD uses the name specified by the -u
parameter. The files from which SQLLOAD loads data are not specified in the SQLLOAD
syntax. The utility requires that the filename follow the naming convention
owner_name.table_name.dsql.
-l log_file_name
Specifies the file to which SQLLOAD writes errors and statistics. The default is standard
output.
-b badfile_name
Specifies the file where SQLLOAD writes rows that were not loaded.
-e max_errors
Specifies the maximum number of errors that SQLLOAD allows before it terminates
processing. The default is 50.
-m maxrows
Directs SQLLOAD to check for syntax errors without loading any rows.
-F comma | quote
db_name
Identifies the database where you are loading tables. You can load tables into a single
database each time you invoke SQLLOAD. There is no option flag preceding the db_name.
This parameter is required, and must be the last parameter specified. The database name
is specified in the following way: progress:T:localhost:demosv:jo.
SQLLOAD loads user data from a formatted file into an SQL database. Typically, the source
file for the load is created by executing the SQLDUMP utility. The SQLLOAD utility can
process a source file created by another application or utility, if the format of the file conforms
to SQLLOAD requirements. Files made available to SQLLOAD for processing must have
the .dsql extension. See the entry on SQLDUMP for a description of the required file format.
The SQLLOAD utility reads application data from variable-length text-formatted files and
writes the data into the specified database. The column order is identical to the table column
order. SQLLOAD reads format and content header records from the dump file. You can load
multiple tables in a single execution by specifying multiple table names, separated by commas.
Data for one table is from a single dump file. Every source file corresponds to one database
table. For example, if you specify 200 tables in the SQLLOAD command, you will load 200
database tables.
The format of the records in the input files is similar to the ABL .d file dump format. See the
SQLDUMP utility section on page 242 for a description of the record format. The maximum
record length SQLLOAD can process is 32K.
Each database record read is share-locked for consistency. You must ensure that the SQL Server
has a lock table large enough to contain one lock for every record in the table. The default lock
table size is 10,000 locks.
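Because every loaded record is share-locked, the lock table must be at least as large as the biggest table being loaded. A quick sizing check, using hypothetical row counts for illustration:

```python
DEFAULT_LOCK_TABLE = 10_000   # default lock table size cited above

# Hypothetical row counts for the tables to be loaded:
row_counts = {"customers": 8_250, "order_lines": 41_900}

# The largest single table drives the requirement, since each table's
# records are all locked while that table loads:
needed = max(row_counts.values())
print(needed)                        # 41900
print(needed <= DEFAULT_LOCK_TABLE)  # False: the lock table must be enlarged first
```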
SQLLOAD writes any errors to standard output and halts the loading process for any error so
that data integrity is not compromised.
Examples
This example directs the SQLLOAD utility to load the data from two dump files into the
salesdb database. The input files to SQLLOAD must be tucker.customers.dsql and
tucker.products.dsql, as shown:

sqlload -u tucker -a sulky -t tucker.customers,tucker.products progress:T:thunder:4077:salesdb
This example directs SQLLOAD to load the data from all appropriately named dump files into
the specified tables in the salesdb database:

sqlload -t %.% progress:T:thunder:4077:salesdb
Notes
Before you can execute SQLLOAD against a database server, the server must be
configured to accept SQL connections and must be running.
The character set used by SQLLOAD must match the character set information recorded
in each dump file. If the character sets do not match, the load is rejected. You can use the
SQL_CLIENT_CHARSET environment variable to specify a character set.
Each dump file you create with SQLDUMP contains character set information about that
file. The character set recorded in the dump file is the client character set. The default
character set for all non-JDBC clients is taken from the local operating system through the
operating system APIs. JDBC clients use the Unicode UTF-8 character set.
To use a character set different than that used by the operating system, set the
SQL_CLIENT_CHARSET environment variable to the name of the preferred character
set. You can define any OpenEdge-supported character set name. The name is not case
sensitive.
At run time, SQLLOAD reports an error if it detects a mismatch between the code page of
the dump file being loaded and the code page of the client running SQLLOAD.
By default, SQLLOAD displays promsgs messages using the code page corresponding to
code_page_name. That is, if you are restoring a Russian database and code_page_name
specifies the name of a Russian code page, the client displays promsgs messages using the
Russian code page (unless you specify a different code page by setting the client's
SQL_CLIENT_CHARSET_PROMSGS environment variable).
Backslash (\)
SQLLOAD supports schema names that contain special characters, such as a blank space,
a hyphen (-), or pound sign (#). These names must be used as delimited identifiers.
Therefore, when specifying names with special characters on a UNIX command line,
follow these rules:
Use a backslash (\) to escape the double quotes used for delimited identifiers.
Use double quotes to enclose any names with embedded spaces, commas, or
characters special to a command shell (such as the Bourne shell). This use of quotes
is in addition to quoting delimited identifiers.
For example, to load the table Yearly Profits, use the following UNIX command-line
syntax:
Syntax
sqlload -u xxx -a yyy -t "\"Yearly Profits\"" db_name
In Windows, the command interpreter rules for the use of double quotation marks vary
from those on UNIX.
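The code-page matching described in these notes can be illustrated with Python's codec machinery (cp1250 and iso8859_2 are Python's names for the MS1250 and ISO8859-2 code pages; the Czech sample text is an arbitrary example):

```python
text = "Žluťoučký kůň"                     # arbitrary Czech sample text
cp1250_bytes = text.encode("cp1250")       # bytes as an MS1250 Windows client stores them
latin2_bytes = text.encode("iso8859_2")    # bytes as written to an ISO8859-2 dump file

# The same characters have different byte values in the two code pages,
# which is why a load is rejected when the character sets do not match:
print(cp1250_bytes == latin2_bytes)        # False
# Decoding with the code page recorded in the dump file recovers the text:
print(latin2_bytes.decode("iso8859_2") == text)  # True
```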
SQLSCHEMA utility
A command-line utility that selectively writes SQL database schema components to an
output file.
Syntax
sqlschema -u user_name
          [ -a password ]
          [ -t [ owner_name.]table_name1 [ [,owner_name.]table_name2, ... ] ]
          [ -p [ owner_name.]procedure_name, ... ]
          [ -T [ owner_name.]trigger_name, ... ]
          [ -G [ owner_name.]procedure_name, ... ]
          [ -g [ owner_name.]table_name, ... ]
          [ -s [ owner_name.]table_name, ... ]
          [ -o output_file_name ]
          db_name
Parameters
-u user_name
Specifies the user ID that SQLSCHEMA uses to connect to the database. If you omit
the user_name and password, SQLSCHEMA prompts you for these values. If you omit
the user_name and supply a password, SQLSCHEMA uses the value defined by the
USER environment variable.
-a password
Specifies the password used to connect to the database.
-t [ owner_name.]table_name
A list of one or more tables you want to capture definitions for. Pattern matching is
supported, using a percent sign (%) for multiple characters and an underscore (_) for a
single character. The pattern matching follows the standard for the LIKE predicate in SQL.
You can write the definition for a single table, a set of tables, or all tables. If you omit the
optional owner_name table qualifier, SQLSCHEMA uses the name specified by the -u
parameter.
-p owner_name.procedure_name
A list of one or more procedures you want to capture definitions for. The SQLSCHEMA
utility supports pattern matching for multiple and single characters. See the
owner_name.table_name parameter for an explanation of pattern matching. You can
capture the definitions for a single procedure, a set of procedures, or all procedures. If you
omit the optional owner_name table qualifier, SQLSCHEMA uses the name specified by
the -u parameter.
-T owner_name.trigger_name
A list of one or more triggers you want to capture definitions for. The SQLSCHEMA
utility supports pattern matching for multiple and single characters. See the
owner_name.table_name parameter for an explanation of pattern matching. You can
capture the definition for a single trigger, a set of triggers, or all triggers. If you omit the
optional owner_name table qualifier, SQLSCHEMA uses the name specified by the -u
parameter.
-G owner_name.procedure_name
Allows you to dump privileges on stored procedures in the form of GRANT statements.
-g owner_name.table_name
A list of one or more tables whose related privileges are captured as grant statements. You
can write grant statements for both column and table privileges. The utility supports
pattern matching for this parameter.
-s owner_name.table_name
Specifies a list of one or more tables whose related synonyms are captured as create
synonym statements. The utility supports pattern matching for this parameter.
-o output_file_name
Specifies the output file where SQLSCHEMA writes the definitions. When specified, the
file extension must be .dfsql. If output_file_name is omitted, SQLSCHEMA
writes the definitions to the screen.
db_name
Identifies the database from which SQLSCHEMA captures component definitions. You
can process a single database each time you invoke SQLSCHEMA. There is no option flag
preceding the db_name. This parameter is required and must be the last parameter
specified. The database name is specified in a connection string, such as
progress:T:localhost:demosv:jo.
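The db_name strings used throughout this chapter (progress:T:localhost:demosv:jo, progress:T:thunder:4077:salesdb) are colon-separated. The sketch below parses them under my reading of the pieces (protocol, transport, host, port or service name, database), which is an interpretation inferred from the two examples, not an official grammar.

```python
from typing import NamedTuple

class DbName(NamedTuple):
    protocol: str   # "progress" in all examples in this chapter
    transport: str  # "T" appears to select TCP
    host: str       # e.g. localhost, thunder
    service: str    # a port number (4077) or service name (demosv)
    database: str   # the database name proper

def parse_db_name(s: str) -> DbName:
    # Assumes exactly five colon-separated components, as in the examples.
    return DbName(*s.split(":"))

conn = parse_db_name("progress:T:thunder:4077:salesdb")
print(conn.host, conn.service, conn.database)  # thunder 4077 salesdb
```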
SQLSCHEMA selectively writes SQL database schema components to an output file. You can
capture table definitions including table constraints, views, stored procedures including related
privileges, and triggers. At the command line you specify which components to dump. To load
database schema information into a database, use the SQL Explorer tool. See OpenEdge Data
Management: SQL Reference for information about SQL Explorer.
The SQLSCHEMA utility cannot write definitions for ABL tables. Table definitions include the
database area name for the table, derived from a scan of the area and objects. When
SQLSCHEMA writes a table definition, it does not automatically write associated triggers,
synonyms, or privileges. These must be explicitly specified on the command line. Capturing
database schema requires privileges to access the requested components.
Examples
This example directs the SQLSCHEMA utility to write table definitions and trigger
information. The output goes to the screen since no output_file_name is specified. Since the
user name and password are not specified, SQLSCHEMA will prompt the user for these values,
as shown:
sqlschema -t tucker.customers,tucker.products -T
tucker.customers,tucker.products progress:T:thunder:4077:salesdb
This example directs the SQLSCHEMA utility to write table definitions to an output file named
salesdbschema.dfsql.
Notes
Before you can execute SQLSCHEMA against a database server, the server must be
configured to accept SQL connections and must be running.
Each output file created by the SQLSCHEMA utility records character set information
about the contents of the file. When you use SQLSCHEMA to dump schema information
from a database, the schema is written in Unicode UTF-8.
25
Virtual System Tables
Virtual system tables (VSTs) give ABL and SQL applications access to the same type of
database information that you collect with the OpenEdge Monitor (PROMON) utility. Virtual
system tables enable an application to examine the status of a database and monitor its
performance. With the database broker running, ABL and SQL applications can query a VST
and retrieve the specified information as run-time data.
This chapter lists each virtual system table and indicates the table in this chapter that
describes its fields:

After-image log activity file (_ActAILog): Table 25–2
Before-image log activity file (_ActBILog): Table 25–3
Input/Output activity file (_ActIOFile): Table 25–6
Input/Output type activity file (_ActIOType): Table 25–7
Space allocation activity file (_ActSpace): Table 25–13
Summary activity file (_ActSummary): Table 25–14
Block file (_Block): Table 25–17
Checkpoint file (_Checkpoint): Table 25–19
Code features (_Code-Feature): Table 25–20
Database connection file (_Connect): Table 25–21
Database features file (_Database-Feature): Table 25–22
License management (_License): Table 25–27
Logging file (_Logging): Table 25–30
User connection (_MyConnection): Table 25–32
Resource queue statistics file (_Resrc): Table 25–33
Segments file (_Segments): Table 25–34
Servers file (_Servers): Table 25–35
Startup file (_Startup): Table 25–36
Transaction file (_Trans): Table 25–39
Database input/output file (_UserIO): Table 25–42
Record locking table file (_UserLock), which displays the contents of the record locking
table, such as user name, chain, number, record ID, lock type, and flags: Table 25–43
User status (_UserStatus): Table 25–44
Table 25–2: After-image log activity file (_ActAILog)

Field name           Data type
_AiLog-AIWWrites     INT64
_AiLog-BBuffWaits    INT64
_AiLog-BytesWritn    INT64
_AiLog-ForceWaits    INT64
_AiLog-NoBufAvail    INT64
_AiLog-PartialWrt    INT64
_AiLog-RecWriten     INT64
_AiLog-TotWrites     INT64
_AiLog-Trans         INT64
_AiLog-UpTime        INTEGER
Table 25–3: Before-image log activity file (_ActBILog)

Field name           Data type
_BiLog-BBuffWaits    INT64
_BiLog-BIWWrites     INT64
_BiLog-BytesRead     INT64
_BiLog-BytesWrtn     INT64
_BiLog-ClstrClose    INT64
_BiLog-EBuffWaits    INT64
_BiLog-ForceWaits    INT64
_BiLog-ForceWrts     INT64
_BiLog-PartialWrts   INT64
_BiLog-RecRead       INT64
_BiLog-RecWriten     INT64
_BiLog-TotalWrts     INT64
_BiLog-TotReads      INT64
_BiLog-Trans         INT64
_BiLog-UpTime        INTEGER
Table 25–4: Buffer activity fields (_Buffer-*)

Field name           Data type
_Buffer-APWEnq       INT64
_Buffer-Chkpts       INT64
_Buffer-Deferred     INT64
_Buffer-Flushed      INT64
_Buffer-LogicRds     INT64
_Buffer-LogicWrts    INT64
_Buffer-LRUSkips     INT64
_Buffer-LRUwrts      INT64
_Buffer-Marked       INT64
_Buffer-OSRds        INT64
_Buffer-OSWrts       INT64
_Buffer-Trans        INT64
_Buffer-Uptime       INTEGER
Table 25–5: Index activity fields (_Index-*)

Field name           Data type
_Index-Create        INT64
_Index-Delete        INT64
_Index-Find          INT64
_Index-Free          INT64
_Index-Remove        INT64
_Index-Splits        INT64
_Index-Trans         INT64
_Index-UpTime        INTEGER
Table 25–6: Input/Output activity file (_ActIOFile)

Field name           Data type
_IOFile-BufReads     INT64
_IOFile-BufWrites    INT64
_IOFile-Extends      INT64
_IOFile-FileName     CHARACTER
_IOFile-Reads        INT64
_IOFile-Trans        INT64
_IOFile-UbufReads    INT64
_IOFile-UbufWrites   INT64
_IOFile-UpTime       INTEGER
_IOFile-Writes       INT64
Table 25–7: Input/Output type activity file (_ActIOType)

Field name           Data type
_IOType-AiRds        INT64
_IOType-AiWrts       INT64
_IOType-BiRds        INT64
_IOType-BiWrts       INT64
_IOType-DataReads    INT64
_IOType-DataWrts     INT64
_IOType-IdxRds       INT64
_IOType-IdxWrts      INT64
_IOType-Trans        INT64
_IOType-UpTime       INTEGER
Table 25–8: Lock activity fields (_Lock-*)

Field name           Data type
_Lock-CanclReq       INTEGER
_Lock-Downgrade      INTEGER
_Lock-ExclFind       INT64
_Lock-ExclLock       INTEGER
_Lock-ExclReq        INTEGER
_Lock-ExclWait       INTEGER
_Lock-RecGetLock     INTEGER
_Lock-RecGetReq      INTEGER
_Lock-RecGetWait     INTEGER
_Lock-RedReq         INTEGER
_Lock-ShrFind        INT64
_Lock-ShrLock        INTEGER
_Lock-ShrReq         INTEGER
_Lock-ShrWait        INTEGER
_Lock-Trans          INT64
_Lock-UpgLock        INTEGER
_Lock-UpgReq         INTEGER
_Lock-UpgWait        INTEGER
_Lock-UpTime         INTEGER
Table 25–9: Other activity fields (_Other-*)

Field name           Data type   Description
_Other-Commit        INT64
_Other-FlushMblk     INT64
_Other-Trans         INT64       Transactions committed
_Other-Undo          INT64
_Other-UpTime        INTEGER
_Other-Wait          INT64
Table 25–10: Page writer activity fields (_PW-*)

Field name           Data type
_PW-ApwQWrites       INT64
_PW-BuffsScaned      INT64
_PW-BufsCkp          INT64
_PW-Checkpoints      INT64
_PW-CkpQWrites       INT64
_PW-DBWrites         INT64
_PW-Flushed          INT64
_PW-Marked           INT64
_PW-ScanCycles       INT64
_PW-ScanWrites       INT64
_PW-TotDBWrites      INT64
_PW-Trans            INT64
_PW-UpTime           INTEGER
Table 25–11: Record activity fields (_Record-*)

Field name           Data type
_Record-BytesCreat   INT64
_Record-BytesDel     INT64
_Record-BytesRead    INT64
_Record-BytesUpd     INT64
_Record-FragCreat    INT64
_Record-FragDel      INT64
_Record-FragRead     INT64
_Record-FragUpd      INT64
_Record-RecCreat     INT64
_Record-RecDel       INT64
_Record-RecRead      INT64
_Record-RecUpd       INT64
_Record-Trans        INT64
_Record-UpTime       INTEGER
Table 25–12: Server activity fields (_Server-*)

Field name           Data type
_Server-ByteRec      INT64
_Server-ByteSent     INT64
_Server-MsgRec       INT64
_Server-MsgSent      INT64
_Server-QryRec       INT64
_Server-RecRec       INT64
_Server-RecSent      INT64
_Server-TimeSlice    INT64
_Server-Trans        INT64
_Server-UpTime       INTEGER
Table 25–13: Space allocation activity file (_ActSpace)

Field name           Data type
_Space-AllocNewRm    INT64
_Space-BackAdd       INT64
_Space-BytesAlloc    INT64
_Space-DbExd         INT64
_Space-Examined      INT64
_Space-FromFree      INT64
_Space-FromRm        INT64
_Space-Front2Back    INT64
_Space-FrontAdd      INT64
_Space-Locked        INT64
_Space-Removed       INT64
_Space-RetFree       INT64
_Space-TakeFree      INT64
_Space-Trans         INT64
_Space-UpTime        INTEGER
Table 25–14: Summary activity file (_ActSummary)

Field name             Data type
_Summary-AiWrites      INT64
_Summary-BiReads       INT64
_Summary-BiWrites      INT64
_Summary-Chkpts        INT64
_Summary-Commits       INT64
_Summary-DbAccesses    INT64
_Summary-DbReads       INT64
_Summary-DbWrites      INT64
_Summary-Flushed       INT64
_Summary-RecCreat      INT64
_Summary-RecDel        INT64
_Summary-RecLock       INT64
_Summary-RecReads      INT64
_Summary-RecUpd        INT64
_Summary-RecWait       INT64
_Summary-TransComm     INT64
_Summary-Undos         INT64
_Summary-Uptime        INTEGER
Table 25–15: Area status fields (_AreaStatus-*)

Field name               Data type   Description
_AreaStatus-Areaname     CHARACTER   Area name
_AreaStatus-Areanum      INTEGER     Area number
_AreaStatus-Extents      INTEGER
_AreaStatus-Freenum      INT64
_AreaStatus-Hiwater      INT64
_AreaStatus-Lastextent   CHARACTER
_AreaStatus-Rmnum        INT64
_AreaStatus-Totblocks    INT64
Table 25–16: Area threshold fields (_AreaThreshold, _AreaThresholdArea)

Field name           Data type   Description
_AreaThreshold       INTEGER     Number indicating how much of the addressable space in an area is consumed
_AreaThresholdArea   INTEGER     Area number

Table 25–17: Block file (_Block)

Field name           Data type   Description
_Block-Area          INTEGER
_Block-BkupCtr       INTEGER     Backup counter
_Block-Block         CHARACTER
_Block-ChainType     CHARACTER   Chain type
_Block-Dbkey         INT64       Dbkey
_Block-NextDbkey     INT64
_Block-Type          CHARACTER   Type of block
_Block-Update        INTEGER
Table 25–18: Buffer status fields (_BfStatus-*)

Field name             Data type
_BfStatus-APWQ         INTEGER
_BfStatus-CKPMarked    INT64
_BfStatus-CKPQ         INTEGER
_BfStatus-HashSize     INTEGER
_BfStatus-LastCkpNum   INTEGER
_BfStatus-LRU          INTEGER
_BfStatus-ModBuffs     INT64
_BfStatus-TotBufs      INTEGER
_BfStatus-UsedBuffs    INTEGER
Table 25–19: Checkpoint file (_Checkpoint)

Field name           Data type
_Checkpoint-ApwQ     INT64
_Checkpoint-CptQ     INT64
_Checkpoint-Dirty    INT64
_Checkpoint-Flush    INT64
_Checkpoint-Len      CHARACTER
_Checkpoint-Scan     INT64
_Checkpoint-Time     CHARACTER
Table 25–20: Code features (_Code-Feature)

Field name               Data type
_CodeFeature_Name        CHARACTER
_CodeFeature_Supported   CHARACTER
_CodeFeature_Required    CHARACTER
_CodeFeature-Res01       CHARACTER
_CodeFeature-Res02       INTEGER
Table 25–21: Database connection file (_Connect)

Field name                  Data type
_Connect-2phase             INTEGER
_Connect-Batch              CHARACTER
_Connect-CacheInfoType      CHARACTER
_Connect-CacheInfo          CHARACTER
_Connect-CacheLineNumber    INTEGER
_Connect-CacheLastUpdate    CHARACTER
_Connect-CachingType        CHARACTER
_Connect-Device             CHARACTER
_Connect-Disconnect         INTEGER
_Connect-Interrupt          INTEGER
_Connect-Name               CHARACTER
_Connect-Pid                INTEGER
_Connect-Resync             INTEGER
_Connect-SemId              INTEGER
_Connect-SemNum             INTEGER
_Connect-Server             INTEGER
_Connect-Time               CHARACTER
_Connect-transId            INTEGER
_Connect-Type               CHARACTER
_Connect-Usr                INTEGER
_Connect-Wait               CHARACTER
_Connect-Wait1              INT64
Table 25–22: Database features file (_Database-Feature)

Field name           Data type
_DBFeature_Name      CHARACTER
_DBFeature_Enabled   CHARACTER
_DBFeature_Active    CHARACTER
_DBFeature-Res01     CHARACTER
_DBFeature-Res02     INTEGER
Table 25–23: Database status fields (_DbStatus-*)

Field name               Data type   Description
_DbStatus-AiBlkSize      INTEGER
_DbStatus-BiBlkSize      INTEGER
_DbStatus-BiClSize       INTEGER
_DbStatus-BiOpen         CHARACTER
_DbStatus-BiSize         INT64
_DbStatus-BiTrunc        CHARACTER
_DbStatus-CacheStamp     CHARACTER
_DbStatus-Changed        INTEGER
_DbStatus-ClVersMinor    INTEGER
_DbStatus-Codepage       CHARACTER
_DbStatus-Collation      CHARACTER
_DbStatus-CreateDate     CHARACTER
_DbStatus-DbBlkSize      INTEGER
_DbStatus-DbVers         INTEGER
_DbStatus-DbVersMinor    INTEGER
_DbStatus-EmptyBlks      INT64
_DbStatus-FbDate         CHARACTER
_DbStatus-FreeBlks       INT64
_DbStatus-HiWater        INTEGER
_DbStatus-IbDate         CHARACTER
_DbStatus-IbSeq          INTEGER
_DbStatus-Integrity      CHARACTER
_DbStatus-IntFlags       INTEGER     Integrity flags.
_DbStatus-LastOpen       CHARACTER
_DbStatus-LastTable      INTEGER
_DbStatus-LastTran       INTEGER
_DbStatus-MostLocks      INTEGER
_DbStatus-NumAreas       INTEGER     Number of areas.
_DbStatus-NumLocks       INTEGER
_DbStatus-NumSems        INTEGER     Number of semaphores.
_DbStatus-PrevOpen       CHARACTER
_DbStatus-RMFreeBlks     INT64
_DbStatus-SharedMemVer   INTEGER
_DbStatus-ShmVers        INTEGER
_DbStatus-Starttime      CHARACTER
_DbStatus-State          INTEGER
_DbStatus-Tainted        INTEGER
_DbStatus-TotalBlks      INT64
Table 25–24: File list fields (_FileList-*)

Field name            Data type
_FileList-BlkSize     INTEGER
_FileList-Extend      INTEGER
_FileList-LogicalSz   INTEGER
_FileList-Name        CHARACTER
_FileList-Openmode    CHARACTER
_FileList-Size        INTEGER
Table 25–25: Index statistics fields (_IndexStat-*)

Field name               Data type
_IndexStat-Blockdelete   INTEGER
_IndexStat-Delete        INTEGER
_IndexStat-Create        INTEGER
_IndexStat-Read          INTEGER
_IndexStat-Split         INTEGER
Table 25–26: Latch statistics fields (_Latch-*)

Field name      Data type   Description
_Latch-Busy     INTEGER
_Latch-Hold     INTEGER
_Latch-Lock     INTEGER
_Latch-LockedT  INTEGER
_Latch-LockT    INTEGER
_Latch-Name     CHARACTER   Latch name
_Latch-Qhold    INTEGER
_Latch-Spin     INTEGER
_Latch-Type     CHARACTER   Latch type
_Latch-Wait     INTEGER
_Latch-WaitT    INTEGER
Table 25–27: License management (_License)

Field name         Data type
_Lic-ActiveConns   INTEGER
_Lic-BatchConns    INTEGER
_Lic-CurrConns     INTEGER
_Lic-MaxActive     INTEGER
_Lic-MaxBatch      INTEGER
_Lic-MaxCurrent    INTEGER
_Lic-MinActive     INTEGER
_Lic-MinBatch      INTEGER
_Lic-MinCurrent    INTEGER
_Lic-ValidUsers    INTEGER
Table 25–28: Record locking table fields (_Lock-*)

Field name     Data type   Description
_Lock-Chain    INTEGER     Chain number.
_Lock-Flags    CHARACTER
_Lock-Name     CHARACTER   User name.
_Lock-RecId    INT64
_Lock-Table    INTEGER     Table name.
_Lock-Type     CHARACTER
_Lock-Usr      INTEGER
Table 25–29: Lock request fields (_LockReq-*)

Field name          Data type   Description
_LockReq-ExclFind   INT64
_LockReq-Name       CHARACTER   User name
_LockReq-Num        INTEGER     User number
_LockReq-RecLock    INT64
_LockReq-RecWait    INT64
_LockReq-SchLock    INT64
_LockReq-SchWait    INT64
_LockReq-ShrFind    INT64
_LockReq-TrnLock    INT64
_LockReq-TrnWait    INT64
Table 25–30: Logging file (_Logging)

Field name             Data type   Description
_Logging-2PC           CHARACTER
_Logging-2PCNickName   CHARACTER
_Logging-2PCPriority   INTEGER
_Logging-AiBegin       CHARACTER
_Logging-AiBlkSize     INTEGER
_Logging-AiBuffs       INTEGER
_Logging-AiCurrExt     INTEGER
_Logging-AiExtents     INTEGER
_Logging-AiGenNum      INTEGER
_Logging-AiIO          CHARACTER
_Logging-AiJournal     CHARACTER
_Logging-AiLogSize     INTEGER
_Logging-AiNew         CHARACTER
_Logging-AiOpen        CHARACTER
_Logging-BiBlkSize     INTEGER     BI block size.
_Logging-BiBuffs       INTEGER     Number of BI buffers.
_Logging-BiBytesFree   INTEGER
_Logging-BiClAge       INTEGER
_Logging-BiClSize      INTEGER     BI cluster size.
_Logging-BiExtents     INTEGER     Number of BI extents.
_Logging-BiFullBuffs   INTEGER
_Logging-BiIO          CHARACTER
_Logging-BiLogSize     INTEGER
_Logging-CommitDelay   INTEGER
_Logging-CrashProt     INTEGER
_Logging-LastCkp       CHARACTER
Table 25–31: Master block fields (_MstrBlk-*)

Field name           Data type   Description
_MstrBlk-AiBlksize   INTEGER
_MstrBlk-BiBlksize   INTEGER
_MstrBlk-BiOpen      CHARACTER
_MstrBlk-BiPrev      CHARACTER
_MstrBlk-BiState     INTEGER
_MstrBlk-Cfilnum     INTEGER
_MstrBlk-Crdate      CHARACTER
_MstrBlk-Dbstate     INTEGER
_MstrBlk-Dbvers      INTEGER
_MstrBlk-Fbdate      CHARACTER
_MstrBlk-Hiwater     INTEGER
_MstrBlk-Ibdate      CHARACTER
_MstrBlk-Ibseq       INTEGER
_MstrBlk-Integrity   INTEGER
_MstrBlk-Lasttask    INTEGER     Last transaction ID
_MstrBlk-Oppdate     CHARACTER
_MstrBlk-Oprdate     CHARACTER
_MstrBlk-Rlclsize    INTEGER
_MstrBlk-Rltime      CHARACTER
_MstrBlk-Tainted     INTEGER
_MstrBlk-Timestamp   CHARACTER
_MstrBlk-Totblks     INT64
Table 25–32: User connection (_MyConnection)

Field name               Data type
_MyConn-NumSeqBuffers    INTEGER
_MyConn-Pid              INTEGER
_MyConn-UsedSeqBuffers   INTEGER
_MyConn-UserId           INTEGER
Table 25–33: Resource queue statistics file (_Resrc)

Field name    Data type
_Resrc-Name   CHARACTER
_Resrc-Lock   INT64
_Resrc-Wait   INT64
_Resrc-Time   INT64
Table 25–34: Segments file (_Segments)

Field name           Data type   Description
_Segment-ByteFree    INTEGER
_Segment-BytesUsed   INTEGER
_Segments-SegId      INTEGER     Segment ID
_Segments-SegSize    INTEGER     Segment size
Table 25–35: Servers file (_Servers)

Field name         Data type   Description
_Server-CurrUsers  INTEGER
_Server-Logins     INT64
_Server-MaxUsers   INTEGER
_Server-Num        INTEGER     Server number
_Server-Pid        INTEGER
_Server-PortNum    INTEGER
_Server-Protocol   CHARACTER
_Server-Type       CHARACTER   Server type
Table 2536:
Field name               Data type    Description
_Startup-AiBuffs         INTEGER
_Startup-AiName          CHARACTER
_Startup-APWBuffs        INTEGER
_Startup-APWMaxWrites    INTEGER
_Startup-APWQTime        INTEGER
_Startup-APWSTime        INTEGER
_Startup-BiBuffs         INTEGER
_Startup-BiDelay         INTEGER
_Startup-BiIO            INTEGER
_Startup-BiName          CHARACTER
_Startup-BiTrunc         INTEGER
_Startup-Buffs           INTEGER
_Startup-CrashProt       INTEGER
_Startup-Directio        INTEGER
_Startup-LockTable       INTEGER
_Startup-MaxClients      INTEGER
_Startup-MaxServers      INTEGER
_Startup-MaxUsers        INTEGER
_Startup-Spin            INTEGER
Table 2537:
Field name    Data type    Description
_IndexBase    INTEGER
_TableBase    INTEGER
Table 2538:
Field name           Data type    Description
_TableStat-Create    INTEGER
_TableStat-Delete    INTEGER
_TableStat-Read      INTEGER
_TableStat-Update    INTEGER
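The _TableStat fields above can be read from ABL like any other table. The following is a minimal sketch only; the join through a _TableStat-Id field to _File._File-Number is an assumption not shown in the table above, and the statistics range must be covered by the -basetable and -tablerangesize startup parameters:

```progress
/* Report per-table CRUD activity from the _TableStat VST.
   Assumes an already-connected database; the _TableStat-Id
   field used for the join is an assumption. */
FOR EACH _TableStat NO-LOCK,
    EACH _File NO-LOCK
    WHERE _File._File-Number = _TableStat._TableStat-Id:
    DISPLAY _File._File-Name
            _TableStat._TableStat-Read
            _TableStat._TableStat-Create
            _TableStat._TableStat-Update
            _TableStat._TableStat-Delete.
END.
```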
Table 2539:
Field name         Data type    Description
_Trans-Coord       CHARACTER
_Trans-CoordTx     INTEGER
_Trans-Counter     INTEGER      Transaction count
_Trans-Duration    INTEGER
_Trans-Flags       CHARACTER    Transaction flags
_Trans-Misc        INTEGER      Miscellaneous information
_Trans-Num         INTEGER      Transaction number
_Trans-State       CHARACTER    Transaction state
_Trans-Txtime      CHARACTER
_Trans-Usrnum      INTEGER
Table 2540:
Field name        Data type    Description
_Txe-Locks        INT64
_Txe-Lockss       INT64
_Txe-Time         INTEGER
_Txe-Type         CHARACTER
_Txe-Waits        INT64
_Txe-Waitss       INT64
_Txe-Wait-Time    INTEGER
Table 2541:
Field name                    Data type    Description
_UserIndexStat-Conn           INTEGER      User number
_UserIndexStat-Num            INTEGER      Index number
_UserIndexStat-blockdelete    INTEGER
_UserIndexStat-create         INTEGER
_UserIndexStat-delete         INTEGER
_UserIndexStat-read           INTEGER
_UserIndexStat-split          INTEGER
Table 2542:
Field name          Data type    Description
_UserIO-AiRead      INT64
_UserIO-AiWrite     INT64
_UserIO-BiRead      INT64
_UserIO-BiWrite     INT64
_UserIO-DbAccess    INT64
_UserIO-DbRead      INT64
_UserIO-DbWrite     INT64
_UserIO-Name        CHARACTER
_UserIO-Usr         INTEGER
Table 2543:
Field name         Data type    Description
_UserLock-Chain    INTEGER
_UserLock-Flags    CHARACTER
_UserLock-Misc     INTEGER      Miscellaneous information
_UserLock-Name     CHARACTER
_UserLock-Recid    INT64
_UserLock-Table    INTEGER      Table number
_UserLock-Type     CHARACTER
_UserLock-Usr      INTEGER
Table 2544:
Field name                Data type    Description
_UserStatus-Counter       INTEGER
_UserStatus-ObjectId      INTEGER
_UserStatus-ObjectType    INTEGER
_UserStatus-Operation     CHARACTER
_UserStatus-State         INTEGER
_UserStatus-Target        INTEGER
_UserStatus-UserId        INT64        User number
Table 2545: _UserStatus-State descriptions
_UserStatus-State value    Utility
11     PROUTIL
12     PROUTIL
13     PROUTIL
24     PROUTIL TABLEMOVE
25     PROUTIL TABLEMOVE
26     PROUTIL TABLEMOVE
27     PROUTIL TABLEMOVE
31     PROUTIL IDXMOVE
32     PROUTIL IDXMOVE
41     PROUTIL IDXCOMPACT
42     PROUTIL IDXCOMPACT
43     PROUTIL IDXCOMPACT
51     PROUTIL IDXFIX
52     PROUTIL IDXFIX
53     PROUTIL IDXFIX
54     PROUTIL IDXFIX
55     PROUTIL ENABLELARGEFILES
60     PROBKUP
61     PROBKUP
62     PROBKUP
63     PROBKUP
64     PROBKUP
65     PROBKUP
66     PROBKUP
70     PROUTIL DBANALYS
71     PROUTIL DBANALYS
72     PROUTIL DBANALYS
73     PROUTIL DBANALYS
74     PROUTIL DBANALYS
80     PROUTIL DUMP
81     PROUTIL DUMP
82     PROUTIL DUMP
83     PROUTIL DUMP
90     PROUTIL IDXMOVE
91     PROUTIL IDXMOVE
92     PROUTIL IDXMOVE
93     PROUTIL IDXMOVE
100    PROUTIL IDXFIX
101    PROUTIL IDXFIX
102    PROUTIL IDXFIX
103    PROUTIL IDXFIX
104    PROUTIL IDXFIX
105    PROUTIL IDXFIX
106    PROUTIL IDXFIX
110    PROUTIL TABLEMOVE
111    PROUTIL TABLEMOVE
112    PROUTIL TABLEMOVE
113    PROUTIL TABLEMOVE
114    PROUTIL TABLEMOVE
115    PROUTIL TABLEMOVE
116    PROUTIL TABLEMOVE
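Because each online utility records its progress in _UserStatus, the state codes above can be watched while, for example, a PROUTIL IDXMOVE or TABLEMOVE runs. A hedged sketch in ABL, using only the fields listed in Table 2544:

```progress
/* Poll the _UserStatus VST to watch online-utility progress.
   _UserStatus-Operation names the running utility,
   _UserStatus-State is one of the codes in Table 2545, and
   _UserStatus-Counter counts work done toward _UserStatus-Target. */
REPEAT:
    FOR EACH _UserStatus NO-LOCK
        WHERE _UserStatus._UserStatus-Operation <> ?:
        DISPLAY _UserStatus._UserStatus-UserId
                _UserStatus._UserStatus-Operation FORMAT "x(20)"
                _UserStatus._UserStatus-State
                _UserStatus._UserStatus-Counter
                _UserStatus._UserStatus-Target.
    END.
    PAUSE 5 NO-MESSAGE.
END.
```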
Table 2546:
Field name               Data type    Description
_UserTableStat-Conn      INTEGER      User number
_UserTableStat-Num       INTEGER      Table number
_UserTableStat-create    INTEGER
_UserTableStat-delete    INTEGER
_UserTableStat-read      INTEGER
_UserTableStat-update    INTEGER
Index
A
_ActAILog virtual system table 253
_ActBILog virtual system table 253
_ActBuffer virtual system table 253
_ActIndex virtual system table 253
_ActIOFile virtual system table 253
_ActIOType virtual system table 253
Activity option
PROMON utility 1916
_ActLock virtual system table 253
_ActOther virtual system table 253
_ActPWs virtual system table 254
_ActRecord virtual system table 254
_ActServer virtual system table 254
_ActSpace virtual system table 254
_ActSummary virtual system table 254
ADD qualifier
PROSTRCT utility 148, 149, 1410,
214
ADDONLINE qualifier
PROSTRCT utility 215
AdminServer
hosts 232
ports 34, 1812, 232, 2312
Progress Explorer connecting 32,
2312, 2325
AdminServer Port (-adminport) startup
parameter 189, 1812
After-image (AI)
areas 75, 2324
block size 1323
default 1324
increasing 1324
buffer pool 1323
files 621, 102, 2092
archiving 713
managing 79
marking as empty 715
monitoring 79
switching 711
I/O 1323
log 107
log activity 253
logging 256
notes 1316
After-image Buffers (-aibufs) startup
parameter 1324, 174, 183, 1814
After-image extents
backing up 42
backups 44
on-line backups 44
After-image File Management Archive
Interval (-aiarcinterval) startup parameter
1813
AppServer 2312
B
-B startup parameter 139, 1310, 1326,
183, 1815, 1824, 2017
Backups
after-imaging 48
after-imaging (AI) 713, 714
archiving 54
before-image files 42
developing strategy 67
full 52, 515, 2114, 2314, 2327
important files 42
incremental 513, 2313, 2327
media 48
offline 56, 513
online 52, 58, 515, 2215
operating system 511, 2224
restoring 521
structure files 42
testing 53
types 43
off-line 45
on-line 44
unscheduled 48
verifying 512, 518, 2327
Base Index (-baseindex) startup parameter
187, 1815
Base Table (-basetable) startup parameter
187, 1816
-baseindex startup parameter 187, 1815
Before-image (BI)
areas 2324
block size
default 1320
increasing 1320
blocks 2092
clusters 2092
delaying writes 1320
files 2092
truncating 625, 205, 2016, 2021,
2075, 2090
I/O 1315, 1316
log activity 253
logging 256
notes 1316
threshold 1321, 1322
Before-image Block Size (-biblocksize)
startup parameter 2092
Before-image Buffers (-bibufs) startup
parameter 1317, 1324, 176, 183,
1814, 1817
Before-image Cluster Age (-G) startup
parameter 183, 1825
Before-image Cluster Size (-bi) startup
parameter 2092
Before-image files
backing up 42
Buffer pool 136, 1827
Buffers
after-image (AI) 1323
before-image (BI) 1315, 1317, 1826
increasing 1317
database 135, 136
monitoring 138
tuning 139
empty 1311
modified 138
private 256
private read-only 135, 139
status 255
C
Case Table (-cpcase) startup parameter
186, 1820
Century Year Offset (-yy) startup parameter
1810, 1850, 2017
Chains
free 2019
RM 2019
CHANALYS qualifier
PROUTIL utility 2019
Character large objects (CLOBs) 1511,
2022
Character sets 35, 203, 2022, 2025
Characters
allowed for database names 213
_Checkpoint virtual system table 255
Checkpoints 1317, 255
-cpcase startup parameter 186, 1820
-cpcoll startup parameter 186, 1820
-cpinternal startup parameter 35, 186,
1820, 1821, 202, 213, 222
-cplog startup parameter 186, 1821,
1823
-cpprint startup parameter 186, 1822,
1823
-cprcodein startup parameter 186, 1822
-cpstream startup parameter 186, 1823,
1824, 202, 213, 222
-cpterm startup parameter 186, 1823,
1824
Crashes
system 618
system. See System crashes
CRC codes 519
CRC. See Cyclic redundancy check (CRC)
CREATE qualifier
PROSTRCT utility 12, 13, 16, 17,
110, 106, 213, 217, 218, 219
Creating
incremental data definitions files 157
structure description (.st) files 13
Cyclic redundancy check (CRC) 2039,
2074, 2076
D
Data Administration tool 12, 115, 152,
153, 155, 1513, 2017, 2075, 239
Data collision 103, 105
Data compression 2315, 2316
Data consolidation model 103, 104
Data Dictionary tool 12, 112, 115, 152,
153, 155, 1513, 2017, 2075, 239,
2323
Data distribution model 103
Data extents
backing up 42
Data ownership models 103
See also Data distribution model, Data
consolidation model, Peer-to-peer
model
Data Types
limits 216
Data Values
limits 216
Database files
backing up 42
Database limits
block sizes 22
index limits 27
number of sequences 28
number of simultaneous transactions
212
number of users 211
records per block 23
size 210
Database names
characters not used 213
Database Status option
PROMON utility 1922
_Database-Feature virtual system table
255
Databases
backing up See Backups
backups
types. See Backups
backwards compatibility 2039, 2074
buffer activity 253
converting 12, 116, 204, 2021
coordinator 123, 124, 126, 1213,
1216, 1987, 202, 206, 207, 209
copying 124
crash recovery 62
creating 12, 1530
Data Administration tool 115
Data Dictionary tool 115, 2323
PRODB 113
PROSTRCT CREATE 13
damaged 627, 628, 629, 630, 2115
deleting 126
dumping contents 629
loading 2017
maintaining
indexes 1425
tables 1425
modes
buffered 2072
multi-user 204, 2018, 2051, 2072,
2210
single-user 204, 2018, 2051, 223
unbuffered 2072
naming conventions for different
operating systems 213
nicknames 127
online 2110
quiet points. See Quiet points
reconstructing 2225
recovering 62, 66, 618, 620
recovery plans 69
restoring 520, 522, 620, 621
full 522
incremental 523
roll-forward recovery 64, 617, 727
schema. See Schema
source 2320
starting 33
See also Brokers, Servers
statistics 204
stopping 312, 315
table limits 27
target 152, 1531, 2320, 2328
unlocking 628
DataServers
ODBC 2312
ORACLE 2312
DBANALYS qualifier
PROUTIL utility 2027, 2087
DBAUTHKEY qualifier
PROUTIL utility 2028, 2076
DBIPCS qualifier
PROUTIL utility 2033
Disk mirroring 59
Disk space 74
Disks
full 623
DISPTOSSCREATELIMITS qualifier
PROUTIL utility 2037
DUMP qualifier
PROUTIL utility 159, 1510, 1511,
2038 to 2040
Dumping
auto-connect records 158
binary 159, 203, 2038, 2041
binary by field 203
contents 159
definitions 152, 153 to 154
field contents 1511
sequence definitions 157
sequence values 1516
SQL tables 242
SQL view contents 1517
table contents 152, 1513, 1513 to
1514
user table content 1516
DUMPSPECIFIED qualifier
PROUTIL utility 1511, 2041 to 2042
Dependency 119
DESCRIBE qualifier
PROUTIL utility 2029 to 2032
Direct I/O (-directio) startup parameter
1325, 183, 1824
-directio startup parameter 1325, 183,
1824
DISABLEAUDITING qualifier
PROUTIL utility 2034
DISABLEJTA qualifier
PROUTIL utility 2035
DISABLEKEYEVENTS qualifier
PROUTIL Utility 2036
ENABLEAUDITING qualifier
PROUTIL utility 2043
ENABLEJTA qualifier
PROUTIL utility 2045
ENABLEKEYEVENTS qualifier
PROUTIL utility 2046
ENABLELARGEFILES qualifier
PROUTIL utility 2047, 2049, 2050
ENABLELARGEKEYS qualifier
PROUTIL utility 2048
Error-correction blocks 519
Event Level (-evtlevel) startup parameter
1613, 183, 1825
Event Viewer 1613
-evtlevel startup parameter 1613, 183,
1825
Extents 13, 1410, 1424, 212, 214,
2111, 2114, 255
See also After-image extents
after-image (AI) 16, 72, 75, 715,
106, 107, 1814, 219, 2210,
2213, 2216, 2217, 2221
busy 79
empty 79
full 79
application data (Dn) 16, 219
before-image (BI) 16, 17, 1317,
1818, 2089, 218
creating 17
data 2113
fixed-length 17, 73, 2111
name 2110
number 2110
schema 16, 17, 219
size 2110
tokens
after-image (ai) 2111
before-image (bi) 2111
data (d) 2111
transaction log (tl) 2111
transaction log (TL) 16, 219, 2228,
2313
type 2110
variable-length 18, 73, 2111
Firewalls 1837
Fixed-length extents
on-line backups 44
Failures
client machine 1215
network communication 1214
power 1215
servers 1214
4GL tools
See Data Administration tool, Data
Dictionary tool
Fragmentation
analyzing 1332
eliminating 1333
Full backup 44
File handles
maximum number 214
File tokens 75
I
-i startup parameter 1321, 1534, 183,
1827
IDXACTIVATE qualifier
PROUTIL utility 2053
IDXANALYS qualifier
PROUTIL utility 1336, 1337, 1428,
159, 2054, 2062
IDXBUILD qualifier
PROUTIL utility 1319, 1336, 1338,
1339, 1343, 1533, 1534, 2055 to
2058, 2060
IDXCHECK qualifier
PROUTIL utility 2059 to 2060
running online 2061
IDXCOMPACT qualifier
PROUTIL utility 1336, 1337, 1427,
1428, 2058, 2062 to 2063, 2067
IDXFIX qualifier
PROUTIL utility 1428, 2058, 2064 to
2066, 2067
IDXMOVE qualifier
PROUTIL utility 1333, 1425, 1426,
1428, 2068 to 2069, 2088
increase startup parameters 183
increase startup parameters online 2070
Incremental backups 44
Index Range Size (-indexrangesize) startup
parameter 187, 1816, 1828
Indexes 255, 256
activating 2057, 2067
activity 253
K
Key Alias (-keyalias) startup parameter
188, 1830
Key Alias Password (-keyaliaspasswd)
startup parameter 188, 1831
-keyalias startup parameter 188, 1830
-keyaliaspasswd startup parameter 188,
1831
L
-L startup parameter 1831
_Latch virtual system table 255
Latches 1331, 1849, 255
Least recently used (LRU) 139, 1310, 1313
_License virtual system table 255
Limits
database
number of simultaneous transactions
212
number of users 211
database names 213
for database names 213
LIST qualifier
PROSTRCT utility 112, 114, 125, 106, 146, 148, 149, 1410, 1414, 1424, 214, 2110
-lkrela
startup parameter 183
LOAD qualifier
PROUTIL utility 2073 to 2074
Load records
bad 1529
reconstructing 1529
Loading
binary 204, 2073
bulk See Bulk loading
contents 1520 to 1524
definitions 1518 to 1519
table 1518
no-integrity mode 1534
sequence values 1524
SQL tables 246
SQL view contents 1523
table contents
Data tool 1522
PROUTIL 1520
PROUTIL BULKLOAD 1527
user table contents 1523
_Lock virtual system table 255
Lock Table Entries (-L) startup parameter
183, 1831
Locking and Waiting statistics option
PROMON utility 199
_LockReq virtual system table 256
Locks 255
activity 253
exclusive 1425, 1831, 1913, 2089
intent exclusive 1913
intent share 1913
share 1427, 1831, 1913, 2068
shared on table with intent to set
exclusive 1913
transaction end 256
Log (.lg) files
backing up 42
-Mf startup parameter 1316, 1320, 1321,
183, 1826, 1836
Off-line backups 45
On-line backups 44
after-image extents 44
P
-P startup parameter 82
Parameter file 182
Passwords
changing 88
Pathnames
absolute 114, 125
relative 114
Peer-to-peer model 103, 105
-PendConnTime startup parameter 189,
1842
Pending Connection Time
(-PendConnTime) startup parameter 189,
1842
Performance
bottlenecks 132
monitoring 133
Policy 119
Node 112
Print Code Page (-cpprint) startup parameter
186, 1822, 1823
Private Buffers (-Bp) startup parameter
1310
PROADSV utility 32, 33, 2311 to 2312
PROAIW command 174
PROAPW command 175
PROBIW command 176
PROBKUP utility 52, 510, 711, 2224,
2313 to 2316, 257
Procedure library (.pl) files
backing up 42
PROCOPY utility 12, 111, 124, 76,
219, 2320 to 2321
PRODB utility 12, 113, 114, 124,
1530, 219, 2322 to 2323
Progress Explorer 32
PROSERVE utility 36
PROUTIL utility 126, 128, 129, 1210,
1213, 152, 159, 1520, 1534, 202,
257
2PHASE BEGIN qualifier 126, 206
2PHASE COMMIT qualifier 1212,
1989, 207
2PHASE END qualifier 128, 208
2PHASE MODIFY qualifier 209
2PHASE RECOVER qualifier 1212,
1214, 1988, 2010
AUDITARCHIVE qualifier 2011
AUDITLOAD qualifier 2014
BIGROW qualifier 2016
BUILD INDEXES qualifier 1520
BULKLOAD qualifier 1527, 1528,
2017
BUSY qualifier 511, 2018
CHANALYS qualifier 2019
CODEPAGE-COMPILER qualifier
1820, 2020
CONV910 qualifier 116, 2021
CONVCHAR qualifier 2022 to 2024
CONVFILE qualifier 2025 to 2026
DBANALYS qualifier 2027, 2087
DBAUTHKEY qualifier 2028, 2076
DBIPCS qualifier 2033
DESCRIBE qualifier 2029 to 2032
DISABLEAUDITING qualifier 2034
DISABLEJTA qualifier 2035
DISPTOSSCREATELIMITS qualifier
2037
DUMP qualifier 159, 1510, 1511,
2038 to 2040
DUMPSPECIFIED qualifier 1511,
2041 to 2042
ENABLEAUDITING qualifier 2043
ENABLEJTA qualifier 2045
ENABLEKEYEVENTS qualifier 2046
ENABLELARGEFILES qualifier 2047,
2049, 2050
ENABLELARGEKEYS qualifier 2048
HOLDER qualifier 2051
IDXACTIVATE qualifier 2053
IDXANALYS qualifier 1336, 1337,
1428, 159, 2054, 2062
IDXBUILD qualifier 1319, 1336,
1338, 1339, 1343, 1533, 1534,
2055 to 2058, 2060
IDXCHECK qualifier 2059 to 2060
IDXCOMPACT qualifier 1336, 1337,
1427, 1428, 2058, 2062 to 2063,
2067
IDXFIX qualifier 1428, 2058, 2064
to 2066, 2067
IDXMOVE qualifier 1333, 1425,
1426, 1428, 2068 to 2069, 2088
INCREASETO qualifier 2070
IOSTATS qualifier 2071 to 2072
LOAD qualifier 2073 to 2074
MVSCH qualifier 117, 2075
RCODEKEY qualifier 2076
REVERT qualifier 2077
SETAREACREATELIMIT qualifier
2080
SETAREATOSSLIMIT qualifier 2081
SETBLOBCREATELIMITqualifier
2082
SETBLOBTOSSLIMIT qualifier 2083
SETTABLECREATELIMIT qualifier
2084
SETTABLETOSSLIMIT qualifier
2085
TABANALYS qualifier 1332, 1531,
1532, 2086 to 2087
TABLEMOVE qualifier 1333, 1425,
1426, 1428, 2088 to 2089
TRUNCATE AREA qualifier 1424,
2090 to 2091
TRUNCATE BI qualifier 1312, 1319,
1424, 2092 to 2093
UPDATESCHEMA qualifier 2094
UPDATEVST qualifier 2095, 252
WBREAK-COMPILER qualifier 2096
WORD-RULES qualifier 2096
PROWDOG command 37, 1714, 1841
Q
Quiet points 59
R
-r startup parameter 1844
R-code 157
R-code in Code Page (-cprcodein) startup
parameter 186, 1822
RCODEKEY qualifier
PROUTIL utility 2076
Record Locking Table option
PROMON utility 1913
Recovery
roll-forward 727, 126
Recovery Log Threshold (-bithold) startup
parameter 1321, 183, 1817, 1818
Recovery plans
developing 67
REMOVE qualifier
PROSTRCT utility 1411, 1424, 2111
REORDER AI qualifier
PROSTRCT utility 725 to 726, 2112
REPAIR qualifier
PROSTRCT utility 510, 2113
Replication
asynchronous. See Asynchronous
replication
log-based 102, 106
implementing 106
schemes 102
synchronous. See Synchronous
replication
trigger-based 102
Resolve JTA Transactions option
PROMON utility 1990
Resolve Limbo Transactions option
PROMON utility 1988
Resource 119
Resource group 119
S
-S startup parameter 36, 1810, 1845
Scan (-scan) startup parameter
PROBKUP utility 45
REVERT qualifier
PROUTIL utility 2077
Security
administrator 86
connection 85
database 811
operating system 811
schema 810
-servergroup startup parameter 1810,
1846
Single-user Progress
on-line backups 44
Sort (SRT)
blocks 1342
files 1340, 1342
SETAREACREATELIMIT qualifier
PROUTIL utility 2080
SETAREATOSSLIMIT qualifier
PROUTIL utility 2081
SETBLOBCREATELIMIT qualifier
PROUTIL utility 2082
SETBLOBTOSSLIMIT qualifier
PROUTIL utility 2083
SETTABLECREATELIMIT qualifier
PROUTIL utility 2084
SETTABLETOSSLIMIT qualifier
PROUTIL utility 2085
-SG startup parameter 1342
Shared device 112
Shared memory segment size (-shmsegsize)
startup parameter 1847
Shared Resources option
PROMON utility 1920
Shared-memory Overflow Size (-Mxs)
startup parameter 184, 1839
-shmsegsize startup parameter 1847
Shut Down Database option
PROMON utility 1924
Startup parameters
Before-image Buffers (-bibufs) 1317,
1324, 176, 183, 1814, 1817
Before-image Cluster Age (-G) 183,
1825
Before-image cluster size (-bi) 2092
Before-image Threshold (-bithold) 177
BI File Write (-Mf) 1316
Blocks in Database Buffers (-B) 139,
1310, 1326, 183, 1815, 1824,
2017
Buffered I/O (-r) 1844
Case Table (-cpcase) 186, 1820
Century Year Offset (-yy) 1810, 1850,
2017
Cluster Mode (-cluster) 1819
Collation Table (-cpcoll) 186, 1820
Configuration Properties File
(-properties) 1810, 1844
Conversion Map (-convmap) 186,
1819
Delayed BI File Write (-Mf) 1320,
1321, 183, 1826, 1836
Direct I/O (-directio) 1325, 183, 1824
Event Level (-evtlevel) 1613, 183,
1825
Group Delay (-groupdelay) 1320, 183,
1826
Hash Table Entries (-hash) 139, 183,
1827
Host Name (-H) 36, 189, 1826, 1845
Index Range Size (-indexrangesize)
187, 1816, 1828
Internal Code Page (-cpinternal) 35,
186, 1820, 1821, 202, 213, 222
Key Alias (-keyalias) 188
Key Alias Password (-keyaliaspasswd)
188
Lock Table Entries (-L) 183, 1831
Log File Code Page (-cplog) 186,
1821, 1823
Manual Server (-m2) 185, 1833
Maximum Clients per Server (-Ma)
1326, 189, 1834, 1837
Maximum Dynamic Server (-maxport)
189, 1835, 1837
Maximum JTA Transactions (-maxxids)
1835
Maximum Number of Users (-n) 1837
Maximum Servers (-Mn) 1326, 1330,
189, 1834, 1837, 1838
Maximum Servers per Broker (-Mpb)
189, 1839
Merge Number (-TM) 1342, 1343
Minimum Clients per Server (-Mi) 189,
1834, 1837, 1838
Minimum Dynamic Server (-minport)
189, 1835, 1837
Network Type (-N) 189, 1840
No Crash Protection (-i) 1321, 1534,
183, 1827
Storage areas 13, 24, 135, 148, 149,
1410, 1411, 1424, 2110, 2114
after-image (AI) 72, 219, 2211,
2212
application data (.Dn) 16
application data (Dn) 2090, 219
before-image (BI) 178
creating 17
database control (DB) 213, 216, 219
number of 24
primary recovery 219
primary recovery area
size 29
removing 2111
size of 24
transaction log (TL) 206, 219
truncating 205
Store and forward replication. See also
Synchronous replication 102
Stream Code Page (-cpstream) startup
parameter 186, 1823, 1824, 202,
213, 222
Structure description (.st) files 110, 124,
59, 72, 148, 149, 1410, 1413,
1414, 1424, 212, 214, 215, 216,
218, 2320, 2324
creating 13
example 18
Structure files
backing up 42
Synchronous replication 102
System crashes 618, 619
T
-T startup parameter 1341, 1342
TABANALYS qualifier
PROUTIL utility 1332, 1531, 1532,
2086 to 2087
Table Range Size (-tablerangesize) startup
parameter 187, 1816, 1850
TABLEMOVE qualifier
PROUTIL utility 1333, 1425, 1426,
1428, 2088 to 2089
-tablerangesize startup parameter 187,
1816, 1850
Tables
constant 1531
deleting 2090
fragmentation 205
freezing 810
TRUNCATE AREA qualifier
PROUTIL utility 1424, 2090 to 2091
TRUNCATE BI qualifier
PROUTIL utility 1312, 1319, 1424,
2092 to 2093
2PHASE BEGIN qualifier
PROUTIL utility 126, 206
2PHASE COMMIT qualifier
PROUTIL utility 1212
Two-phase commit 65, 102, 202, 206,
209, 2111, 256
deactivating 128
disabling 208, 2021
enabling 126
2PHASE COMMIT qualifier
PROUTIL utility 1989, 207
2PHASE END qualifier
PROUTIL utility 128, 208
2PHASE MODIFY qualifier
PROUTIL utility 209
2PHASE RECOVER qualifier
PROUTIL utility 1212, 1214, 1988,
2010
_TxeLock virtual system table 256
U
-U startup parameter 82
UNIX parameters
MAXUP 1328
NPROC 1328
SEMMNI 1330
SEMMNS 1330
SEMMNU 1330
SEMMSL 1330
SHMALL 2033
UNLOCK qualifier
PROSTRCT utility 2115
UPDATESCHEMA qualifier
PROUTIL utility 2094
UPDATEVST qualifier
PROUTIL utility 2095, 252
User Control option
PROMON utility 194
User ID (-U) startup parameter 82
User reports 89
_UserIndexStat virtual system table 256
V
Variable-length extents
on-line backups 44
Virtual system tables (VSTs) 133, 1311,
1345, 1428, 1816, 1828, 205, 2095
_ActAILog 253
_ActBILog 253
_ActBuffer 253
_ActIndex 253
_ActIOFile 253
_ActIOType 253
_ActLock 253
_ActOther 253
_ActPWs 254
_ActRecord 254
_ActServer 254
_ActSpace 254
_ActSummary 254
_AreaStatus 254
_AreaThreshold 255
_Block 255
_BuffStatus 255
_Checkpoint 255
_Code-Feature 255
_Connect 255
_Database-Feature 255
_DbStatus 255
_Filelist 255
_IndexStat 255
_Latch 255
_License 255
_Lock 255
_LockReq 256
_Logging 256
_MstrBlk 256
_MyConnection 256
_Resrc 256
_Segments 256
_Servers 256
_Startup 256
_StatBase 256
_TableStat 256
_Trans 256
_TxeLock 256
_UserIndexStat 256
_UserIO 256
_UserLock 257
_UserStatus 1427, 1428, 2054,
2061, 2063, 2069, 2089, 257
_UserTableStat 257
VLM Page Table Entry Optimization
(-Mpte) startup parameter 184, 1839
VSTs. See Virtual system tables (VSTs)
W
Watchdog process. See PROWDOG
command
WBREAK-COMPILER qualifier
PROUTIL utility 2096
WebSpeed
Messenger 2312
Transaction Server 2312
Windows Application Event Log 1613,
1825
Windows Performance tool 133
WORD-RULES qualifier
PROUTIL utility 2096
Y
-yy startup parameter 1810, 1850, 2017