IBM DB2 Universal Database
Data Recovery and High Availability
Guide and Reference
Version 8
SC09-4831-00
Before using this information and the product it supports, be sure to read the general information under Notices.
This document contains proprietary information of IBM. It is provided under a license agreement and is protected by
copyright law. The information contained in this publication does not include any product warranties, and any
statements provided in this manual should not be interpreted as such.
You can order IBM publications online or through your local IBM representative.
v To order publications online, go to the IBM Publications Center at www.ibm.com/shop/publications/order
v To find your local IBM representative, go to the IBM Directory of Worldwide Contacts at
www.ibm.com/planetwide
To order DB2 publications from DB2 Marketing and Sales in the United States or Canada, call 1-800-IBM-4YOU
(426-4968).
When you send information to IBM, you grant IBM a nonexclusive right to use or distribute the information in any
way it believes appropriate without incurring any obligation to you.
© Copyright International Business Machines Corporation 2001, 2002. All rights reserved.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
Contents

About This Book . . . vii
  Who Should Use this Book . . . vii
  How this Book is Structured . . . vii

Part 1. Data Recovery . . . 1

Chapter 1. Developing a Good Backup and Recovery Strategy . . . 3
  Developing a Backup and Recovery Strategy . . . 3
  Deciding How Often to Back Up . . . 7
  Storage Considerations . . . 9
  Keeping Related Data Together . . . 10
  Using Different Operating Systems . . . 10
  Crash Recovery . . . 11
  Crash Recovery - Details . . . 12
    Recovering Damaged Table Spaces . . . 12
    Reducing the Impact of Media Failure . . . 14
    Reducing the Impact of Transaction Failure . . . 16
    Recovering from Transaction Failures in a Partitioned Database Environment . . . 16
    Recovering from the Failure of a Database Partition Server . . . 20
    Recovering Indoubt Transactions on the Host when DB2 Connect Has the DB2 Syncpoint Manager Configured . . . 21
    Recovering Indoubt Transactions on the Host when DB2 Connect Does Not Use the DB2 Syncpoint Manager . . . 22
  Disaster Recovery . . . 23
  Version Recovery . . . 24
  Rollforward Recovery . . . 25
  Incremental Backup and Recovery . . . 28
  Incremental Backup and Recovery - Details . . . 30
    Restoring from Incremental Backup Images . . . 30
    Limitations to Automatic Incremental Restore . . . 32
  Understanding Recovery Logs . . . 34
  Recovery Log Details . . . 37
    Log Mirroring . . . 37
    Reducing Logging with the NOT LOGGED INITIALLY Parameter . . . 38
    Configuration Parameters for Database Logging . . . 39
    Managing Log Files . . . 45
    Managing Log Files with a User Exit Program . . . 47
    Log File Allocation and Removal . . . 49
    Blocking Transactions When the Log Directory File is Full . . . 50
    On Demand Log Archive . . . 51
    Using Raw Logs . . . 51
    How to Prevent Losing Log Files . . . 53
  Understanding the Recovery History File . . . 54
  Recovery History File - Garbage Collection . . . 56
    Garbage Collection . . . 56
  Understanding Table Space States . . . 59
  Enhancing Recovery Performance . . . 60
  Enhancing Recovery Performance - Parallel Recovery . . . 61
    Parallel Recovery . . . 61

Chapter 2. Database Backup . . . 63
  Backup Overview . . . 63
    Displaying Backup Information . . . 66
  Privileges, Authorities, and Authorization Required to Use Backup . . . 66
  Using Backup . . . 67
  Backing Up to Tape . . . 69
  Backing Up to Named Pipes . . . 71
  BACKUP DATABASE . . . 72
  db2Backup - Backup database . . . 77
  Backup Sessions - CLP Examples . . . 84
  Optimizing Backup Performance . . . 85

Chapter 3. Database Restore . . . 87
  Restore Overview . . . 87
    Optimizing Restore Performance . . . 88
  Privileges, Authorities, and Authorization Required to Use Restore . . . 88
  Using Restore . . . 89
  Using Incremental Restore in a Test and Production Environment . . . 90
  Redefining Table Space Containers During a Restore Operation (Redirected Restore) . . . 93
  Restoring to an Existing Database . . . 94
  Restoring to a New Database . . . 95
  RESTORE DATABASE . . . 95
  db2Restore - Restore database . . . 104
  Restore Sessions - CLP Examples . . . 115

Appendix A. How to Read the Syntax Diagrams . . . 203

Appendix F. Tivoli Storage Manager . . . 319
  Configuring a Tivoli Storage Manager Client . . . 319
About This Book
This book provides detailed information about, and shows you how to use,
the IBM DB2 Universal Database (UDB) backup, restore, and recovery utilities.
The book also explains the importance of high availability, and describes DB2
failover support on several platforms.
It is assumed that you are familiar with DB2 Universal Database, Structured
Query Language (SQL), and with the operating system environment in which
DB2 UDB is running. This manual does not contain instructions for installing
DB2, which depend on your operating system.
Data Recovery
Chapter 1, “Developing a Good Backup and Recovery Strategy”
Discusses factors to consider when choosing database and table space
recovery methods, including backing up and restoring a database or
table space, and using rollforward recovery.
Chapter 2, “Database Backup”
Describes the DB2 backup utility, used to create backup copies of a
database or table spaces.
Chapter 3, “Database Restore”
Describes the DB2 restore utility, used to rebuild damaged or
corrupted databases or table spaces that were previously backed up.
Chapter 4, “Rollforward Recovery”
Describes the DB2 rollforward utility, used to recover a database by
applying transactions that were recorded in the database recovery log
files.
High Availability
Appendixes
Appendix A, “How to Read the Syntax Diagrams”
Explains the conventions used in syntax diagrams.
Appendix B, “Warning, Error and Completion Messages”
Provides information about interpreting messages generated by the
database manager when a warning or error condition has been
detected.
Appendix C, “Additional DB2 Commands”
Describes recovery-related DB2 commands.
Appendix D, “Additional APIs and Associated Data Structures”
Describes recovery-related APIs and their data structures.
Appendix E, “Recovery Sample Program”
Provides the code listing for a sample program containing
recovery-related DB2 APIs and embedded SQL calls, and information
on how to use them.
Appendix F, “Tivoli Storage Manager”
Provides information about the Tivoli Storage Manager (TSM,
formerly ADSM) product, which you can use to manage database or
table space backup operations.
Appendix G, “User Exit for Database Recovery”
Discusses how user exit programs can be used with database log files,
and describes some sample user exit programs.
The sections that follow discuss the different recovery methods, and will
help you determine which recovery method is best suited to your business
environment.
The concept of a database backup is the same as any other data backup: taking
a copy of the data and then storing it on a different medium in case of failure
or damage to the original. The simplest case of a backup involves shutting
down the database to ensure that no further transactions occur, and then
simply backing it up. You can then rebuild the database if it becomes
damaged or corrupted in some way.
Recovery log files and the recovery history file are created automatically when
a database is created (Figure 1 on page 5). These log files are important if you
need to recover data that is lost or damaged. You cannot directly modify a
recovery log file or the recovery history file; however, you can delete entries
from the recovery history file using the PRUNE HISTORY command. You can
also use the rec_his_retentn database configuration parameter to specify the
number of days that the recovery history file will be retained.
[Figure 1. Recovery-related files: within a system, each instance contains one or more databases, and each database has its own recovery log files, recovery history file, and table space change history file.]
Each database includes recovery logs, which are used to recover from
application or system errors. In combination with the database backups, they
are used to recover the consistency of the database right up to the point in
time when the error occurred.
The recovery history file contains a summary of the backup information that
can be used to determine recovery options, if all or part of the database must
be recovered to a given point in time. It is used to track recovery-related
events such as backup and restore operations, among others. This file is
located in the database directory.
The table space change history file, which is also located in the database
directory, contains information that can be used to determine which log files
are required for the recovery of a particular table space.
If you have a recoverable database, you can back up, restore, and roll
individual table spaces forward, rather than the entire database. When you
back up a table space online, it is still available for use, and simultaneous
updates are recorded in the logs. When you perform an online restore or
rollforward operation on a table space, the table space itself is not available
for use until the operation completes, but users are not prevented from
accessing tables in other table spaces.
Related concepts:
v “Crash Recovery” on page 11
Related reference:
v “Recovery History Retention Period configuration parameter -
rec_his_retentn” in the Administration Guide: Performance
v “DB2 Data Links Manager system setup and backup recommendations” in
the DB2 Data Links Manager Administration Guide and Reference
Deciding How Often to Back Up
Your recovery plan should allow for regularly scheduled backup operations,
because backing up a database requires time and system resources. Your plan
may include a combination of full database backups and incremental backup
operations.
You should take full database backups regularly, even if you archive the logs
(which allows for rollforward recovery). It is more time consuming to rebuild
a database from a collection of table space backup images than it is to recover
the database from a full database backup image. Table space backup images
are useful for recovering from an isolated disk failure or an application error.
You should also consider not overwriting backup images and logs, saving at
least two full database backup images and their associated logs as an extra
precaution.
If the amount of time needed to apply archived logs when recovering and
rolling a very active database forward is a major concern, consider the cost of
backing up the database more frequently. This reduces the number of archived
logs you need to apply when rolling forward.
You can initiate a backup operation while the database is either online or
offline. If it is online, other applications or processes can connect to the
database, as well as read and modify data while the backup operation is
running. If the backup operation is running offline, other applications cannot
connect to the database.
To reduce the amount of time that the database is not available, consider
using online backup operations. Online backup operations are supported only
if rollforward recovery is enabled. If rollforward recovery is enabled and you
have a complete set of recovery logs, you can rebuild the database, should the
need arise.
Offline backup operations are faster than online backup operations, since there
is no contention for the data files.
The backup utility lets you back up selected table spaces. If you use DMS
table spaces, you can store different types of data in their own table spaces to
reduce the time required for backup operations. You can keep table data in
one table space, long field and LOB data in another table space, and indexes
in yet another table space. If you do this and a disk failure occurs, it is likely
to affect only one of the table spaces. Restoring or rolling forward one of these
table spaces will take less time than it would have taken to restore a single
table space containing all of the data.
You can also save time by taking backups of different table spaces at different
times, as long as the changes to them are not the same. So, if long field or
LOB data is not changed as frequently as the other data, you can back up
these table spaces less frequently. If long field and LOB data are not required
for recovery, you can also consider not backing up the table space that
contains that data. If the LOB data can be reproduced from a separate source,
choose the NOT LOGGED option when creating or altering a table to include
LOB columns.
Note: Consider the following if you keep your long field data, LOB data, and
indexes in separate table spaces, but do not back them up together: If
you back up a table space that does not contain all of the table data,
you cannot perform point-in-time rollforward recovery on that table
space. All the table spaces that contain any type of data for a table
must be rolled forward simultaneously to the same point in time.
If you reorganize a table, you should back up the affected table spaces after
the operation completes. If you have to restore the table spaces, you will not
have to roll forward through the data reorganization.
The time required to recover a database is made up of two parts: the time
required to complete the restoration of the backup; and, if the database is
enabled for forward recovery, the time required to apply the logs during the
rollforward operation. When formulating a recovery plan, you should take
these recovery costs and their impact on your business operations into
account. Testing your overall recovery plan will assist you in determining
whether the time required to recover the database is reasonable given your
business requirements. Following each test, you may want to increase the
frequency with which you take a backup. If rollforward recovery is part of
Related concepts:
v “Incremental Backup and Recovery” on page 28
Related reference:
v Appendix G, “User Exit for Database Recovery” on page 323
v “Configuration Parameters for Database Logging” on page 39
Storage Considerations
When deciding which recovery method to use, consider the storage space
required.
The version recovery method requires space to hold the backup copy of the
database and the restored database. The rollforward recovery method requires
space to hold the backup copy of the database or table spaces, the restored
database, and the archived database logs.
If a table contains long field or large object (LOB) columns, you should
consider placing this data into a separate table space. This will affect your
storage space considerations, as well as affect your plan for recovery. With a
separate table space for long field and LOB data, and knowing the time
required to back up long field and LOB data, you may decide to use a
recovery plan that only occasionally saves a backup of this table space. You
may also choose, when creating or altering a table to include LOB columns,
not to log changes to those columns. This will reduce the size of the required
log space and the corresponding log archive space.
The database logs can use up a large amount of storage. If you plan to use the
rollforward recovery method, you must decide how to manage the archived
logs. Your choices are the following:
v Use a user exit program to copy these logs to another storage device in
your environment.
v Manually copy the logs to a storage device or directory other than the
database log path directory after they are no longer in the active set of logs.
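The manual-copy option above can be sketched as follows. This is a simplified illustration, not the DB2 user exit interface: the `S*.LOG` file-name pattern, the directory arguments, and passing the active set as a plain set of names are all assumptions made for the sketch.

```python
import shutil
from pathlib import Path

def archive_closed_logs(log_dir, archive_dir, active_logs):
    """Copy log files that are no longer in the active set from the
    database log path to archive storage.

    active_logs: set of file names still in the active set (in practice
    this would be determined from the database configuration; here it is
    simply passed in). Returns the names of the files copied."""
    src, dst = Path(log_dir), Path(archive_dir)
    dst.mkdir(parents=True, exist_ok=True)
    copied = []
    for log in sorted(src.glob("S*.LOG")):
        if log.name not in active_logs:        # only logs out of the active set
            shutil.copy2(log, dst / log.name)  # copy, preserving timestamps
            copied.append(log.name)
    return copied
```

A real user exit program would also handle retrieval of archived logs and report success or failure back to the database manager; this sketch covers only the copy step.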
Keeping Related Data Together
As part of your database design, you will know the relationships that exist
between tables. These relationships can be expressed at the application level,
when transactions update more than one table, or at the database level, where
referential integrity exists between tables, or where triggers on one table affect
another table. You should consider these relationships when developing a
recovery plan. You will want to back up related sets of data together. Such
sets can be established at either the table space or the database level. By
keeping related sets of data together, you can recover to a point where all of
the data is consistent. This is especially important if you want to be able to
perform point-in-time rollforward recovery on table spaces.
Using Different Operating Systems
When working in an environment that has more than one operating system,
you must consider that in most cases, the backup and recovery plans cannot
be integrated. That is, you cannot usually back up a database on one
operating system, and then restore that database on another operating system.
In such cases, you should keep the recovery plans for each operating system
separate and independent.
There is, however, support for cross-platform backup and restore operations
between operating systems with similar architectures such as AIX® and Sun
Solaris, and between 32 bit and 64 bit operating systems. When you transfer
the backup image between systems, you must transfer it in binary mode. The
target system must have the same (or later) version of DB2® as the source
system. Restore operations to a down-level system are not supported.
If you must move tables from one operating system to another, and
cross-platform backup and restore support is not available in your
environment, you can use the db2move command, or the export utility
followed by the import or the load utility.
Related reference:
v “db2move - Database Movement Tool” in the Command Reference
v “EXPORT” in the Command Reference
v “IMPORT” in the Command Reference
v “LOAD” in the Command Reference
[Figure 2. Crash Recovery. Units of work in flight at the time of the crash are rolled back during crash recovery; in this example, all four units of work are rolled back.]
A transaction failure results from a severe error or condition that causes the
database or the database manager to end abnormally. Partially completed
units of work, or completed units of work that have not been flushed to disk
at the time of failure, leave the database in an inconsistent state. Following a
transaction failure, the database must be recovered. Conditions that can result
in transaction failure include:
v A power failure on the machine, causing the database manager and the
database partitions on it to go down
v A serious operating system error that causes DB2® to go down
v A hardware failure such as memory corruption, or disk, CPU, or network
failure.
Related reference:
v “Auto Restart Enable configuration parameter - autorestart” in the
Administration Guide: Performance
Recovering Damaged Table Spaces
A damaged table space has one or more containers that cannot be accessed.
This is often caused by media problems that are either permanent (for
example, a bad disk), or temporary (for example, an offline disk, or an
unmounted file system).
If the damaged table space is the system catalog table space, the database
cannot be restarted. If the container problems cannot be fixed while leaving the
original data intact, the only available options are:
v To restore the database
v To restore the catalog table space. (Table space restore is only valid for
recoverable databases, because the database must be rolled forward.)
If the damaged table space is not the system catalog table space, DB2®
attempts to make as much of the database available as possible.
If the damaged table space is the only temporary table space, you should
create a new temporary table space as soon as a connection to the database
can be made. Once created, the new temporary table space can be used, and
normal database operations requiring a temporary table space can resume.
Note: Putting a table space name into the DROP PENDING TABLESPACES
list does not mean that the table space will be in drop pending state.
This will occur only if the table space is found to be damaged during
the restart operation. Once the restart operation is successful, you
should issue DROP TABLESPACE statements to drop each of the table
spaces that are in drop pending state (invoke the LIST TABLESPACES
command to find out which table spaces are in this state). This way the
space can be reclaimed, or the table spaces can be recreated.
Reducing the Impact of Media Failure
To reduce the probability of media failure, and to simplify recovery from this
type of failure:
v Mirror or duplicate the disks that hold the data and logs for important
databases.
v Use a Redundant Array of Independent Disks (RAID) configuration, such as
RAID Level 5.
v In a partitioned database environment, set up a rigorous procedure for
handling the data and the logs on the catalog node. Because this node is
critical for maintaining the database:
– Ensure that it resides on a reliable disk
– Duplicate it
– Make frequent backups
– Do not put user data on it.
RAID level 5 involves data and parity striping by sectors, across all disks.
Parity is interleaved with data, rather than being stored on a dedicated drive.
Data protection is good: If any disk fails, the data can still be accessed by
using information from the other disks, along with the striped parity
information. Read performance is good, but write performance is not. A RAID
level 5 configuration requires a minimum of three identical disks. The amount
of disk space required for overhead varies with the number of disks in the
array. In the case of a RAID level 5 configuration with 5 disks, the space
overhead is 20 percent.
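The overhead figure quoted above follows from a one-line calculation: a RAID level 5 array dedicates one disk's worth of capacity, interleaved across all the disks, to parity, so the overhead fraction for N disks is 1/N. A minimal sketch (the function name is illustrative, not part of any product):

```python
def raid5_parity_overhead(num_disks):
    """Fraction of total capacity consumed by parity in a RAID level 5
    array: one disk's worth of space, interleaved across all disks."""
    if num_disks < 3:
        raise ValueError("RAID level 5 requires at least three identical disks")
    return 1.0 / num_disks
```

With five disks this gives 0.20, matching the 20 percent space overhead stated in the text; adding disks to the array lowers the overhead fraction.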
When using a RAID (but not a RAID level 0) disk array, a failed disk will not
prevent you from accessing data on the array. When hot-pluggable or
hot-swappable disks are used in the array, a replacement disk can be swapped
with the failed disk while the array is in use. With RAID level 5, if two disks
fail at the same time, all data is lost (but the probability of simultaneous disk
failures is very small).
You might consider using a RAID level 1 hardware disk array or a software
disk array for your logs, because this provides recoverability to the point of
failure, and offers good write performance, which is important for logs. In
cases where reliability is critical (because time cannot be lost recovering data
following a disk failure), and write performance is not so critical, consider
using a RAID level 5 hardware disk array. Alternatively, if write performance
is critical, and the cost of additional disk space is not significant, consider a
RAID level 1 hardware disk array for your data, as well as for your logs.
For detailed information about the available RAID levels, visit the following
web site: http://www.acnc.com/04_01_00.html
CAUTION:
Having the operating system boot drive in the disk array prevents your
system from starting if that drive fails. If the drive fails before the disk
array is running, the disk array cannot allow access to the drive. A boot
drive should be separate from the disk array.
Reducing the Impact of Transaction Failure
Related concepts:
v “Synchronizing Clocks in a Partitioned Database System” on page 132
Recovering from Transaction Failures in a Partitioned Database
Environment
If one of the servers responds with a NO, the transaction is rolled back.
Otherwise, the coordinator node begins the second phase.
During the second phase, the coordinator node writes a COMMIT log record,
then distributes a COMMIT request to all the servers that responded with a
YES. After all the other database partition servers have committed, they send
an acknowledgment of the COMMIT to the coordinator node. The transaction
is complete when the coordinator agent has received all COMMIT
acknowledgments from all the participating servers. At this point, the
coordinator agent writes a FORGET log record.
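The two phases described above can be sketched in miniature. The `Vote` type, the participant interface (`prepare`, `commit`, `rollback`), and the log list are illustrative stand-ins for DB2's internal protocol, not real APIs:

```python
from enum import Enum

class Vote(Enum):
    YES = "yes"
    NO = "no"

def two_phase_commit(participants, log):
    """Toy two-phase commit driver for a coordinator node.

    participants: objects exposing prepare() -> Vote, commit(), rollback().
    log: a plain list standing in for the coordinator's log records.
    Returns True if the transaction commits, False if it rolls back."""
    # Phase 1: ask every participant to prepare and collect the votes.
    votes = [p.prepare() for p in participants]
    if Vote.NO in votes:
        # Any NO vote rolls the whole transaction back.
        for p in participants:
            p.rollback()
        return False
    # Phase 2: write the COMMIT record, distribute COMMIT to the YES voters,
    # and write FORGET once every participant has acknowledged (here,
    # acknowledgment is modeled by commit() returning).
    log.append("COMMIT")
    for p in participants:
        p.commit()
    log.append("FORGET")
    return True
```

The FORGET record only appearing after every acknowledgment is what makes the coordinator-side indoubt case described below possible: a crash between the COMMIT and FORGET records leaves a committed but not-yet-forgotten transaction.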
Crash recovery reapplies the log records in the active log files to ensure that
the effects of all complete transactions are in the database. After the changes
have been reapplied, all uncommitted transactions are rolled back locally,
except for indoubt transactions. There are two types of indoubt transaction in a
partitioned database environment:
v On a database partition server that is not the coordinator node, a
transaction is in doubt if it is prepared but not yet committed.
v On the coordinator node, a transaction is in doubt if it is committed but not
yet logged as complete (that is, the FORGET record is not yet written). This
situation occurs when the coordinator agent has not received all the
COMMIT acknowledgments from all the servers that worked for the
application.
Crash recovery attempts to resolve all the indoubt transactions by doing one
of the following. The action that is taken depends on whether the database
partition server was the coordinator node for an application:
v If the server that restarted is not the coordinator node for the application, it
sends a query message to the coordinator agent to discover the outcome of
the transaction.
v If the server that restarted is the coordinator node for the application, it
sends a message to all the other agents (subordinate agents) that the
coordinator agent is still waiting for COMMIT acknowledgments.
Note: If multiple logical nodes are being used on a processor, the failure of
one logical node may cause other logical nodes on the same processor
to fail.
Related concepts:
v “Two-phase commit” in the Administration Guide: Planning
v “Error recovery during two-phase commit” in the Administration Guide:
Planning
Related tasks:
v “Manually resolving indoubt transactions” in the Administration Guide:
Planning
Related reference:
v “db2start - Start DB2” in the Command Reference
v “LIST INDOUBT TRANSACTIONS” in the Command Reference
Recovering from the Failure of a Database Partition Server
Procedure:
Related concepts:
v “Recovering from Transaction Failures in a Partitioned Database
Environment” on page 16
Related reference:
v “db2start - Start DB2” in the Command Reference
v “RESTART DATABASE” in the Command Reference
Recovering Indoubt Transactions on the Host when DB2 Connect Has the
DB2 Syncpoint Manager Configured
To access host or AS/400 database servers, DB2 Connect is used. The recovery
steps differ if DB2 Connect has the DB2 Syncpoint Manager configured.
Procedures:
Related tasks:
v “Recovering Indoubt Transactions on the Host when DB2 Connect Does
Not Use the DB2 Syncpoint Manager” on page 22
Related reference:
v “db2start - Start DB2” in the Command Reference
v “LIST INDOUBT TRANSACTIONS” in the Command Reference
v “RESTART DATABASE” in the Command Reference
Recovering Indoubt Transactions on the Host when DB2 Connect Does
Not Use the DB2 Syncpoint Manager
To access host or AS/400 database servers, DB2 Connect is used. The recovery
steps differ if DB2 Connect has the DB2 Syncpoint Manager configured.
Procedure:
Note: Because the DB2 Syncpoint Manager is not involved, you cannot use
the LIST DRDA INDOUBT TRANSACTIONS command.
1. On the OS/390 host, issue the command DISPLAY THREAD
TYPE(INDOUBT).
From this list identify the transaction that you want to heuristically
complete. For details about the DISPLAY command, see the DB2 for
OS/390 Command Reference. The LUWID displayed can be matched to the
same LUWID at the transaction manager database.
2. Issue the RECOVER THREAD(<LUWID>) ACTION(ABORT|COMMIT)
command, depending on what you want to do.
For details about the RECOVER command, see the DB2 for OS/390
Command Reference.
Related tasks:
v “Recovering Indoubt Transactions on the Host when DB2 Connect Has the
DB2 Syncpoint Manager Configured” on page 21
Related reference:
v “LIST INDOUBT TRANSACTIONS” in the Command Reference
Disaster Recovery
The term disaster recovery describes the activities that need to be done to
restore the database in the event of a fire, earthquake, vandalism, or other
catastrophic event. A plan for disaster recovery can include one or
more of the following:
v A site to be used in the event of an emergency
If your plan for disaster recovery is to recover the entire database on another
machine, you require at least one full database backup and all the archived
logs for the database. You may choose to keep a standby database up to date
by applying the logs to it as they are archived. Or, you may choose to keep
the database backup and log archives in the standby site, and perform restore
and rollforward operations only after a disaster has occurred. (In this case, a
recent database backup is clearly desirable.) With a disaster, however, it is
generally not possible to recover all of the transactions up to the time of the
disaster.
The usefulness of a table space backup for disaster recovery depends on the
scope of the failure. Typically, disaster recovery requires that you restore the
entire database; therefore, a full database backup should be kept at a standby
site. Even if you have a separate backup image of every table space, you
cannot use them to recover the database. If the disaster is a damaged disk, a
table space backup of each table space on that disk can be used to recover. If
you have lost access to a container because of a disk failure (or for any other
reason), you can restore the container to a different location.
Both table space backups and full database backups can have a role to play in
any disaster recovery plan. The DB2® facilities available for backing up,
restoring, and rolling data forward provide a foundation for a disaster
recovery plan. You should ensure that you have tested recovery procedures in
place to protect your business.
Related concepts:
v “Redefining Table Space Containers During a Restore Operation (Redirected
Restore)” on page 93
Version Recovery
[Figure 3. Version Recovery. The database is restored from the latest backup image, but all units of work processed between the time of backup and failure are lost.]
Using the version recovery method, you must schedule and perform full
backups of the database on a regular basis.
Rollforward Recovery
To use the rollforward recovery method, you must have taken a backup of the
database, and archived the logs (by enabling either the logretain or the userexit
database configuration parameters, or both). Restoring the database and
specifying the WITHOUT ROLLING FORWARD option is equivalent to using
the version recovery method. The database is restored to a state identical to
the one at the time that the offline backup image was made. If you restore the
database and do not specify the WITHOUT ROLLING FORWARD option for
the restore database operation, the database will be in rollforward pending
state at the end of the restore operation. This allows rollforward recovery to
take place.
[Figure 4. Database Rollforward Recovery. There can be more than one active log in the case of a long-running transaction.]
Table space rollforward recovery can be used in the following two situations:
v After a table space restore operation, the table space is always in
rollforward pending state, and it must be rolled forward. Invoke the
ROLLFORWARD DATABASE command to apply the logs against the table
space.
Note: If the table space in error contains the system catalog tables, you will
not be able to start the database. You must restore the
SYSCATSPACE table space, then perform rollforward recovery to the
end of the logs.
[Figure 5. Table Space Rollforward Recovery. There can be more than one active log in the case of a long-running transaction.]
Related concepts:
v “Understanding Recovery Logs” on page 34
Related reference:
v “ROLLFORWARD DATABASE” on page 134
Related samples:
v “dbrecov.out -- HOW TO RECOVER A DATABASE (C)”
v “dbrecov.sqc -- How to recover a database (C)”
v “dbrecov.out -- HOW TO RECOVER A DATABASE (C++)”
v “dbrecov.sqC -- How to recover a database (C++)”
Incremental Backup and Recovery
DB2® now supports incremental backup and recovery (but not of long field or
large object data). An incremental backup is a backup image that contains only
pages that have been updated since the previous backup was taken. In
addition to updated data and index pages, each incremental backup image
also contains all of the initial database meta-data (such as database
configuration, table space definitions, database history, and so on) that is
normally stored in full backup images.
The key difference between incremental and delta backup images is their
behavior when successive backups are taken of an object that is continually
changing over time. Each successive incremental image contains the entire
contents of the previous incremental image, plus any data that has changed,
or is new, since the previous full backup was produced. Delta backup images
contain only the pages that have changed since the previous image of any
type was produced.
To rebuild the database or the table space to a consistent state, the recovery
process must begin with a consistent image of the entire object (database or
table space) to be restored, and must then apply each of the appropriate
incremental backup images in the order described below.
For SMS and DMS table spaces, the granularity of this tracking is at the table
space level. In table space level tracking, a flag for each table space indicates
whether there are pages in that table space that need to be backed up.
Related tasks:
v “Restoring from Incremental Backup Images” on page 30
Procedure:
1. db2 restore db <dbname> incremental taken at <ts>
   where:
   <ts> points to the last incremental backup image (the target image)
   to be restored
2. db2 restore db <dbname> incremental taken at <ts1>
   where:
   <ts1> points to the initial full database (or table space) image
3. db2 restore db <dbname> incremental taken at <tsX>
   where:
   <tsX> points to each incremental backup image in creation sequence
If you are using manual incremental restore for a database restore operation,
and table space backup images have been produced, the table space images
must be restored in the chronological order of their backup time stamps.
If you want to use manual incremental restore, the db2ckrst utility can be
used to query the database history and generate a list of backup image time
stamps needed for an incremental restore. A simplified restore syntax for a
manual incremental restore is also generated. It is recommended that you
keep a complete record of backups, and only use this utility as a guide.
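A db2ckrst invocation might look like the following sketch; the database name is a placeholder and the timestamp matches the example used later in this section:

```shell
# Query the database history and list the chain of backup image
# time stamps needed to restore the image with this timestamp.
db2ckrst -d sample -t 20001228152133 -r database
```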
This will result in the DB2 restore utility performing each of the steps
described at the beginning of this section automatically. During the initial
phase of processing, the backup image with time stamp 20001228152133 is
read, and the restore utility verifies that the database, its history, and the table
space definitions exist and are valid.
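The automatic incremental restore that drives these phases is a single command of roughly this shape (the database name sample is a placeholder):

```shell
# DB2 reads the target image, builds the restore chain from the
# database history, and restores each image in the chain itself.
db2 restore database sample incremental automatic taken at 20001228152133
```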
Note: It is highly recommended that you not use the FORCE option of the
PRUNE HISTORY command. The default operation of this command
prevents you from deleting history entries that may be required for
recovery from the most recent, full database backup image, but with
the FORCE option, it is possible to delete entries that are required for
an automatic restore operation.
During the third phase of processing, DB2 will restore each of the remaining
backup images in the generated chain. If an error occurs during this phase,
you will have to issue the RESTORE DATABASE command with the
INCREMENTAL ABORT option to clean up any remaining resources. You will
then have to determine if the error can be resolved before you re-issue the
RESTORE command or attempt the manual incremental restore again.
Related concepts:
v “Incremental Backup and Recovery” on page 28
Related reference:
v “RESTORE DATABASE” on page 95
v “db2ckrst - Check Incremental Restore Image Sequence” on page 216
Limitations to Automatic Incremental Restore
1. If a table space name has been changed since the backup operation you
want to restore from, and you use the new name when you issue a table
space level restore operation, the required chain of backup images from
the database history will not be generated correctly.
Suggested workarounds:
v Use manual incremental restore.
v Restore the history file first from image <ts4> before issuing an
automatic incremental restore.
3. If you restore a backup image from one database into another database
and then do an incremental (delta) backup, you can no longer use
automatic incremental restore to restore this backup image.
Example:
db2 create db a
db2 create db b
SQL2542N No match for a database image file was found based on the source
database alias "B" and timestamp "ts1" provided.
Suggested workaround:
v Use manual incremental restore as follows:
db2 restore db b incremental taken at ts2
db2 restore db a incremental taken at ts1 into b
db2 restore db b incremental taken at ts2
v After the manual restore operation into database B, issue a full database
backup to start a new incremental chain.
Related concepts:
v “Incremental Backup and Recovery” on page 28
Related tasks:
v “Restoring from Incremental Backup Images” on page 30
Related reference:
v “RESTORE DATABASE” on page 95
Understanding Recovery Logs
All databases have logs associated with them. These logs keep records of
database changes. If a database needs to be restored to a point beyond the last
full, offline backup, logs are required to roll the data forward to the point of
failure.
There are two types of DB2® logging: circular and archive. Each provides a
different level of recovery capability:
v Circular logging is the default behavior when a new database is created.
(The logretain and userexit database configuration parameters are set to NO.) With
this type of logging, only full, offline backups of the database are allowed.
The database must be offline (inaccessible to users) when a full backup is
taken. As the name suggests, circular logging uses a “ring” of online logs to
provide recovery from transaction failures and system crashes. The logs are
used and retained only to the point of ensuring the integrity of current
transactions. Circular logging does not allow you to roll a database forward
through transactions performed after the last full backup operation. All
changes occurring since the last backup operation are lost. Since this type of
restore operation recovers your data to the specific point in time at which a
full backup was taken, it is called version recovery.
Active logs are used during crash recovery to prevent a failure (system
power or application error) from leaving a database in an inconsistent state.
The RESTART DATABASE command uses the active logs, if needed, to
move the database to a consistent and usable state. During crash recovery,
changes recorded in these logs that were not yet committed are rolled back.
Changes that were committed but not yet written from memory (the buffer
pool) to disk (database containers) are redone. These actions ensure the
integrity of the database. Active logs are located in the database log path
directory.
v Archive logging is used specifically for rollforward recovery. Enabling the
logretain and/or the userexit database configuration parameter will result in
archive logging. To archive logs, you can choose to have DB2 leave the log
files in the active path and then manually archive them, or you can install a
user exit program to automate the archiving. Archived logs are logs that
were active but are no longer required for crash recovery.
Logs are used between backups to track the changes to the databases.
Figure 7. Active and Archived Database Logs in Rollforward Recovery. There can be more than
one active log in the case of a long-running transaction.
To determine which log extents in the database log path directory are archived
logs, check the value of the loghead database configuration parameter. This
parameter indicates the lowest numbered log that is active. Those logs with
sequence numbers less than loghead are archived logs and can be moved. You
can check the value of this parameter by using the Control Center, or by
using the command line processor and the GET DATABASE
CONFIGURATION command to view the "First active log file". For more
information about this configuration parameter, see the Administration Guide:
Performance book.
Related concepts:
v “Log Mirroring” on page 37
Log Mirroring
DB2® supports log mirroring at the database level. Mirroring log files helps
protect a database from:
v Accidental deletion of an active log
v Data corruption caused by hardware failure
If you are concerned that your active logs may be damaged (as a result of a
disk crash), you should consider using a new DB2 configuration parameter,
MIRRORLOGPATH, to specify a secondary path for the database to manage
copies of the active log, mirroring the volumes on which the logs are stored.
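Setting the mirror path is a one-time configuration change, sketched below; the database name and path are placeholders:

```shell
# Direct DB2 to write a second copy of each active log file
# to a separate volume.
db2 update db cfg for sample using MIRRORLOGPATH /mirror/logs
```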
If there is an error writing to either the active log path or the mirror log path,
the database will mark the failing path as “bad”, write a message to the
administration notification log, and write subsequent log records to the
remaining “good” log path only. DB2 will not attempt to use the “bad” path
again until the current log file is completed. When DB2 needs to open the
next log file, it will verify that this path is valid, and if so, will begin to use it.
If not, DB2 will not attempt to use the path again until the next log file is
accessed for the first time. There is no attempt to synchronize the log paths,
but DB2 keeps information about access errors that occur, so that the correct
paths are used when log files are archived. If a failure occurs while writing to
the remaining “good” path, the database shuts down.
Related reference:
v “Mirror Log Path configuration parameter - mirrorlogpath” in the
Administration Guide: Performance
If your application creates and populates work tables from master tables, and
you are not concerned about the recoverability of these work tables because
they can be easily recreated from the master tables, you may want to create
the work tables specifying the NOT LOGGED INITIALLY parameter on the
CREATE TABLE statement. The advantage of using the NOT LOGGED
INITIALLY parameter is that any changes made on the table (including insert,
delete, update, or create index operations) in the same unit of work that
creates the table will not be logged. This not only reduces the logging that is
done, but may also increase the performance of your application. You can
achieve the same result for existing tables by using the ALTER TABLE
statement with the NOT LOGGED INITIALLY parameter.
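A minimal sketch of both forms follows; the table and column names are invented for illustration:

```shell
# Create a work table whose population in the creating unit of
# work is not logged.
db2 "CREATE TABLE work_temp (id INT, payload VARCHAR(100)) NOT LOGGED INITIALLY"

# Reactivate the attribute on an existing table; changes made in
# the same unit of work are then not logged.
db2 "ALTER TABLE work_temp ACTIVATE NOT LOGGED INITIALLY"
```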
Notes:
1. You can create more than one table with the NOT LOGGED INITIALLY
parameter in the same unit of work.
2. Changes to the catalog tables and other user tables are still logged.
Because changes to the table are not logged, you should consider the
following when deciding to use the NOT LOGGED INITIALLY table attribute:
v All changes to the table will be flushed out to disk at commit time. This
means that the commit may take longer.
v If the NOT LOGGED INITIALLY attribute is activated and an activity
occurs that is not logged, the entire unit of work will be rolled back if a
statement fails or a ROLLBACK TO SAVEPOINT is executed (SQL1476N).
v You cannot recover these tables when rolling forward. If the rollforward
operation encounters a table that was created or altered with the NOT
LOGGED INITIALLY option, the table is marked as unavailable. After the
database is recovered, any attempt to access the table returns SQL1477N.
Note: When a table is created, row locks are held on the catalog tables until
a COMMIT is done. To take advantage of the no logging behavior,
you must populate the table in the same unit of work in which it is
created. This has implications for concurrency.
Related concepts:
v “Application processes, concurrency, and recovery” in the SQL Reference,
Volume 1
Related tasks:
v “Creating a table space” in the Administration Guide: Implementation
Related reference:
v “DECLARE GLOBAL TEMPORARY TABLE statement” in the SQL
Reference, Volume 2
Configuration Parameters for Database Logging
Primary logs (logprimary)
This parameter specifies the number of primary logs of size logfilsz
that will be created.
A primary log, whether empty or full, requires the same amount of
disk space. Thus, if you configure more logs than you need, you use
disk space unnecessarily. If you configure too few logs, you can
encounter a log-full condition. As you select the number of logs to
configure, you must consider the size you make each log and whether
your application can handle a log-full condition. The total log file size
limit on active log space is 256 GB.
If you are enabling an existing database for rollforward recovery,
change the number of primary logs to the sum of the number of
primary and secondary logs, plus 1. Additional information is logged
for LONG VARCHAR and LOB fields in a database enabled for
rollforward recovery.
Secondary logs (logsecond)
This parameter specifies the number of secondary log files that are
created and used for recovery, if needed.
If the primary log files become full, secondary log files (of size
logfilsiz) are allocated, one at a time as needed, up to the maximum
number specified by this parameter.
Note: To enable infinite active log space, set logsecond to -1 (the
userexit configuration parameter must also be set to ON).
Note: The default values for the logretain and userexit database configuration
parameters do not support rollforward recovery, and must be changed
if you are going to use them.
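Enabling archive logging for an existing database is a configuration change of roughly this form; note that the database is then in backup pending state until a full backup completes (names and paths are placeholders):

```shell
# Turn on log retention; the database becomes recoverable.
db2 update db cfg for sample using LOGRETAIN ON

# A full backup is required before the database can be used again.
db2 backup database sample to /backup
```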
Overflow log path (overflowlogpath)
This parameter can be used for several functions, depending on your
logging requirements. You can specify a location for DB2 to find log
files that are needed for a rollforward operation. It is similar to the
OVERFLOW LOG PATH option of the ROLLFORWARD command;
however, instead of specifying the OVERFLOW LOG PATH option for
every ROLLFORWARD command issued, you can set this
configuration parameter once. If both are used, the OVERFLOW LOG
PATH option will override the overflowlogpath configuration
parameter for that rollforward operation.
If logsecond is set to -1, you can specify a directory for DB2 to store
active log files retrieved from the archive. (Active log files must be
retrieved for rollback operations if they are no longer in the active log
path).
If overflowlogpath is not specified, DB2 will retrieve the log files into
the active log path. By specifying this parameter you can provide
additional storage for DB2 to keep the retrieved log files. The benefits
include spreading the I/O cost across different disks, and allowing more
log files to be stored in the active log path.
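Setting the parameter is a single configuration update, for example (the path is a placeholder):

```shell
# Give DB2 a separate location for retrieved archive log files.
db2 update db cfg for sample using OVERFLOWLOGPATH /overflow/logs
```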
For example, if you are using the db2ReadLog API for replication,
you can use overflowlogpath to specify a location for DB2 to search for
log files that are needed for this API. If the log file is not found (in
either the active log path or the overflow log path) and the database
is configured with userexit enabled, DB2 will retrieve the log file.
Related concepts:
v “Managing Log Files” on page 45
v “Enhancing Recovery Performance” on page 60
When a rollforward operation completes successfully, the last log that was
used is truncated, and logging begins with the next sequential log. Any log
in the log path directory with a sequence number greater than the last log
used for rollforward recovery is re-used. Any entries in the truncated log
following the truncation point are overwritten with zeros. Ensure that you
make a copy of the logs before invoking the rollforward utility. (You can
invoke a user exit program to copy the logs to another location.)
v If a database has not been activated (by way of the ACTIVATE DATABASE
command), DB2 truncates the current log file when all applications have
disconnected from the database. The next time an application connects to
the database, DB2 starts logging to a new log file. If many small log files
are being produced on your system, you may want to consider using the
ACTIVATE DATABASE command. This not only saves the overhead of
having to initialize the database when applications connect, it also saves the
overhead of having to allocate a large log file, truncate it, and then allocate
a new large log file.
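The activation itself is a simple sketch (the database name is a placeholder):

```shell
# Keep the database initialized between connections, so the current
# log file is not truncated at each last disconnect.
db2 activate database sample

# Later, when the database is no longer needed:
db2 deactivate database sample
```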
v An archived log may be associated with two or more different log sequences
for a database, because log file names are reused (see Figure 8 on page 46).
For example, if you want to recover Backup 2, there are two possible log
sequences that could be used. If, during full database recovery, you roll
forward to a point in time and stop before reaching the end of the logs, you
S0000013.LOG S0000014.LOG . . .
Restore Backup 2
and Roll Forward to
end of log 12.
Related reference:
v Appendix G, “User Exit for Database Recovery” on page 323
Note: On Windows® operating systems, you cannot use a REXX user exit
to archive logs.
v When archiving, a log file is passed to the user exit when it is full, even if
the log file is still active and is needed for normal processing. This allows
copies of the data to be moved away from volatile media as quickly as
possible. The log file passed to the user exit is retained in the log path
directory until it is no longer needed for normal processing. At this point,
the disk space is reused.
v DB2® opens a log file in read mode when it starts a user exit program to
archive the file. On some platforms, this prevents the user exit program
from being able to delete the log file. Other platforms, like AIX, allow
processes, including the user exit program, to delete log files. A user exit
program should never delete a log file after it is archived, because the file
could still be active and needed for crash recovery. DB2 manages disk space
reuse when log files are archived.
v When a log file has been archived and is inactive, DB2 does not delete the
file but renames it as the next log file when such a file is needed. This
results in a performance gain, because creating a new log file (instead of
renaming the file) causes all pages to be written out to guarantee the disk
space. It is more efficient to reuse than to free up and then reacquire the
necessary pages on disk.
v DB2 will not invoke the user exit program to retrieve the log file during
crash recovery or rollback unless the logsecond database configuration
parameter is set to -1.
v A user exit program does not guarantee rollforward recovery to the point of
failure, but only attempts to make the failure window smaller. As log files
fill, they are queued for the user exit routine. Should the disk containing
the log fail before a log file is filled, the data in that log file is lost. Also,
since the files are queued for archiving, the disk can fail before all the files
are copied, causing any log files in the queue to be lost.
v The configured size of each individual log file has a direct bearing on the
user exit. If each log file is very large, a large amount of data can be lost if
a disk fails. A database configured with small log files causes the data to be
passed to the user exit routine more often.
Note: To free unused log space, the log file is truncated before it is
archived.
v A copy of the log should be made to another physical device so that the
offline log file can be used by rollforward recovery if the device containing
the log file experiences a media failure. This should not be the same device
containing database data files.
v If you have enabled user exit programs and are using a tape drive as a
storage device for logs and backup images, you need to ensure that the
destination for the backup images and the archived logs is not the same
tape drive. Since some log archiving may take place while a backup
operation is in progress, an error may occur when the two processes are
trying to write to the same tape drive at the same time.
v In some cases, if a database is closed before a positive response has been
received from a user exit program for an archive request, the database
manager will send another request when the database is opened. Thus, a
log file may be archived more than once.
v If a user exit program receives a request to archive a file that does not exist
(because there were multiple requests to archive and the file was deleted
after the first successful archiving operation), or to retrieve a file that does
not exist (because it is located in another directory or the end of the logs
has been reached), it should ignore this request and pass a successful return
code.
v The user exit program should allow for the existence of different log files
with the same name after a point in time recovery; it should be written to
preserve both log files and to associate those log files with the correct
recovery path.
v If a user exit program is enabled for two or more databases that are using
the same tape device to archive log files, and a rollforward operation is
taking place on one of the databases, the other database(s) should not be
active. If another database tries to archive a log file while the rollforward
operation is in progress, the logs required for the rollforward operation may
not be found or the new log file archived to the tape device might
overwrite the log files previously stored on that tape device.
Related concepts:
v “Managing Log Files” on page 45
Log File Allocation and Removal
Log files in the database log directory are never removed if they may be
required for crash recovery. When the userexit database configuration
parameter is enabled, a full log file becomes a candidate for removal only
after it is no longer required for crash recovery. A log file which is required
for crash recovery is called an active log. A log file which is not required for
crash recovery is called an archived log.
The process of allocating new log files and removing old log files is
dependent on the settings of userexit and logretain database configuration
parameters:
Both logretain and userexit are set to OFF
Circular logging will be used. Rollforward recovery is not supported
with circular logging, while crash recovery is.
During circular logging, new log files, other than secondary logs, are
not generated and old log files are not deleted. Log files are handled
in a circular fashion. That is, when the last log file is full, DB2® begins
writing to the first log file.
A log full situation can occur if all of the log files are active and the
circular logging process cannot wrap to the first log file. Secondary
log files are created when all the primary log files are active and full.
Once a secondary log is created, it is not deleted until the database is
restarted.
Logretain is set to ON and userexit is set to OFF
Both rollforward recovery and crash recovery are enabled. The
database is known to be recoverable. When userexit is set to OFF, DB2
does not delete log files from the database log directory. Each time a
log file becomes full, DB2 begins writing records to another log file,
and (if the maximum number of primary and secondary logs has not
been reached) creates a new log file.
Userexit is set to ON
When both logretain and userexit are set to ON, both rollforward
recovery and crash recovery are enabled. When a log file becomes full,
it is automatically archived using the user supplied user exit program.
If an error is encountered while archiving a log file, archiving of log files will
be suspended for five minutes before being attempted again. DB2 will then
continue archiving log files as they become full. Log files that became full
during the five minute waiting period will not be archived immediately after
the delay; instead, DB2 will spread the archiving of these files over time.
The easiest way to remove old log files is to restart the database. Once the
database is restarted, only new log files and log files that the user exit
program failed to archive will be found in the database directory.
When a database is restarted, if the number of empty logs is less than the
number of primary logs specified by the logprimary configuration parameter,
additional log files will be allocated to make up the difference. If there are
more empty logs than primary logs available in the database directory, the
database can be restarted with as many available empty logs as are found in
the database directory. After database shutdown, secondary log files that have
been created will remain in the active log path when the database is restarted.
Blocking Transactions When the Log Directory File is Full
The blk_log_dsk_ful database configuration parameter can be set to prevent
"disk full" errors from being generated when DB2 cannot create a new log
file in the active log path. Instead, DB2 will attempt to create the log file
every five minutes until it succeeds.
Until the log file is successfully created, any user application that attempts to
update table data will not be able to commit transactions. Read-only queries may
not be directly affected; however, if a query needs to access data that is locked
by an update request, or a data page that is fixed in the buffer pool by the
updating application, read-only queries will also appear to hang.
Related concepts:
v “Understanding Recovery Logs” on page 34
v “Managing Log Files with a User Exit Program” on page 47
On Demand Log Archive
DB2® now supports the closing (and, if the user exit option is enabled, the
archiving) of the active log for a recoverable database at any time. This allows
you to collect a complete set of log files up to a known point, and then to use
these log files to update a standby database.
You can initiate on demand log archiving by invoking the ARCHIVE LOG
command, or by calling the db2ArchiveLog API.
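An on demand archive can be sketched as follows (the database name is a placeholder):

```shell
# Close and, if the user exit option is enabled, archive the
# current active log file for the database.
db2 archive log for database sample
```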
Related reference:
v “ARCHIVE LOG” on page 225
v “db2ArchiveLog - Archive Active Log” in the Administrative API Reference
Using Raw Logs
You can use a raw device for your database log. There are both advantages
and disadvantages in doing so.
v The advantages are:
– You can attach more than 26 physical drives to a system.
– The file I/O path length is shorter. This may improve performance on
your system. You should conduct benchmarks to evaluate if there are
measurable benefits for your work load.
v The disadvantages are:
– The device cannot be shared by other applications; the entire device must
be assigned to DB2.
You can configure a raw log with the newlogpath database configuration
parameter. Before doing so, however, consider the advantages and
disadvantages listed above, and the additional considerations listed below:
v Only one device is allowed. You can define the device over multiple disks
at the operating system level. DB2® will make an operating system call to
determine the size of the device in 4-KB pages.
If you use multiple disks, this will provide a larger device, and the striping
that results can improve performance through faster I/O throughput.
v DB2 will attempt to write to the last 4-KB page of the device. If the device
size is greater than 2 GB, the attempt to write to the last page will fail on
operating systems that do not provide support for devices larger than 2 GB.
In this situation, DB2 will attempt to use all pages, up to the supported
limit.
Information about the size of the device is used to determine the space
(in 4-KB pages) available to DB2 under the support of the operating
system. The amount of disk space that DB2 can write to is referred to as the
device-size-available.
The first 4-KB page of the device is not used by DB2 (this space is generally
used by the operating system for other purposes). This means that the total
space available to DB2 is device-size-available = device-size - 1.
v The logsecond parameter is not used. DB2 will not allocate secondary logs.
The size of active log space is the number of 4-KB pages that result from
logprimary x logfilsiz.
v Log records are still grouped into log extents, each with a log file size
(logfilsiz) of 4-KB pages. Log extents are placed in the raw device, one after
another. Each extent also consists of an extra two pages for the extent
header. This means that the number of available log extents the device can
support is device-size-available / (logfilsiz + 2)
v The device must be large enough to support the active log space. That is,
the number of available log extents must be greater than (or equal to) the
value specified for the logprimary configuration parameter. If the userexit
configuration parameter is enabled, ensure that the raw device can contain
more logs than the value specified for the logprimary configuration
parameter. This will compensate for the delay incurred when the user exit
program is archiving a log file.
v If you are using circular logging, the logprimary configuration parameter
will determine the number of log extents that are written to the device. This
may result in unused space on the device.
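The extent arithmetic above can be checked with ordinary shell arithmetic; the device size and logfilsiz values here are invented for illustration:

```shell
# Hypothetical raw device of 262144 4-KB pages (1 GB).
device_size=262144
# DB2 skips the first page of the device.
device_size_available=$(( device_size - 1 ))
# Log file size in 4-KB pages; each extent needs 2 extra header pages.
logfilsiz=1022
extents=$(( device_size_available / (logfilsiz + 2) ))
echo "$extents"   # number of log extents the device can hold
```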
Related tasks:
v “Specifying raw I/O” in the Administration Guide: Implementation
Related reference:
v “db2ReadLog - Asynchronous Read Log” on page 263
v Appendix F, “Tivoli Storage Manager” on page 319
How to Prevent Losing Log Files
Related concepts:
v “Understanding Recovery Logs” on page 34
Understanding the Recovery History File
You can use the summarized backup information in this file to recover all or
part of a database to a given point in time. The information in the file
includes:
v An identification (ID) field to uniquely identify each entry
v The part of the database that was copied and how
v The time the copy was made
v The location of the copy (stating both the device information and the logical
way to access the copy)
v The last time a restore operation was done
v The time at which a table space was renamed, showing the previous and
the current name of the table space
v The status of a backup operation: active, inactive, expired, or deleted
v The last log sequence number saved by the database backup or processed
during a rollforward recovery operation.
To see the entries in the recovery history file, use the LIST HISTORY
command.
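For example (the database name is a placeholder):

```shell
# Show all backup entries recorded in the recovery history file.
db2 list history backup all for sample
```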
Related reference:
v “Recovery History Retention Period configuration parameter -
rec_his_retentn” in the Administration Guide: Performance
v “LIST HISTORY” on page 228
Although you can use the PRUNE HISTORY command at any time to remove
entries from the history file, it is recommended that such pruning be left to
DB2. The number of DB2® database backups recorded in the recovery history
file is monitored automatically by DB2 garbage collection. DB2 garbage
collection is invoked:
v After a full, non-incremental database backup operation completes
successfully.
v After a database restore operation that does not require a subsequent
rollforward operation completes successfully.
v After a database rollforward operation completes successfully.
The configuration parameter num_db_backups defines how many active full
(non-incremental) database backup images are kept. The value of this
parameter is used to scan the history file, starting with the last entry.
An active database backup is one that can be restored and rolled forward using
the current logs to recover the current state of the database. An inactive
database backup is one that, if restored, moves the database back to a previous
state.
Figure 10. Active Database Backups. The value of num_db_backups has been set to four.
All active database backup images that are no longer needed are marked as
“expired”. These images are considered to be unnecessary, because more
recent backup images are available. All table space backup images and load
backup copies that were taken before the database backup image expired are
also marked as “expired”.
All database backup images that are marked as “inactive” and that were taken
prior to the point at which an expired database backup was taken are also
marked as “expired”. All associated inactive table space backup images and
load backup copies are also marked as “expired”.
If an active database backup image is restored, but it is not the most recent
database backup recorded in the history file, any subsequent database backup
images belonging to the same log sequence are marked as “inactive”.
DB2 garbage collection is also responsible for marking the history file entries
for a DB2 database or table space backup image as “inactive”, if that backup
does not correspond to the current log sequence, also called the current log
chain. The current log sequence is determined by the DB2 database backup
image that has been restored, and the log files that have been processed. Once
a database backup image is restored, all subsequent database backup images
become “inactive”, because the restored image begins a new log chain. (This is
true if the backup image was restored without rolling forward. If a
rollforward operation has occurred, all database backups that were taken after
the break in the log chain are marked as “inactive”. It is conceivable that an
older database backup image will have to be restored because the rollforward
utility has gone through the log sequence containing a damaged current
backup image.)
A table space-level backup image becomes “inactive” if, after it is restored, the
current state of the database cannot be reached by applying the current log
sequence.
Related concepts:
v “Understanding the Recovery History File” on page 54
Related reference:
v “PRUNE HISTORY/LOGFILE” on page 231
The current status of a table space is reflected by its state. The table space
states most commonly associated with recovery are:
v Rollforward pending. A table space is put in this state after it is restored, or
following an input/output (I/O) error. After it is restored, the table space
remains in this state until a successful rollforward operation completes.
Note: If you back up a table space that contains table data without the
associated long or LOB fields, you cannot perform point-in-time
rollforward recovery on that table space. All the table spaces for a
table must be rolled forward simultaneously to the same point in
time.
v The following apply for both backup and restore operations:
– Multiple I/O buffers and devices should be used.
– Allocate at least twice as many buffers as devices being used.
– Do not overload the I/O device controller bandwidth.
– Use more buffers of smaller size rather than a few large buffers.
– Tune the number and the size of the buffers according to the system
resources.
– Use the PARALLELISM option.
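To illustrate these guidelines together, the following sketch backs up to two target directories with twice as many buffers as devices and a parallelism of two (the database alias, paths, and values are hypothetical and should be tuned to your system resources):

```shell
# Two devices, four buffers (2x devices), moderate buffer size, parallelism 2.
db2 backup db sample to /backup/dir1, /backup/dir2 with 4 buffers buffer 1024 parallelism 2
```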
DB2® uses multiple agents to perform both crash recovery and database
rollforward recovery. You can expect better performance during these
operations, particularly on symmetric multi-processor (SMP) machines; using
multiple agents during database recovery takes advantage of the extra CPUs
that are available on SMP machines.
DB2 distributes log records to these agents so that they can be reapplied
concurrently, where appropriate. For example, the processing of log records
associated with insert, delete, update, add key, and delete key operations can
be parallelized in this way. Because the log records are parallelized at the
page level (log records on the same data page are processed by the same
agent), performance is enhanced, even if all the work was done on one table.
Related concepts:
v “Enhancing Recovery Performance” on page 60
Backup Overview
The simplest form of the DB2® BACKUP DATABASE command requires only
that you specify the alias name of the database that you want to back up. For
example:
db2 backup db sample
If the command completes successfully, you will have acquired a new backup
image that is located in the path or the directory from which the command
was issued. It is located in this directory because the command in this
example does not explicitly specify a target location for the backup image. On
Windows® operating systems, for example, this command (when issued from
the root directory) creates an image that appears in a directory listing as
follows:
Directory of D:\SAMPLE.0\DB2\NODE0000\CATN0000\20010320
Note: If the DB2 client and server are not located on the same system, the
default target directory for the backup image is the current working
directory.
Backup images are created at the target location, which you can optionally
specify when you invoke the backup utility. This location can be:
v A directory (for backups to disk or diskette)
v A device (for backups to tape)
v A Tivoli® Storage Manager (TSM) server
v Another vendor’s server
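For example, hedged sketches of a backup to each type of target (the paths, device name, and vendor library name are hypothetical):

```shell
db2 backup db sample to /db2backup/dir1          # directory on disk
db2 backup db sample to /dev/rmt0                # tape device
db2 backup db sample use tsm                     # Tivoli Storage Manager server
db2 backup db sample load /usr/lib/libvendor.a   # another vendor's product
```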
On UNIX® based systems, file names for backup images created on disk
consist of a concatenation of several elements, separated by periods:
DB_alias.Type.Inst_name.NODEnnnn.CATNnnnn.timestamp.Seq_num
For example:
STAFF.0.DB201.NODE0000.CATN0000.19950922120112.001
On Windows operating systems, the same elements form a subdirectory tree,
with the time stamp split between a date subdirectory and a time-based file
name. For example:
SAMPLE.0\DB2\NODE0000\CATN0000\20010320\122644.001
Database alias A 1- to 8-character database alias name that
was specified when the backup utility was
invoked.
Type Type of backup operation, where: 0 represents
a full database-level backup, 3 represents a
table space-level backup, and 4 represents a
backup image generated by the LOAD...COPY
TO command.
Instance name A 1- to 8-character name of the current
instance that is taken from the
DB2INSTANCE environment variable.
Node number The node number. In non-partitioned database
systems, this is always NODE0000. In
partitioned database systems, it is NODExxxx,
where xxxx is the number assigned to the
node in the db2nodes.cfg file.
You cannot back up a database that is in an unusable state, except when that
database is in backup pending state. If any table space is in an abnormal state,
you cannot back up the database or that table space, unless it is in backup
pending state.
The backup utility provides concurrency control for multiple processes that
are making backup copies of different databases. This concurrency control
keeps the backup target devices open until all the backup operations have
ended. If an error occurs during a backup operation, and an open container
cannot be closed, other backup operations targeting the same drive may
receive access errors. To correct such access errors, you must terminate the
backup operation that caused the error and disconnect from the target device.
If you are using the backup utility for concurrent backup operations to tape,
ensure that the processes do not target the same tape.
Displaying Backup Information
You can use db2ckbkp to display information about existing backup images.
This utility allows you to:
v Test the integrity of a backup image and determine whether or not it can be
restored.
v Display information that is stored in the backup header.
v Display information about the objects and the log file header in the backup
image.
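For example, assuming a backup image file name like the one shown earlier (the name here is hypothetical), the image can be checked as follows:

```shell
# Verify that the image can be restored:
db2ckbkp SAMPLE.0.DB201.NODE0000.CATN0000.19950922120112.001
# Display only the backup header information:
db2ckbkp -h SAMPLE.0.DB201.NODE0000.CATN0000.19950922120112.001
```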
Related concepts:
v “Understanding the Recovery History File” on page 54
Related reference:
v “db2ckbkp - Check Backup” on page 213
v Appendix F, “Tivoli Storage Manager” on page 319
Prerequisites:
You should not be connected to the database that is to be backed up: the
backup utility automatically establishes a connection to the specified database,
and this connection is terminated at the completion of the backup operation.
The database can be local or remote. The backup image remains on the
database server, unless you are using a storage management product such as
Tivoli Storage Manager (TSM).
Restrictions:
Procedure:
The backup utility can be invoked through the command line processor (CLP),
the Backup Database notebook or Wizard in the Control Center, or the
db2Backup application programming interface (API).
Related concepts:
v “Administrative APIs in Embedded SQL or DB2 CLI Programs” in the
Application Development Guide: Programming Client Applications
v “Introducing the plug-in architecture for the Control Center” in the
Administration Guide: Implementation
Related tasks:
v “Migrating databases” in the Quick Beginnings for DB2 Servers
Related reference:
v “LIST DBPARTITIONNUMS” in the Command Reference
v “db2Backup - Backup database” on page 77
When you back up your database or table space, you must correctly set your
block size and your buffer size. This is particularly true if you are using a
variable block size (on AIX, for example, if the block size has been set to
zero).
There is a restriction on the number of fixed block sizes that can be used
when backing up. This restriction exists because DB2® writes out the backup
image header as a 4-KB block. The only fixed block sizes DB2 supports are
512, 1024, 2048, and 4096 bytes. If you are using a fixed block size, you can
specify any backup buffer size. However, you may find that your backup
operation will not complete successfully if the fixed block size is not one of
the sizes that DB2 supports.
If your database is large, using a fixed block size means that your backup
operations will take a long time. You may want to consider using a variable
block size.
Note: Use of a variable block size is currently not supported. If you must use
this option, ensure that you have well tested procedures in place that
enable you to recover successfully, using backup images that were
created with a variable block size.
When using a variable block size, you must specify a backup buffer size that
is less than or equal to the maximum limit for the tape devices that you are
using. For optimal performance, the buffer size must be equal to the
maximum block size limit of the device being used.
On Windows operating systems, the tape can be initialized before the backup
operation is started, using the INITIALIZE TAPE command:
db2 initialize tape on <device> using <blksize>
Where:
<device>
is a valid tape device name. The default on Windows operating
systems is \\.\TAPE0.
<blksize>
is the blocking factor for the tape. It must be a factor or multiple of
4096. The default value is the default block size for the device.
Restoring from a backup image with variable block size may return an error.
If this happens, you may need to rewrite the image using an appropriate
block size. Following is an example on AIX:
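As a minimal sketch of rewriting the image with the dd command (ordinary files stand in here for tape devices such as /dev/rmt1 and /dev/rmt0; all names are hypothetical):

```shell
# Stands in for: dd if=/dev/rmt1 of=/tmp/image.file   (dump image from tape)
printf 'sample-image-data' > /tmp/image.file
# Rewrite the image with a fixed 4-KB output block size
# (stands in for: dd if=/tmp/image.file of=/dev/rmt0 obs=4096):
dd if=/tmp/image.file of=/tmp/image.4k obs=4096 2>/dev/null
cmp /tmp/image.file /tmp/image.4k && echo "rewrite ok"
```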
There is a problem with this approach if the image is too large to dump to a
file. One possible solution is to use the dd command to dump the image from
one tape device to another. This will work as long as the image does not span
more than one tape. When using two tape devices, the dd command is:
dd if=/dev/rmt1 of=/dev/rmt0 obs=4096
If using two tape devices is not possible, you may be able to dump the image
to a raw device using the dd command, and then to dump the image from the
raw device to tape. The problem with this approach is that you must
keep track of the number of blocks dumped to the raw device. This
number must be specified when the image is moved back to tape. If the dd
command is used to dump the image from the raw device to tape, the
command dumps the entire contents of the raw device to tape. The dd utility
cannot determine how much of the raw device is used to hold the image.
When using the backup utility, you will need to know the maximum block
size limit for your tape devices. Here are some examples:
Notes:
1. The 7332 does not implement a block size limit. 256 KB is simply a
suggested value. Block size limit is imposed by the parent adapter.
2. While the 3590 does support a 2-MB block size, you could experiment
with lower values (like 256 KB), provided the performance is adequate for
your needs.
Support is now available for database backup to (and database restore from)
local named pipes on UNIX based systems.
Prerequisites:
Both the writer and the reader of the named pipe must be on the same
machine. The pipe must exist and be located on a local file system. Because
the named pipe is treated as a local device, there is no need to specify that the
target is a named pipe.
Procedure:
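A minimal sketch of the procedure, assuming hypothetical path and database names: create the pipe on a local file system, back up to it in one session, and read from it in another (here, with a restore):

```shell
mkfifo /u/dbuser/mypipe                      # create the named pipe
db2 backup db sample to /u/dbuser/mypipe     # writer: back up to the pipe
# In a second session, the reader consumes the image, for example:
db2 restore db sample into mynewdb from /u/dbuser/mypipe
```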
Related tasks:
v “Using Backup” on page 67
Related reference:
v “BACKUP DATABASE” on page 72
v “RESTORE DATABASE” on page 95
BACKUP DATABASE
Scope:
Authorization:
Required connection:
Command syntax:
BACKUP {DATABASE | DB} database-alias
   [USER username [USING password]]
   [TABLESPACE (tablespace-name [ ,tablespace-name ... ])] [ONLINE]
   [INCREMENTAL [DELTA]]
   [{USE {TSM | XBSA} [OPEN num-sessions SESSIONS] |
     TO dir/dev [ ,dir/dev ... ] |
     LOAD library-name [OPEN num-sessions SESSIONS]}]
   [WITH num-buffers BUFFERS] [BUFFER buffer-size] [PARALLELISM n]
   [WITHOUT PROMPTING]
Command parameters:
DATABASE database-alias
Specifies the alias of the database to back up.
USER username
Identifies the user name under which to back up the database.
USING password
The password used to authenticate the user name. If the password is
omitted, the user is prompted to enter it.
TABLESPACE tablespace-name
A list of names used to specify the table spaces to be backed up.
ONLINE
Specifies online backup. The default is offline backup. Online backups
are only available for databases configured with logretain or userexit
enabled.
USE TSM
Specifies that the backup is to use Tivoli Storage Manager (formerly
ADSM) output.
OPEN num-sessions SESSIONS
The number of I/O sessions to be created between DB2 and TSM or
another backup vendor product.
If the tape system does not support the ability to uniquely reference a
backup image, it is recommended that multiple backup copies of the
same database not be kept on the same tape.
LOAD library-name
The name of the shared library (DLL on Windows operating systems)
containing the vendor backup and restore I/O functions to be used. It
can contain the full path. If the full path is not given, it will default to
the path on which the user exit program resides.
WITH num-buffers BUFFERS
The number of buffers to be used. The default is 2. However, when
creating a backup to multiple locations, a larger number of buffers
may be used to improve performance.
BUFFER buffer-size
The size, in 4-KB pages, of the buffer used when building the backup
image. The minimum value for this parameter is 8 pages; the default
value is 1024 pages. If a buffer size of zero is specified, the value of
the database manager configuration parameter backbufsz will be used
as the buffer allocation size.
If using tape with variable block size, reduce the buffer size to within
the range that the tape device supports. Otherwise, the backup
operation may succeed, but the resulting image may not be
recoverable.
When using tape devices on SCO UnixWare 7, specify a buffer size of
16.
With most versions of Linux, using DB2’s default buffer size for
backup operations to a SCSI tape device results in error SQL2025N,
reason code 75. To prevent the overflow of Linux internal SCSI
buffers, use this formula:
bufferpages <= ST_MAX_BUFFERS * ST_BUFFER_BLOCKS / 4
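As a worked sketch of this formula, assuming hypothetical kernel values ST_MAX_BUFFERS = 4 and ST_BUFFER_BLOCKS = 32 (the real values are defined in the Linux kernel source under drivers/scsi; check your kernel):

```shell
ST_MAX_BUFFERS=4       # assumed value; check your kernel source
ST_BUFFER_BLOCKS=32    # assumed value; check your kernel source
echo $(( ST_MAX_BUFFERS * ST_BUFFER_BLOCKS / 4 ))   # largest safe bufferpages value; prints 32
```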
Examples:
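The partitioned-database commands can be sketched as follows (a hedged reconstruction consistent with the discussion: the database alias and target path are illustrative; the catalog partition is backed up first, then every remaining partition in turn):

```shell
# First command: back up the catalog partition (partition 0) only.
db2_all '<<+0< db2 backup db sample to /dev3/backup'
# Second command: issue the same backup on every partition except 0, in turn.
db2_all '<<-0< db2 backup db sample to /dev3/backup'
```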
In the second command, the db2_all utility will issue the same backup
command to each database partition in turn (except partition 0). All four
database partition backup images will be stored in the /dev3/backup
directory.
Related reference:
v “RESTORE DATABASE” on page 95
v “ROLLFORWARD DATABASE” on page 134
Scope:
Authorization:
Required connection:
API include file: db2ApiDf.h
C API syntax:
/* File: db2ApiDf.h */
/* API: db2Backup */
/* ... */
SQL_API_RC SQL_API_FN
db2Backup (
db2Uint32 versionNumber,
void *pDB2BackupStruct,
struct sqlca *pSqlca);
/* File: db2ApiDf.h */
/* API: db2Backup */
/* ... */
SQL_API_RC SQL_API_FN
db2gBackup (
db2Uint32 versionNumber,
void *pDB2gBackupStruct,
struct sqlca *pSqlca);
API parameters:
versionNumber
Input. Specifies the version and release level of the structure passed as
the second parameter pDB2BackupStruct.
pDB2BackupStruct
Input. A pointer to the db2BackupStruct structure.
pSqlca
Output. A pointer to the sqlca structure.
piDBAlias
Input. A string containing the database alias (as cataloged in the
system database directory) of the database to back up.
iDBAliasLen
Input. A 4-byte unsigned integer representing the length in bytes of
the database alias.
oApplicationId
Output. The API will return a string identifying the agent servicing
the application. Can be used to obtain information about the progress
of the backup operation using the database monitor.
poApplicationId
Output. Supply a buffer of length SQLU_APPLID_LEN+1 (defined in
sqlutil). The API will return a string identifying the agent servicing
the application. Can be used to obtain information about the progress
of the backup operation using the database monitor.
iApplicationIdLen
Input. A 4-byte unsigned integer representing the length in bytes of
the poApplicationId buffer. Should be equal to SQLU_APPLID_LEN+1
(defined in sqlutil).
oTimestamp
Output. The API will return the time stamp of the backup image.
poTimestamp
Output. Supply a buffer of length SQLU_TIME_STAMP_LEN+1
(defined in sqlutil). The API will return the time stamp of the
backup image.
iTimestampLen
Input. A 4-byte unsigned integer representing the length in bytes of
the poTimestamp buffer. Should be equal to
SQLU_TIME_STAMP_LEN+1 (defined in sqlutil).
piTablespaceList
Input. List of table spaces to be backed up. Required for table space
level backup only. Must be NULL for a database level backup. See
structure DB2TablespaceStruct.
piMediaList
Input. This structure allows the caller to specify the destination for the
backup operation. The information provided depends on the value of
the locationType field. The valid values for locationType (defined in
sqlutil.h) are:
SQLU_LOCAL_MEDIA
Local devices (a combination of tapes, disks, or diskettes).
SQLU_TSM_MEDIA
TSM. If the locations pointer is set to NULL, the TSM shared
library provided with DB2 is used. If a different version of the
TSM shared library is desired, use SQLU_OTHER_MEDIA
and provide the shared library name.
SQLU_OTHER_MEDIA
Vendor product. Provide the shared library name in the
locations field.
SQLU_USER_EXIT
User exit. No additional input is required (only available
when server is on OS/2).
oBackupSize
Output. Size of the backup image (in MB).
iCallerAction
Input. Specifies action to be taken. Valid values (defined in
db2ApiDf.h) are:
DB2BACKUP_BACKUP
Start the backup.
DB2BACKUP_NOINTERRUPT
Start the backup. Specifies that the backup will run
unattended, and that scenarios which normally require user
intervention will either be attempted without first returning to
the caller, or will generate an error. Use this caller action, for
example, if it is known that all of the media required for the
backup have been mounted, and utility prompts are not
desired.
DB2BACKUP_CONTINUE
Continue the backup after the user has performed some action
requested by the utility (mount a new tape, for example).
DB2BACKUP_TERMINATE
Terminate the backup after the user has failed to perform
some action requested by the utility.
DB2BACKUP_DEVICE_TERMINATE
Remove a particular device from the list of devices used by
backup. When a particular medium is full, backup will return
a warning to the caller (while continuing to process using the
remaining devices). Call backup again with this caller action
to remove the device which generated the warning from the
list of devices being used.
DB2BACKUP_PARM_CHK
Used to validate parameters without performing a backup.
This option does not terminate the database connection after
the call returns. After successful return of this call, it is
expected that the user will issue a call with
DB2BACKUP_CONTINUE to proceed with the action.
DB2BACKUP_PARM_CHK_ONLY
Used to validate parameters without performing a backup.
Before this call returns, the database connection established by
this call is terminated, and no subsequent call is required.
iBufferSize
Input. Backup buffer size in 4KB allocation units (pages). Minimum is
8 units. The default is 1024 units.
iNumBuffers
Input. Specifies number of backup buffers to be used. Minimum is 2.
Maximum is limited by memory. Can specify 0 for the default value
of 2.
iParallelism
Input. Degree of parallelism (number of buffer manipulators).
Minimum is 1. Maximum is 1024. The default is 1.
iOptions
Input. A bitmap of backup properties. The options are to be combined
using the bitwise OR operator to produce a value for iOptions. Valid
values (defined in db2ApiDf.h) are:
DB2BACKUP_OFFLINE
Offline gives an exclusive connection to the database.
DB2BACKUP_ONLINE
Online allows database access by other applications while the
backup operation occurs.
locations
A pointer to the list of media locations. For C, the list is a list of
null-terminated strings. In the generic case, it is a list of db2Char
structures.
numLocations
The number of entries in the locations parameter.
locationType
A character indicating the media type. Valid values (defined in
sqlutil.h) are:
SQLU_LOCAL_MEDIA
Local devices (tapes, disks, diskettes, or named pipes).
SQLU_TSM_MEDIA
Tivoli Storage Manager.
SQLU_OTHER_MEDIA
Vendor library.
SQLU_USER_EXIT
User exit (only available when the server is on OS/2).
pioData
A pointer to the character data buffer.
iLength
Input. The size of the pioData buffer.
oLength
Output. Reserved for future use.
Related samples:
v “dbrecov.sqc -- How to recover a database (C)”
v “dbrecov.sqC -- How to recover a database (C++)”
Related tasks:
v “Using Backup” on page 67
Related concepts:
v “Backup Overview” on page 63
Restore Overview
The simplest form of the DB2® RESTORE DATABASE command requires only
that you specify the alias name of the database that you want to restore. For
example:
db2 restore db sample
In this example, because the SAMPLE database exists, the following message
is returned:
SQL2539W Warning! Restoring to an existing database that is the same as
the backup image database. The database files will be deleted.
Do you want to continue ? (y/n)
If you specify y, and a backup image for the SAMPLE database exists, the
restore operation should complete successfully.
A table space is not usable until the restore operation (followed by rollforward
recovery) completes successfully.
If you have tables that span more than one table space, you should back up
and restore the set of table spaces together.
When doing a partial or subset restore operation, you can use either a table
space-level backup image, or a full database-level backup image and choose
one or more table spaces from that image. All the log files associated with
these table spaces from the time that the backup image was created must
exist.
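For example, a hedged sketch of a table space-level restore followed by the required rollforward recovery (the table space name is hypothetical):

```shell
db2 restore db sample tablespace (userspace1) online
db2 rollforward db sample to end of logs and stop tablespace (userspace1) online
```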
Optimizing Restore Performance
To reduce the amount of time required to complete a restore operation:
v Increase the restore buffer size.
The restore buffer size must be a positive integer multiple of the backup
buffer size specified during the backup operation. If an incorrect buffer size
is specified, the buffers allocated will be the smallest acceptable size.
v Increase the number of buffers.
The value you specify must be a multiple of the number of pages that you
specified for the backup buffer. The minimum number of pages is 16.
v Increase the value of the PARALLELISM option.
This will increase the number of buffer manipulators (BM) that will be used
to write to the database during the restore operation. The default value is 1.
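Applying these suggestions together, a sketch might look like the following (paths, time stamp, and values are hypothetical; the 2048-page restore buffer assumes the backup was taken with a 1024-page buffer):

```shell
db2 restore db sample from /db2backup/dir1 taken at 20010320122644 with 4 buffers buffer 2048 parallelism 2
```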
Prerequisites:
Restrictions:
Procedure:
The restore utility can be invoked through the command line processor (CLP),
the Restore Database notebook or wizard in the Control Center, or the
db2Restore application programming interface (API).
Detailed information is provided through the online help facility within the
Control Center.
Related concepts:
v “Administrative APIs in Embedded SQL or DB2 CLI Programs” in the
Application Development Guide: Programming Client Applications
v “Introducing the plug-in architecture for the Control Center” in the
Administration Guide: Implementation
Related reference:
v “db2Restore - Restore database” on page 104
The restore utility will create the TEST database and populate it.
If the database TEST does exist and the database history is not empty, you
must drop the database before the automatic incremental restore operation as
follows:
drop db test
DB20000I The DROP DATABASE command completed successfully.
If you do not want to drop the database, you can issue the PRUNE HISTORY
command using a timestamp far into the future and the WITH FORCE
OPTION parameter before issuing the RESTORE DATABASE command:
connect to test
Database Connection Information
prune history 9999 with force option
connect reset
In this case, the RESTORE DATABASE command will act in the same
manner as if the database TEST did not exist.
If the database TEST does exist and the database history is empty, you do not
have to drop the database TEST before the automatic incremental restore
operation:
restore db prod incremental automatic taken at ts2 into test without
prompting
SQL2540W Restore is successful, however a warning "2539" was
encountered during Database Restore while processing in No
Interrupt mode.
You can continue taking incremental or delta backups of the test database
without first taking a full database backup. However, if you ever need to
restore one of the incremental or delta images you will have to perform a
manual incremental restore. This is because automatic incremental restore
operations require that each of the backup images restored during an
automatic incremental restore be created from the same database alias.
If you make a full database backup of the test database after you complete the
restore operation using the production backup image, you can take
incremental or delta backups and can restore them using either manual or
automatic mode.
Related concepts:
v “Incremental Backup and Recovery” on page 28
Related reference:
v “BACKUP DATABASE” on page 72
v “RESTORE DATABASE” on page 95
v “LIST HISTORY” on page 228
During a database backup operation, a record is kept of all the table space
containers associated with the table spaces that are being backed up. During a
restore operation, all containers listed in the backup image are checked to
determine if they exist and if they are accessible. If one or more of these
containers is inaccessible because of media failure (or for any other reason),
the restore operation will fail. A successful restore operation in this case
requires redirection to different containers. DB2® supports adding, changing,
or removing table space containers.
Related reference:
v “RESTORE DATABASE” on page 95
v “Restore Sessions - CLP Examples” on page 115
Related samples:
v “dbrecov.out -- HOW TO RECOVER A DATABASE (C)”
v “dbrecov.sqc -- How to recover a database (C)”
v “dbrecov.out -- HOW TO RECOVER A DATABASE (C++)”
v “dbrecov.sqC -- How to recover a database (C++)”
You can restore a full database backup image to an existing database. The
backup image may differ from the existing database in its alias name, its
database name, or its database seed.
A database seed is a unique identifier for a database that does not change
during the life of the database. The seed is assigned by the database manager
when the database is created. DB2® always uses the seed from the backup
image.
You can create a new database and then restore a full database backup image
to it. If you do not create a new database, the restore utility will create one.
RESTORE DATABASE
Rebuilds a damaged or corrupted database that has been backed up using the
DB2 backup utility. The restored database is in the same state it was in when
the backup copy was made. This utility can also restore to a database with a
name different from the database name in the backup image (in addition to
being able to restore to a new database).
This utility can also be used to restore backup images that were produced by
the previous two versions of DB2. If a migration is required, it will be
invoked automatically at the end of the restore operation.
If, at the time of the backup operation, the database was enabled for
rollforward recovery, the database can be brought to the state it was in prior
to the occurrence of the damage or corruption by invoking the rollforward
utility after successful completion of a restore operation.
This utility can also restore from a table space level backup.
Scope:
Authorization:
Required connection:
Command syntax:
RESTORE {DATABASE | DB} source-database-alias
   {restore-options | CONTINUE | ABORT}
restore-options:
   [USER username [USING password]]
   [{TABLESPACE [ONLINE] |
     TABLESPACE (tablespace-name [ ,tablespace-name ... ]) [ONLINE] |
     HISTORY FILE [ONLINE]}]
   [INCREMENTAL [{AUTO | AUTOMATIC | ABORT}]]
   [{USE {TSM | XBSA} [OPEN num-sessions SESSIONS] |
     FROM directory/device [ ,directory/device ... ] |
     LOAD shared-library [OPEN num-sessions SESSIONS]}]
   [TAKEN AT date-time] [TO target-directory]
   [INTO target-database-alias] [NEWLOGPATH directory]
   [WITH num-buffers BUFFERS] [BUFFER buffer-size] [DLREPORT filename]
   [REPLACE EXISTING] [REDIRECT] [PARALLELISM n]
   [WITHOUT ROLLING FORWARD] [WITHOUT DATALINK] [WITHOUT PROMPTING]
Command parameters:
DATABASE source-database-alias
Alias of the source database from which the backup was taken.
CONTINUE
Specifies that the containers have been redefined, and that the final
step in a redirected restore operation should be performed.
ABORT
This parameter:
v Stops a redirected restore operation. This is useful when an error
has occurred that requires one or more steps to be repeated. After
RESTORE DATABASE with the ABORT option has been issued,
each step of a redirected restore operation must be repeated,
including RESTORE DATABASE with the REDIRECT option.
v Terminates an incremental restore operation before completion.
USER username
Identifies the user name under which the database is to be restored.
USING password
The password used to authenticate the user name. If the password is
omitted, the user is prompted to enter it.
TABLESPACE tablespace-name
A list of names used to specify the table spaces that are to be restored.
ONLINE
This keyword, applicable only when performing a table space-level
restore operation, is specified to allow a backup image to be restored
online. This means that other agents can connect to the database while
the backup image is being restored, and that the data in other table
spaces will be available while the specified table spaces are being
restored.
HISTORY FILE
This keyword is specified to restore only the history file from the
backup image.
INCREMENTAL
Without additional parameters, INCREMENTAL specifies a manual
cumulative restore operation. During a manual restore, the user must
issue each restore command manually for each image involved in the
restore, in the following order: last, first, second, third, and so on,
up to and including the last image.
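As a sketch of that order, assume three incremental images with hypothetical time stamps ts1 (the first, full image), ts2, and ts3 (the last image). The last image is restored first, then the chain is replayed from the first image onward, ending with the last image again:

```shell
db2 restore db sample incremental taken at ts3   # last image first
db2 restore db sample incremental taken at ts1   # then the first (full) image
db2 restore db sample incremental taken at ts2
db2 restore db sample incremental taken at ts3   # up to and including the last image
```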
INCREMENTAL AUTOMATIC/AUTO
Specifies an automatic cumulative restore operation.
INCREMENTAL ABORT
Specifies that an in-progress manual cumulative restore operation is
to be aborted.
USE TSM
Specifies that the database is to be restored from TSM-managed
output.
OPEN num-sessions SESSIONS
Specifies the number of I/O sessions that are to be used with TSM or
the vendor product.
USE XBSA
Specifies that the XBSA interface is to be used. Backup Services APIs
(XBSA) are an open application programming interface for
applications or facilities needing data storage management for backup
or archiving purposes. Legato NetWorker is a storage manager that
currently supports the XBSA interface.
FROM directory/device
The directory or device on which the backup images reside. If USE
TSM, FROM, and LOAD are omitted, the default value is the current
directory.
On Windows operating systems, the specified directory must not be a
DB2-generated directory. For example, given the following commands:
db2 backup database sample to c:\backup
db2 restore database sample from c:\backup
If several items are specified, and the last item is a tape device, the
user is prompted for another tape. Valid response options are:
c Continue. Continue using the device that generated the
warning message (for example, continue when a new tape has
been mounted).
d Device terminate. Stop using only the device that generated
the warning message (for example, terminate when there are
no more tapes).
t Terminate. Abort the restore operation after the user has failed
to perform some action requested by the utility.
LOAD shared-library
The name of the shared library (DLL on Windows operating systems)
containing the vendor backup and restore I/O functions to be used.
The name can contain a full path. If the full path is not given, the
value defaults to the path on which the user exit program resides.
TAKEN AT date-time
The time stamp of the database backup image. The time stamp is
displayed after successful completion of a backup operation, and is
part of the path name for the backup image. It is specified in the form
yyyymmddhhmmss. A partial time stamp can also be specified. For
example, if two different backup images with time stamps
19971001010101 and 19971002010101 exist, specifying 19971002 causes
the image with time stamp 19971002010101 to be used. If a value for
this parameter is not specified, there must be only one backup image
on the source media.
TO target-directory
The target database directory. This parameter is ignored if the utility is
restoring to an existing database. The drive and directory that you
specify must be local.
BUFFER buffer-size
The size, in pages, of the buffer used for the restore operation. The
minimum value for this parameter is 8 pages; the default value is
1024 pages. If a buffer size of zero is specified, the value of the
database manager configuration parameter restbufsz will be used as
the buffer allocation size.
The restore buffer size must be a positive integer multiple of the
backup buffer size specified during the backup operation. If an
incorrect buffer size is specified, the buffers are allocated to be of the
smallest acceptable size.
When using tape devices on SCO UnixWare 7, specify a buffer size of
16 pages.
DLREPORT filename
Reports the files that become unlinked, as a result of a fast reconcile,
during a restore operation. The file name, if specified, must be an
absolute path. This option should be used only if the table being
restored has a DATALINK column type and linked files.
REPLACE EXISTING
If a database with the same alias as the target database alias already
exists, this parameter specifies that the restore utility is to replace the
existing database with the restored database. This is useful for scripts
that invoke the restore utility, because the command line processor
will not prompt the user to verify deletion of an existing database. If
the WITHOUT PROMPTING parameter is specified, it is not
necessary to specify REPLACE EXISTING, but in this case, the
operation will fail if events occur that normally require user
intervention.
REDIRECT
Specifies a redirected restore operation. To complete a redirected
restore operation, this command should be followed by one or more
SET TABLESPACE CONTAINERS commands, and then by a
RESTORE DATABASE command with the CONTINUE option.
WITHOUT DATALINK
Specifies that any tables with DATALINK columns are to be put in
DataLink_Reconcile_Pending (DRP) state, and that no reconciliation of
linked files is to be performed.
PARALLELISM n
Specifies the number of buffer manipulators that are to be spawned
during the restore operation. The default value is 1.
WITHOUT PROMPTING
Specifies that the restore operation is to run unattended. Actions that
normally require user intervention will return an error message. When
using a removable media device, such as tape or diskette, the user is
prompted when the device ends, even if this option is specified.
Examples:
To restore the catalog partition first, then all other database partitions of the
WSDB database from the /dev3/backup directory, issue the following
commands from one of the database partitions:
db2_all ’<<+0< db2 RESTORE DATABASE wsdb FROM /dev3/backup
TAKEN AT 20020331234149
INTO wsdb REPLACE EXISTING’
db2_all ’<<+1< db2 RESTORE DATABASE wsdb FROM /dev3/backup
TAKEN AT 20020331234427
INTO wsdb REPLACE EXISTING’
db2_all ’<<+2< db2 RESTORE DATABASE wsdb FROM /dev3/backup
TAKEN AT 20020331234828
INTO wsdb REPLACE EXISTING’
db2_all ’<<+3< db2 RESTORE DATABASE wsdb FROM /dev3/backup
TAKEN AT 20020331235235
INTO wsdb REPLACE EXISTING’
The db2_all utility issues the restore command to each specified database
partition.
1. Issue a RESTORE DATABASE command with the REDIRECT option:
db2 restore db mydb redirect
2. Issue one or more SET TABLESPACE CONTAINERS commands to redefine
the containers. To verify that the containers of the restored database are
the ones specified in this step, issue the LIST TABLESPACE
CONTAINERS command.
3. After successful completion of steps 1 and 2, issue:
db2 restore db mydb continue
For a manual database restore of the images created on Friday morning, issue
the following commands, where each parenthesized day name stands for the
time stamp of the backup image taken on that day:
restore db mydb incremental taken at (Fri)
restore db mydb incremental taken at (Sun)
restore db mydb incremental taken at (Wed)
restore db mydb incremental taken at (Thu)
restore db mydb incremental taken at (Fri)
Usage notes:
Any RESTORE DATABASE command of the form db2 restore db <name> will
perform a full database restore, regardless of whether the image being
restored is a database image or a table space image. Any RESTORE
DATABASE command of the form db2 restore db <name> tablespace will
perform a table space restore of the table spaces found in the image. Any
RESTORE DATABASE command in which a list of table spaces is provided
will perform a restore of whatever table spaces are explicitly listed.
Related reference:
v “BACKUP DATABASE” on page 72
v “ROLLFORWARD DATABASE” on page 134
v “db2move - Database Movement Tool” in the Command Reference
This utility can also be used to restore DB2 databases created in the two
previous releases.
This utility can also restore from a table space level backup.
Scope:
This API only affects the database partition from which it is called.
Authorization:
Required connection:
API include file:
db2ApiDf.h
C API syntax:
/* File: db2ApiDf.h */
/* API: db2Restore */
/* ... */
SQL_API_RC SQL_API_FN
db2Restore (
db2Uint32 versionNumber,
void *pDB2RestoreStruct,
struct sqlca *pSqlca);
/* ... */
API parameters:
versionNumber
Input. Specifies the version and release level of the structure passed as
the second parameter, pDB2RestoreStruct.
pDB2RestoreStruct
Input. A pointer to the db2RestoreStruct structure.
pSqlca
Output. A pointer to the sqlca structure.
piSourceDBAlias
Input. A string containing the database alias of the source database
backup image.
iSourceDBAliasLen
Input. A 4-byte unsigned integer representing the length in bytes of
the source database alias.
piTargetDBAlias
Input. A string containing the target database alias. If this parameter is
null, the piSourceDBAlias will be used.
iTargetDBAliasLen
Input. A 4-byte unsigned integer representing the length in bytes of
the target database alias.
poApplicationId
Output. Supply a buffer of length SQLU_APPLID_LEN+1 (defined in
sqlutil). The API will return a string identifying the agent servicing
the application. Can be used to obtain information about the progress
of the restore operation using the database monitor.
iApplicationIdLen
Input. A 4-byte unsigned integer representing the length in bytes of
the poApplicationId buffer. Should be equal to SQLU_APPLID_LEN+1
(defined in sqlutil).
piTimestamp
Input. A string representing the timestamp of the backup image. This
field is optional if there is only one backup image in the source
specified.
iTimestampLen
Input. A 4-byte unsigned integer representing the length in bytes of
the piTimestamp buffer.
piTargetDBPath
Input. A string containing the relative or fully qualified name of the
target database directory on the server. Used if a new database is to
be created for the restored backup; otherwise not used.
piReportFile
Input. The file name, if specified, must be fully qualified. The
datalinks files that become unlinked during restore (as a result of a
fast reconcile) will be reported.
iReportFileLen
Input. A 4-byte unsigned integer representing the length in bytes of
the piReportFile buffer.
piTablespaceList
Input. List of table spaces to be restored. Used when restoring a
subset of table spaces from a database or table space backup image.
See the db2TablespaceStruct structure. The following restrictions
apply:
v The database must be recoverable; that is, log retain or user exits
must be enabled.
v The database being restored to must be the same database that was
used to create the backup image. That is, table spaces can not be
added to a database through the table space restore function.
v The rollforward utility will ensure that table spaces restored in a
partitioned database environment are synchronized with any other
database partition containing the same table spaces.
If a table space restore operation is requested and the piTablespaceList is
NULL, the restore utility will attempt to restore all of the table spaces in the
backup image.
When restoring a table space that has been renamed since it was
backed up, the new table space name must be used in the restore
command. If the old table space name is used, it will not be found.
piMediaList
Input. Source media for the backup image. The information provided
depends on the value of the locationType field. The valid values for
locationType (defined in sqlutil) are:
SQLU_LOCAL_MEDIA
Local devices (a combination of tapes, disks, or diskettes).
SQLU_TSM_MEDIA
TSM. If the locations pointer is set to NULL, the TSM shared
library provided with DB2 is used. If a different version of the
TSM shared library is desired, use SQLU_OTHER_MEDIA
and provide the shared library name.
SQLU_OTHER_MEDIA
Vendor product. Provide the shared library name in the
locations field.
SQLU_USER_EXIT
User exit. No additional input is required (only available
when the server is on OS/2).
piUsername
Input. A string containing the user name to be used when attempting
a connection. Can be NULL.
iUsernameLen
Input. A 4-byte unsigned integer representing the length in bytes of
piUsername. Set to zero if no user name is provided.
piPassword
Input. A string containing the password to be used with the user
name. Can be NULL.
iPasswordLen
Input. A 4-byte unsigned integer representing the length in bytes of
piPassword. Set to zero if no password is provided.
piNewLogPath
Input. A string representing the path to be used for logging after the
restore has completed. If this field is null the default log path will be
used.
iNewLogPathLen
Input. A 4-byte unsigned integer representing the length in bytes of
piNewLogPath.
piVendorOptions
Input. Used to pass information from the application to the vendor
functions. This data structure must be flat; that is, no level of
indirection is supported. Note that byte-reversal is not done, and the
code page is not checked for this data.
iVendorOptionsSize
Input. The length of the piVendorOptions, which cannot exceed 65535
bytes.
iParallelism
Input. Degree of parallelism (number of buffer manipulators).
Minimum is 1. Maximum is 1024. The default is 1.
iBufferSize
Input. Restore buffer size in 4KB allocation units (pages). Minimum is
8 units. The default is 1024 units. The size entered for a restore must
be equal to or an integer multiple of the buffer size used to produce
the backup image.
iNumBuffers
Input. Specifies number of restore buffers to be used.
iCallerAction
Input. Specifies action to be taken. Valid values (defined in db2ApiDf)
are:
DB2RESTORE_RESTORE
Start the restore operation.
DB2RESTORE_NOINTERRUPT
Start the restore. Specifies that the restore will run unattended,
and that scenarios which normally require user intervention
will either be attempted without first returning to the caller, or
will generate an error. Use this caller action, for example, if it
is known that all of the media required for the restore have
been mounted, and utility prompts are not desired.
DB2RESTORE_CONTINUE
Continue the restore after the user has performed some action
requested by the utility (mount a new tape, for example).
DB2RESTORE_TERMINATE
Terminate the restore after the user has failed to perform some
action requested by the utility.
DB2RESTORE_DEVICE_TERMINATE
Remove a particular device from the list of devices used by
restore. When a particular device has exhausted its input,
restore will return a warning to the caller. Call restore again
with this caller action to remove the device which generated
the warning from the list of devices being used.
DB2RESTORE_PARM_CHK
Used to validate parameters without performing a restore.
This option does not terminate the database connection after
the call returns. After successful return of this call, it is
expected that the user will issue a call with
DB2RESTORE_CONTINUE to proceed with the action.
DB2RESTORE_PARM_CHK_ONLY
Used to validate parameters without performing a restore.
Before this call returns, the database connection established by
this call is terminated, and no subsequent call is required.
DB2RESTORE_TERMINATE_INCRE
Terminate an incremental restore operation before completion.
DB2RESTORE_RESTORE_STORDEF
Initial call. Table space container redefinition requested.
DB2RESTORE_STORDEF_NOINTERRUPT
Initial call. The restore will run uninterrupted. Table space
container redefinition requested.
iOptions
Input. A bitmap of restore properties. The options are to be combined
using the bitwise OR operator to produce a value for iOptions. Valid
values (defined in db2ApiDf) are:
DB2RESTORE_OFFLINE
Perform an offline restore operation.
DB2RESTORE_ONLINE
Perform an online restore operation.
DB2RESTORE_DB
Restore all table spaces in the database. This must be run
offline.
DB2RESTORE_TABLESPACE
Restore only the table spaces listed in the piTablespaceList
parameter from the backup image. This can be online or
offline.
DB2RESTORE_HISTORY
Restore only the history file.
DB2RESTORE_INCREMENTAL
Perform a manual cumulative restore operation.
DB2RESTORE_AUTOMATIC
Perform an automatic cumulative (incremental) restore
operation. Must be specified with
DB2RESTORE_INCREMENTAL.
DB2RESTORE_DATALINK
Perform reconciliation operations. Tables with a defined
DATALINK column must have RECOVERY YES option
specified.
DB2RESTORE_NODATALINK
Do not perform reconciliation operations. Tables with
DATALINK columns are placed into
DataLink_Reconcile_Pending (DRP) state. Tables with a
defined DATALINK column must have the RECOVERY YES
option specified.
DB2RESTORE_ROLLFWD
Place the database in rollforward pending state after it has
been successfully restored.
DB2RESTORE_NOROLLFWD
Do not place the database in rollforward pending state after it
has been successfully restored. This cannot be specified for
backups taken online or for table space level restores. If,
following a successful restore, the database is in roll-forward
pending state, db2Rollforward - Rollforward Database must
be executed before the database can be used.
tablespaces
A pointer to the list of table spaces to be restored. For C, the list is
null-terminated strings. In the generic case, it is a list of db2Char
structures.
numTablespaces
Number of entries in the tablespaces parameter.
locations
A pointer to the list of media locations. For C, the list is
null-terminated strings. In the generic case, it is a list of db2Char
structures.
numLocations
The number of entries in the locations parameter.
locationType
A character indicating the media type. Valid values (defined in
sqlutil) are:
SQLU_LOCAL_MEDIA
Local devices (tapes, disks, diskettes, or named pipes).
SQLU_TSM_MEDIA
Tivoli Storage Manager.
SQLU_OTHER_MEDIA
Vendor library.
SQLU_USER_EXIT
User exit (only available when the server is on OS/2).
pioData
A pointer to the character data buffer.
iLength
Input. The size of the pioData buffer.
oLength
Output. Reserved for future use.
Usage notes:
For offline restore, this utility connects to the database in exclusive mode. The
utility fails if any application, including the calling application, is already
connected to the database that is being restored. In addition, the request will
fail if the restore utility is being used to perform the restore, and any
application, including the calling application, is already connected to any
database on the same workstation. If the connect is successful, the API locks
out other applications until the restore is completed.
The current database configuration file will not be replaced by the backup
copy unless it is unusable. If the file is replaced, a warning message is
returned.
The database or table space must have been backed up using db2Backup -
Backup Database.
If the restore type specifies that the history file on the backup is to be
restored, it will be restored over the existing history file for the database,
effectively erasing any changes made to the history file after the backup that
is being restored. If this is undesirable, restore the history file to a new or test
database so that its contents can be viewed without destroying any updates
that have taken place.
If, at the time of the backup operation, the database was enabled for roll
forward recovery, the database can be brought to the state it was in prior to
the occurrence of the damage or corruption by issuing db2Rollforward after
successful execution of db2Restore. If the database is recoverable, it will
default to roll forward pending state after the completion of the restore.
If the database backup image is taken offline, and the caller does not want to
roll forward the database after the restore, the DB2RESTORE_NOROLLFWD
option can be used for the restore. This results in the database being useable
immediately after the restore. If the backup image is taken online, the caller
must roll forward through the corresponding log records at the completion of
the restore.
Related samples:
v “dbrecov.sqc -- How to recover a database (C)”
v “dbrecov.sqC -- How to recover a database (C++)”
Related reference:
v “RESTORE DATABASE” on page 95
v “LIST TABLESPACE CONTAINERS” in the Command Reference
v “SET TABLESPACE CONTAINERS” in the Command Reference
Rollforward Overview
Node number = 0
Rollforward status = not pending
Next log file to be read =
Log files processed = -
Last committed transaction = 2001-03-11-02.39.48.000000
A table space rollforward operation can run offline. The database is not
available for use until the rollforward operation completes successfully;
completion occurs when the end of the logs is reached, or when the STOP
option was specified when the utility was invoked.
Related concepts:
v “Using the Load Copy Location File” on page 130
v “Understanding Recovery Logs” on page 34
Related reference:
v “ROLLFORWARD DATABASE” on page 134
v “Configuration Parameters for Database Logging” on page 39
Using Rollforward
Prerequisites:
Restrictions:
Procedure:
Detailed information is provided through the online help facility within the
Control Center.
Related concepts:
v “Administrative APIs in Embedded SQL or DB2 CLI Programs” in the
Application Development Guide: Programming Client Applications
v “Introducing the plug-in architecture for the Control Center” in the
Administration Guide: Implementation
Related reference:
v “db2Rollforward - Rollforward Database” on page 145
If the database is enabled for forward recovery, you have the option of
backing up, restoring, and rolling forward table spaces instead of the entire
database. You may want to implement a recovery strategy for individual table
spaces because this can save time: it takes less time to recover a portion of the
database than it does to recover the entire database. For example, if a disk is
bad, and it contains only one table space, that table space can be restored and
rolled forward without having to recover the entire database, and without
impacting user access to the rest of the database, unless the damaged table
space contains the system catalog tables; in this situation, you cannot connect
to the database. (The system catalog table space can be restored independently
if a table space-level backup image containing the system catalog table space
is used.)
When a table space is rolled forward, DB2® will skip files which are known
not to contain any log records affecting that table space. If you want all of the
log files to be processed, set the DB2_COLLECT_TS_REC_INFO registry
variable to false.
The table space change history file (DB2TSCHG.HIS), located in the database
directory, keeps track of which logs should be processed for each table space.
You can view the contents of this file using the db2logsForRfwd utility, and
delete entries from it using the PRUNE HISTORY command. During a
database restore operation, DB2TSCHG.HIS is restored from the backup image
and then brought up to date during the database rollforward operation. If no
information is available for a log file, it is treated as though it is required for
the recovery of every table space.
Since information for each log file is flushed to disk after the log becomes
inactive, this information can be lost as a result of a crash. To compensate for
this, if a recovery operation begins in the middle of a log file, the entire log is
treated as though it contains modifications to every table space in the system.
After this, the active logs will be processed and the information for them will
be rebuilt. If information for older or archived log files is lost in a crash
situation and no information for them exists in the data file, they will be
treated as though they contain modifications for every table space during the
table space recovery operation.
Before rolling a table space forward, invoke the LIST TABLESPACES SHOW
DETAIL command. This command returns the minimum recovery time, which is
the earliest point in time to which the table space can be rolled forward. The
minimum recovery time is updated when data definition language (DDL)
statements are run against the table space, or against tables in the table space.
The table space must be rolled forward to at least the minimum recovery
time, so that it becomes synchronized with the information in the system
catalog tables. If recovering more than one table space, the table spaces must
be rolled forward to at least the highest minimum recovery time of all the
table spaces being recovered. In a partitioned database environment, issue the
If you are rolling table spaces forward to a point in time, and a table is
contained in multiple table spaces, all of these table spaces must be rolled
forward simultaneously. If, for example, the table data is contained in one
table space, and the index for the table is contained in another table space,
you must roll both table spaces forward simultaneously to the same point in
time.
If the data and the long objects in a table are in separate table spaces, and the
long object data has been reorganized, the table spaces for both the data and
the long objects must be restored and rolled forward together. You should
take a backup of the affected table spaces after the table is reorganized.
If you want to roll a table space forward to a point in time, and a table in the
table space is either:
v An underlying table for a materialized query or staging table that is in
another table space
v A materialized query or staging table for a table in another table space
You should roll both table spaces forward to the same point in time. If you do
not, the materialized query or staging table is placed in check pending state at
the end of the rollforward operation. The materialized query table will need
to be fully refreshed, and the staging table will be marked as incomplete.
If you want to roll a table space forward to a point in time, and a table in the
table space participates in a referential integrity relationship with another
table that is contained in another table space, you should roll both table
spaces forward simultaneously to the same point in time. If you do not, the
child table in the referential integrity relationship will be placed in check
pending state at the end of the rollforward operation. When the child table is
later checked for constraint violations, a check on the entire table is required.
If any of the following tables exist, they will also be placed in check pending
state with the child table:
v Any descendent materialized query tables for the child table
v Any descendent staging tables for the child table
v Any descendent foreign key tables of the child table
These tables will require full processing to bring them out of the check
pending state. If you roll both table spaces forward simultaneously, the
constraint will remain active at the end of the point-in-time rollforward
operation.
You can issue the QUIESCE TABLESPACES FOR TABLE command to create a
transaction-consistent point in time for rolling table spaces forward. The
quiesce request (in share, intent to update, or exclusive mode) waits (through
locking) for all running transactions against those table spaces to complete,
and blocks new requests. When the quiesce request is granted, the table
spaces are in a consistent state. To determine a suitable time to stop the
rollforward operation, you can look in the recovery history file to find quiesce
points, and check whether they occur after the minimum recovery time.
In the preceding example, the database is backed up at time T1. Then, at time
T3, table space TABSP1 is rolled forward to a specific point in time (T2). The
table space is backed up after time T3. Because the table space is in backup
pending state, this backup operation is mandatory. The time stamp of the
table space backup image is after time T3, but the table space is at time T2.
Log records from between T2 and T3 are not applied to TABSP1. At time T4,
the database is restored, using the backup image created at T1, and rolled
forward to the end of the logs. Table space TABSP1 is put in restore pending
state at time T3, because the database manager assumes that operations were
performed on TABSP1 between T3 and T4 without the log changes between
T2 and T3 having been applied to the table space. If these log changes were in
fact applied as part of the rollforward operation against the database, this
assumption would be incorrect. The table space-level backup that must be
taken after the table space is rolled forward to a point in time allows you to
roll that table space forward past a previous point-in-time rollforward
operation (T3 in the example).
Assuming that you want to recover table space TABSP1 to T4, you would
restore the table space from a backup image that was taken after T3 (either
the required backup, or a later one), then roll TABSP1 forward to the end of
the logs.
In the preceding example, the most efficient way of restoring the database to
time T4 would be to perform the required steps in the following order:
1. Restore the database.
2. Restore the table space.
3. Roll the database forward.
4. Roll the table space forward.
If you cannot find the TABSP1 backup image that follows time T3, or you
want to restore TABSP1 to T3 (or earlier), you can:
v Roll the table space forward to T3. You do not need to restore the table
space again, because it was restored from the database backup image.
v Restore the table space again, using the database backup taken at time T1,
then roll the table space forward to a time that precedes time T3.
v Drop the table space.
Related concepts:
v “Using the Load Copy Location File” on page 130
Related reference:
v “ROLLFORWARD DATABASE” on page 134
You may occasionally drop a table whose data you still need. If this is the
case, you should consider making your critical tables recoverable following a
drop table operation.
You could recover the table data by invoking a database restore operation,
followed by a database rollforward operation to a point in time before the
table was dropped. This may be time consuming if the database is large, and
your data will be unavailable during recovery.
Prerequisites:
For a dropped table to be recoverable, the table space in which the table
resides must have the DROPPED TABLE RECOVERY option turned on. This
can be done during table space creation, or by invoking the ALTER
TABLESPACE statement. The DROPPED TABLE RECOVERY option is table
space-specific and limited to regular table spaces. To determine if a table
space is enabled for dropped table recovery, you can query the
DROP_RECOVERY column in the SYSCAT.TABLESPACES catalog table.
Dropped table recovery is enabled by default for newly created data table
spaces.
When a DROP TABLE statement is run against a table whose table space is
enabled for dropped table recovery, an additional entry (identifying the
dropped table) is made in the log files. An entry is also made in the recovery
history file, containing information that can be used to recreate the table.
Restrictions:
There are some restrictions on the type of data that is recoverable from a
dropped table. It is not possible to recover:
v Large object (LOB) or long field data. The DROPPED TABLE RECOVERY
option is not supported for large table spaces. If you attempt to recover a
dropped table that contains LOB or LONG VARCHAR columns, these
columns will be set to NULL in the generated export file. The DROPPED
TABLE RECOVERY option can only be used for regular table spaces, not
for temporary or large table spaces.
v The metadata associated with row types. (The data is recovered, but not the
metadata.) The data in the hierarchy table of the typed table will be
recovered. This data may contain more information than appeared in the
typed table that was dropped.
Procedure:
Only one dropped table can be recovered at a time. You can recover a
dropped table by doing the following:
1. Identify the dropped table by invoking the LIST HISTORY DROPPED
TABLE command. The dropped table ID is listed in the Backup ID
column.
2. Restore a database- or table space-level backup image taken before the
table was dropped.
3. Roll forward to a point in time after the table was dropped, specifying the
RECOVER DROPPED TABLE option on the ROLLFORWARD DATABASE
command. The table data is written to an export file.
4. Recreate the table using the CREATE TABLE statement recorded in the
recovery history file, and import the recovered data into the table.
Related reference:
v “ALTER TABLESPACE statement” in the SQL Reference, Volume 2
v “CREATE TABLE statement” in the SQL Reference, Volume 2
v “ROLLFORWARD DATABASE” on page 134
v “LIST HISTORY” on page 228
The DB2LOADREC registry variable is used to identify the file with the load
copy location information. This file is used during rollforward recovery to
locate the load copy. It has information about:
v Media type
v Number of media devices to be used
v Location of the load copy generated during a table load operation
v File name of the load copy, if applicable
The following information is provided in the location file. The first five
parameters must have valid values, and are used to identify the load copy.
The entire structure is repeated for each load copy recorded. For example:
TIMestamp 19950725182542 * Time stamp generated at load time
SCHema PAYROLL * Schema of table loaded
TABlename EMPLOYEES * Table name
DATabasename DBT * Database name
DB2instance TORONTO * DB2INSTANCE
BUFfernumber NULL * Number of buffers to be used for recovery
SESsionnumber NULL * Number of sessions to be used for recovery
TYPeofmedia L * Type of media - L for local device, A for TSM, O for other vendors
LOCationnumber 3 * Number of locations
ENTry /u/toronto/dbt.payroll.employes.001
ENT /u/toronto/dbt.payroll.employes.002
ENT /dev/rmt0
TIM 19950725192054
SCH PAYROLL
TAB DEPT
DAT DBT
DB2 TORONTO
SES NULL
BUF NULL
TYP A
TIM 19940325192054
SCH PAYROLL
TAB DEPT
DAT DBT
DB2 TORONTO
SES NULL
If you want to use a particular load copy, you can use the recovery history file
for the database to determine the time stamp for that specific load operation.
In a partitioned database environment, the recovery history file is local to each
database partition.
Related reference:
v Appendix F, “Tivoli Storage Manager” on page 319
To ensure that the log record time stamps reflect the sequence of transactions
in a partitioned database system, DB2® uses the system clock on each machine
as the basis for the time stamps in the log records. If, however, the system
clock is set ahead, the log clock is automatically set ahead with it. Although
the system clock can be set back, the clock for the logs cannot, and remains at
the same advanced time until the system clock matches this time. The clocks
are then in synchrony. The implication of this is that a short-term system clock
error on a database node can have a long-lasting effect on the time stamps of
database logs.
For example, assume that the system clock on database partition server A is
mistakenly set to November 7, 1999 when the year is 1997, and assume that
the mistake is corrected after an update transaction is committed in the
partition at that database partition server. If the database is in continual use,
and is regularly updated over time, any point between November 7, 1997 and
November 7, 1999 is virtually unreachable through rollforward recovery.
When the COMMIT on database partition server A completes, the time stamp
in the database log is set to 1999, and the log clock remains at November 7,
1999 until the system clock matches this time. If you attempt to roll forward
to a point in time within this time frame, the operation stops at the first
time stamp that is beyond the specified stop point: the record carrying the
erroneous November 7, 1999 time stamp, even though it was actually written on
November 7, 1997.
Although DB2 cannot control updates to the system clock, the max_time_diff
database manager configuration parameter reduces the chances of this type of
problem occurring:
v The configurable values for this parameter range from 1 minute to 24
hours.
v When the first connection request is made to a non-catalog node, the
database partition server sends its time to the catalog node for the database.
The catalog node then checks that the time on the node requesting the
connection and its own time are within the range specified by the
max_time_diff parameter. If this range is exceeded, the connection is refused.
v An update transaction that involves more than two database partition
servers in the database must verify that the clocks on the participating
database partition servers are in synchrony before the update can be
committed. If two or more database partition servers have a time difference
that exceeds the limit allowed by max_time_diff, the transaction is rolled
back to prevent the incorrect time from being propagated to other database
partition servers.
Note: All times are converted on the server and (in partitioned database
environments) on the catalog node.
v The timestamp string is converted to GMT on the server, so the time
represents the server’s time zone, not the client’s. If the client is in a
different time zone from the server, the server’s local time should be used.
v If the timestamp string is close to the time change due to daylight savings
time, it is important to know whether the stop time is before or after the
time change so that it is specified correctly.
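Because the TO isotime value must be expressed in UTC while backup time stamps are based on local time, it can be useful to convert explicitly. The following sketch (illustrative, not part of DB2; the fixed UTC-5 offset stands in for the server's time zone) builds the 7-part time stamp string from a local time:

```python
from datetime import datetime, timezone, timedelta

def to_isotime(local_dt):
    """Convert an aware local datetime to yyyy-mm-dd-hh.mm.ss.nnnnnn in UTC."""
    utc = local_dt.astimezone(timezone.utc)
    return utc.strftime("%Y-%m-%d-%H.%M.%S.%f")

# Example: a local time in a UTC-5 zone (offset chosen for illustration)
local = datetime(2002, 11, 7, 14, 30, 0, tzinfo=timezone(timedelta(hours=-5)))
stamp = to_isotime(local)
```

The resulting string can then be supplied as the isotime value of the TO clause.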
Related concepts:
v “Rollforward Overview” on page 119
v “Synchronizing Clocks in a Partitioned Database System” on page 132
ROLLFORWARD DATABASE
Scope:
Authorization:
Required connection:
Command syntax:
ROLLFORWARD {DATABASE | DB} database-alias
   [USER username [USING password]]
   [TO {isotime [USING LOCAL TIME] [ON ALL DBPARTITIONNUMS] |
        END OF LOGS [On-Database-Partition clause]} [AND {COMPLETE | STOP}]
    | {COMPLETE | STOP} [On-Database-Partition clause]
    | CANCEL [On-Database-Partition clause]
    | QUERY STATUS [USING LOCAL TIME] [On-Database-Partition clause]]
   [TABLESPACE {ONLINE | (tablespace-name, ...) [ONLINE]}]
   [OVERFLOW LOG PATH (log-directory [, Log Overflow clause])]
   [NORETRIEVE]
   [RECOVER DROPPED TABLE drop-table-id TO export-directory]

On-Database-Partition clause:
   ON {DBPARTITIONNUM | DBPARTITIONNUMS}
      (db-partition-number1 [TO db-partition-number2], ...)
Command parameters:
DATABASE database-alias
The alias of the database that is to be rollforward recovered.
USER username
The user name under which the database is to be rollforward
recovered.
USING password
The password used to authenticate the user name. If the password is
omitted, the user is prompted to enter it.
TO
isotime
The point in time to which all committed transactions are to
be rolled forward (including the transaction committed
precisely at that time, as well as all transactions committed
previously).
This value is specified as a time stamp, a 7-part character
string that identifies a combined date and time. The format is
yyyy-mm-dd-hh.mm.ss.nnnnnn (year, month, day, hour, minutes,
seconds, microseconds), expressed in Coordinated Universal
Time (UTC). UTC helps to avoid having the same time stamp
associated with different logs (because of a change in time
associated with daylight savings time, for example). The time
stamp in a backup image is based on the local time at which
the backup operation started. The CURRENT TIMEZONE
special register specifies the difference between UTC and local
time at the application server. The difference is represented by
a time duration (a decimal number in which the first two
db-partition-number2
Specifies the second database partition number, so that all partitions
from db-partition-number1 up to and including db-partition-number2 are
included in the database partition list.
COMPLETE / STOP
Stops the rolling forward of log records, and completes the
rollforward recovery process by rolling back any incomplete
transactions and turning off the rollforward pending state of the
database. This allows access to the database or table spaces that are
being rolled forward. These keywords are equivalent; specify one or
the other, but not both. The keyword AND permits specification of
multiple operations at once; for example, db2 rollforward db sample
to end of logs and complete.
Note: When rolling table spaces forward to a point in time, the table
spaces are placed in backup pending state.
CANCEL
Cancels the rollforward recovery operation. This puts the database or
one or more table spaces on all partitions on which forward recovery
has been started in restore pending state:
v If a database rollforward operation is not in progress (that is, the
database is in rollforward pending state), this option puts the
database in restore pending state.
v If a table space rollforward operation is not in progress (that is, the
table spaces are in rollforward pending state), a table space list
must be specified. All table spaces in the list are put in restore
pending state.
v If a table space rollforward operation is in progress (that is, at least
one table space is in rollforward in progress state), all table spaces
that are in rollforward in progress state are put in restore pending
state. If a table space list is specified, it must include all table spaces
that are in rollforward in progress state. All table spaces on the list
are put in restore pending state.
v If rolling forward to a point in time, any table space name that is
passed in is ignored, and all table spaces that are in rollforward in
progress state are put in restore pending state.
v If rolling forward to the end of the logs with a table space list, only
the table spaces listed are put in restore pending state.
This option cannot be used to cancel a rollforward operation that is
actually running; it can only cancel a rollforward operation that is in
progress but not actually running at the time. A rollforward operation can
be in progress but not running if:
v It terminated abnormally.
v The STOP option was not specified.
v An error caused it to fail. Some errors, such as rolling forward
through a non-recoverable load operation, can put a table space into
restore pending state.
Note: Use this option with caution, and only if the rollforward
operation that is in progress cannot be completed because some
of the table spaces have been put in rollforward pending state
or in restore pending state. When in doubt, use the LIST
TABLESPACES command to identify the table spaces that are in
rollforward in progress state, or in rollforward pending state.
QUERY STATUS
Lists the log files that the database manager has rolled forward, the
next archive file required, and the time stamp (in UTC) of the last
committed transaction since rollforward processing began. In a
partitioned database environment, this status information is returned
for each partition. The information returned contains the following
fields:
Database partition number
Rollforward status
Status can be: database or table space rollforward pending,
database or table space rollforward in progress, database or
table space rollforward processing STOP, or not pending.
Next log file to be read
A string containing the name of the next required log file. In a
partitioned database environment, use this information if the
rollforward utility fails with a return code indicating that a
log file is missing or that a log information mismatch has
occurred.
Log files processed
A string containing the names of processed log files that are
no longer needed for recovery, and that can be removed from
the directory. If, for example, the oldest uncommitted
transaction starts in log file x, the range of obsolete log files
will not include x; the range ends at x - 1.
Last committed transaction
A string containing a time stamp in ISO format
(yyyy-mm-dd-hh.mm.ss). This time stamp marks the last
transaction committed after the completion of rollforward
recovery. The time stamp applies to the database. For table
NORETRIEVE
Allows you to control which log files are rolled forward on the standby
machine by disabling the retrieval of archived logs. This option is useful
in the following situations:
v If the standby system does not have access to the archive (for example,
if TSM is the archive, only the original machine can retrieve the
archived files).
v While the production system is archiving a file, the standby system
might retrieve the same file and get an incomplete copy of it;
NORETRIEVE prevents this problem.
RECOVER DROPPED TABLE drop-table-id
Recovers a dropped table during the rollforward operation. The table
ID can be obtained using the LIST HISTORY command.
TO export-directory
Specifies a directory to which files containing the table data are to be
written. The directory must be accessible to all database partitions.
Examples:
Example 1
Example 2
Roll forward to the end of the logs (two table spaces have been restored):
db2 rollforward db sample to end of logs
db2 rollforward db sample to end of logs and stop
Example 3
After three table spaces have been restored, roll one forward to the end of the
logs, and the other two to a point in time, both to be done online:
db2 rollforward db sample to end of logs tablespace(TBS1) online
Note that two rollforward operations cannot be run concurrently. The second
command can only be invoked after the first rollforward operation completes
successfully.
Example 4
Example 5 (MPP)
There are three database partitions: 0, 1, and 2. Table space TBS1 is defined on
all partitions, and table space TBS2 is defined on partitions 0 and 2. After
restoring the database on database partition 1, and TBS1 on database
partitions 0 and 2, roll the database forward on database partition 1:
db2 rollforward db sample to end of logs and stop
This returns warning SQL1271 (“Database is recovered but one or more table
spaces are off-line on database partition(s) 0 and 2.”).
db2 rollforward db sample to end of logs
Example 6 (MPP)
After restoring table space TBS1 on database partitions 0 and 2 only, roll TBS1
forward on database partitions 0 and 2:
db2 rollforward db sample to end of logs
Database partition 1 is ignored.
db2 rollforward db sample to end of logs tablespace(TBS1)
This fails, because TBS1 is not ready for rollforward recovery on database
partition 1. Reports SQL4906N.
db2 rollforward db sample to end of logs on dbpartitionnums (0, 2)
tablespace(TBS1)
This fails, because TBS1 is not ready for rollforward recovery on database
partition 1; all pieces must be rolled forward together.
Note: With table space rollforward to a point in time, the database partition
clause is not accepted. The rollforward operation must take place on all
the database partitions on which the table space resides.
Example 7 (MPP)
After restoring a table space on all database partitions, roll forward to PIT2,
but do not specify AND STOP. The rollforward operation is still in progress.
Cancel and roll forward to PIT1:
db2 rollforward db sample to pit2 tablespace(TBS1)
db2 rollforward db sample cancel tablespace(TBS1)
Example 8 (MPP)
Rollforward recover a table space that resides on eight database partitions
(3 to 10) listed in the db2nodes.cfg file:
db2 rollforward database dwtest to end of logs tablespace (tssprodt)
This operation to the end of logs (not point in time) completes successfully.
The database partitions on which the table space resides do not have to be
specified. The utility defaults to the db2nodes.cfg file.
Example 9 (MPP)
Rollforward recover six small table spaces that reside on a single-partition
database partition group (on database partition 6):
db2 rollforward database dwtest to end of logs on dbpartitionnum (6)
tablespace(tsstore, tssbuyer, tsstime, tsswhse, tsslscat, tssvendor)
This operation to the end of logs (not point in time) completes successfully.
Usage notes:
If one or more table spaces is being rolled forward to a point in time, the
rollforward operation must continue at least to the minimum recovery time,
which is the last update to the system catalogs for this table space or its
tables. The minimum recovery time (in Coordinated Universal Time, or UTC)
for a table space can be retrieved using the LIST TABLESPACES SHOW
DETAIL command.
Rolling databases forward may require a load recovery using tape devices. If
prompted for another tape, the user can respond with one of the following:
c Continue. Continue using the device that generated the warning
message (for example, when a new tape has been mounted)
d Device terminate. Stop using the device that generated the warning
message (for example, when there are no more tapes)
t Terminate. Terminate all devices.
If the rollforward utility cannot find the next log that it needs, the log name is
returned in the SQLCA, and rollforward recovery stops. If no more logs are
available, use the STOP option to terminate rollforward recovery. Incomplete
transactions are rolled back to ensure that the database or table space is left in
a consistent state.
Compatibilities:
Related reference:
Scope:
In a partitioned database environment, this API can only be called from the
catalog partition. A database or table space rollforward call specifying a
point-in-time affects all database partition servers that are listed in the
db2nodes.cfg file. A database or table space rollforward call specifying end of
logs affects the database partition servers that are specified. If no database
partition servers are specified, it affects all database partition servers that are
listed in the db2nodes.cfg file; if no roll forward is needed on a particular
database partition server, that database partition server is ignored.
Authorization:
Required connection:
API include file:
db2ApiDf.h
C API syntax:
/* File: db2ApiDf.h */
/* API: db2Rollforward */
/* ... */
SQL_API_RC SQL_API_FN
db2Rollforward_api (
db2Uint32 versionNumber,
void *pDB2RollforwardStruct,
struct sqlca *pSqlca);
/* File: db2ApiDf.h */
/* API: db2Rollforward */
/* ... */
SQL_API_RC SQL_API_FN
db2gRollforward_api (
db2Uint32 versionNumber,
void *pDB2gRollforwardStruct,
struct sqlca *pSqlca);
SQL_STRUCTURE db2gRfwdInputStruct
{
db2Uint32 DbAliasLen;
db2Uint32 StopTimeLen;
db2Uint32 UserNameLen;
db2Uint32 PasswordLen;
db2Uint32 OvrflwLogPathLen;
db2Uint32 DroppedTblIDLen;
db2Uint32 ExportDirLen;
sqluint32 Version;
char *pDbAlias;
db2Uint32 CallerAction;
char *pStopTime;
char *pUserName;
char *pPassword;
char *pOverflowLogPath;
db2Uint32 NumChngLgOvrflw;
struct sqlurf_newlogpath *pChngLogOvrflw;
db2Uint32 ConnectMode;
struct sqlu_tablespace_bkrst_list *pTablespaceList;
db2int32 AllNodeFlag;
db2int32 NumNodes;
SQL_PDB_NODE_TYPE *pNodeList;
db2int32 NumNodeInfo;
char *pDroppedTblID;
char *pExportDir;
db2Uint32 RollforwardFlags;
};
API parameters:
versionNumber
Input. Specifies the version and release level of the structure passed as
the second parameter.
pDB2RollforwardStruct
Input. A pointer to the db2RollforwardStruct structure.
pSqlca
Output. A pointer to the sqlca structure.
roll_input
Input. A pointer to the db2RfwdInputStruct structure.
roll_output
Output. A pointer to the db2RfwdOutputStruct structure.
DbAliasLen
Input. Specifies the length in bytes of the database alias.
StopTimeLen
Input. Specifies the length in bytes of the stop time parameter. Set to
zero if no stop time is provided.
UserNameLen
Input. Specifies the length in bytes of the user name. Set to zero if no
user name is provided.
PasswordLen
Input. Specifies the length in bytes of the password. Set to zero if no
password is provided.
OvrflwLogPathLen
Input. Specifies the length in bytes of the overflow log path. Set to
zero if no overflow log path is provided.
Version
Input. The version ID of the rollforward parameters. It is defined as
SQLUM_RFWD_VERSION.
pDbAlias
Input. A string containing the database alias. This is the alias that is
cataloged in the system database directory.
CallerAction
Input. Specifies action to be taken. Valid values (defined in sqlutil)
are:
SQLUM_ROLLFWD
Roll forward to the point in time specified by pStopTime. For
database rollforward, the database is left in rollforward-pending
state. For table space rollforward to a point in time, the table
spaces are left in rollforward-in-progress state.
SQLUM_STOP
End roll-forward recovery. No new log records are processed
and uncommitted transactions are backed out. The
rollforward-pending state of the database or table spaces is
turned off. Synonym is SQLUM_COMPLETE.
SQLUM_ROLLFWD_STOP
Roll forward to the point in time specified by pStopTime,
and end roll-forward recovery. The rollforward-pending state of
the database or table spaces is turned off. Synonym is
SQLUM_ROLLFWD_COMPLETE.
SQLUM_QUERY
Query values for pNextArcFileName, pFirstDelArcFileName,
pLastDelArcFileName, and pLastCommitTime. Return database
status and a node number.
SQLUM_PARM_CHECK
Validate parameters without performing the roll forward.
SQLUM_CANCEL
Cancel the rollforward operation that is currently running.
The database or table spaces are put in restore pending state.
RollforwardFlags
Input. Specifies the rollforward flags. Valid values (defined in
sqlpapiRollforward):
SQLP_ROLLFORWARD_LOCAL_TIME
Allows the user to roll forward to a point in time that is expressed
in the user's local time rather than GMT. This makes it easier for
users to roll forward to a specific point in time on their local
machines, and eliminates potential user errors due to the translation
of local to GMT time.
SQLP_ROLLFORWARD_NO_RETRIEVE
Controls which log files are rolled forward on the standby machine
by allowing the user to disable the retrieval of archived logs. By
controlling the log files to be rolled forward, one can ensure that
the standby machine is X hours behind the production machine, so
that a user error on the production system does not immediately
affect the standby system. This option is useful if the standby
system does not have access to the archive (for example, if TSM is
the archive, only the original machine can retrieve the archived
files). It also removes the possibility that the standby system
retrieves an incomplete log file while the production system is
archiving that same file.
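The "X hours behind" arrangement amounts to applying only those archived logs that are older than a chosen lag. The following sketch is an illustrative model only; the log names and times are invented examples, and DB2 itself does not expose such a function:

```python
from datetime import datetime, timedelta

def logs_to_apply(archived_logs, now, lag_hours):
    """archived_logs: list of (log_name, archive_time) pairs.
    Return the names of logs old enough to apply on the standby."""
    cutoff = now - timedelta(hours=lag_hours)
    return [name for name, stamp in archived_logs if stamp <= cutoff]

now = datetime(2002, 5, 1, 12, 0)
logs = [
    ("S0000010.LOG", datetime(2002, 5, 1, 3, 0)),    # 9 hours old
    ("S0000011.LOG", datetime(2002, 5, 1, 11, 30)),  # 30 minutes old
]
apply = logs_to_apply(logs, now, lag_hours=4)  # only the older file qualifies
```

A script following this pattern could decide which retrieved logs to place in the standby's log path before each rollforward pass.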
pApplicationId
Output. The application ID.
pNumReplies
Output. The number of replies received.
pNodeInfo
Output. Database partition reply information.
username
Identifies the user name under which the database is to be rolled
forward.
password
The password used to authenticate the user name.
point-in-time
A time stamp in ISO format, yyyy-mm-dd-hh.mm.ss.nnnnnn (year,
month, day, hour, minutes, seconds, microseconds), expressed in
Coordinated Universal Time (UTC).
tablespacenames
A compound REXX host variable containing a list of table spaces to be
rolled forward. In the following, XXX is the name of the host variable:
XXX.0 Number of table spaces to be rolled forward
XXX.1 First table space name
XXX.2 Second table space name
XXX.x and so on.
default-log-path
The default overflow log path to be searched for archived logs during
recovery
logpaths
A compound REXX host variable containing a list of alternate log
paths to be searched for archived logs during recovery. In the
following, XXX is the name of the host variable:
XXX.0 Number of changed overflow log paths
XXX.1.1 First node
XXX.1.2 First overflow log path
XXX.2.1 Second node
XXX.2.2 Second overflow log path
XXX.3.1 Third node
XXX.3.2 Third overflow log path
XXX.x.1 and so on.
nodelist
A compound REXX host variable containing a list of database
partition servers. In the following, XXX is the name of the host
variable:
XXX.0 Number of nodes
Usage notes:
The database manager uses the information stored in the archived and the
active log files to reconstruct the transactions performed on the database since
its last backup.
If the database is in roll-forward pending state when this API is called, the
database will be rolled forward. Table spaces are returned to normal state
after a successful database roll-forward, unless an abnormal state causes one
or more table spaces to go offline. If the rollforward_pending flag is set to
TABLESPACE, only those table spaces that are in roll-forward pending state, or
those table spaces requested by name, will be rolled forward.
Note: If table space rollforward terminates abnormally, table spaces that were
being rolled forward will be put in SQLB_ROLLFORWARD_IN_PROGRESS
state. In the next invocation of ROLLFORWARD DATABASE, only
those table spaces in SQLB_ROLLFORWARD_IN_PROGRESS state will be
processed. If the set of selected table space names does not include all
table spaces that are in SQLB_ROLLFORWARD_IN_PROGRESS state, the table
spaces that are not required will be put into SQLB_RESTORE_PENDING
state.
This API reads the log files, beginning with the log file that is matched with
the backup image. The name of this log file can be determined by calling this
API with a caller action of SQLUM_QUERY before rolling forward any log files.
The transactions contained in the log files are reapplied to the database. The
log is processed as far forward in time as information is available, or until the
time specified by the stop time parameter.
If the need for database recovery was caused by application or human error,
the user may want to provide a time stamp value in pStopTime, indicating that
recovery should be stopped before the time of the error. This applies only to
full database roll-forward recovery, and to table space rollforward to a point
in time. It also permits recovery to be stopped before a log read error occurs,
determined during an earlier failed attempt to recover.
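A stop time just short of a known bad transaction can be derived mechanically. The sketch below is illustrative (the time stamp value is an example); the one-microsecond step matches the resolution of the 7-part time stamp format:

```python
from datetime import datetime, timedelta

FMT = "%Y-%m-%d-%H.%M.%S.%f"   # yyyy-mm-dd-hh.mm.ss.nnnnnn

def stop_time_before(error_stamp):
    """Return a stop time one microsecond before the given time stamp."""
    t = datetime.strptime(error_stamp, FMT)
    return (t - timedelta(microseconds=1)).strftime(FMT)

stop = stop_time_before("2002-05-01-14.30.00.000000")
```

The resulting string could then be supplied as the pStopTime value so that the erroneous transaction is not reapplied.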
Related reference:
v “SQLCA” in the Administrative API Reference
Related samples:
v “dbrecov.sqc -- How to recover a database (C)”
v “dbrecov.sqC -- How to recover a database (C++)”
Example 2
Roll forward to the end of the logs (two table spaces have been restored):
db2 rollforward db sample to end of logs
db2 rollforward db sample to end of logs and stop
Example 3
After three table spaces have been restored, roll one forward to the end of the
logs, and the other two to a point in time, both to be done online:
db2 rollforward db sample to end of logs tablespace(TBS1) online
Example 4
Example 5 (MPP)
There are three nodes: 0, 1, and 2. Table space TBS1 is defined on all nodes,
and table space TBS2 is defined on nodes 0 and 2. After restoring the database
on node 1, and TBS1 on nodes 0 and 2, roll the database forward on node 1:
db2 rollforward db sample to end of logs and stop
This returns warning SQL1271 (“Database is recovered but one or more table
spaces are offline on node(s) 0 and 2.”).
db2 rollforward db sample to end of logs
Example 6 (MPP)
After restoring table space TBS1 on nodes 0 and 2 only, roll TBS1 forward on
nodes 0 and 2:
db2 rollforward db sample to end of logs
Node 1 is ignored.
db2 rollforward db sample to end of logs tablespace(TBS1)
This fails, because TBS1 is not ready for rollforward recovery on node 1.
Reports SQL4906N.
db2 rollforward db sample to end of logs on nodes (0, 2) tablespace(TBS1)
This fails, because TBS1 is not ready for rollforward recovery on node 1; all
pieces must be rolled forward together.
Example 7 (MPP)
After restoring a table space on all nodes, roll forward to PIT2, but do not
specify AND STOP. The rollforward operation is still in progress. Cancel and roll
forward to PIT1:
db2 rollforward db sample to pit2 tablespace(TBS1)
db2 rollforward db sample cancel tablespace(TBS1)
Example 8 (MPP)
Rollforward recover a table space that resides on eight nodes (3 to 10) listed in
the db2nodes.cfg file:
db2 rollforward database dwtest to end of logs tablespace (tssprodt)
This operation to the end of logs (not point in time) completes successfully.
The nodes on which the table space resides do not have to be specified. The
utility defaults to the db2nodes.cfg file.
Example 9 (MPP)
Rollforward recover six small table spaces that reside on a single node
database partition group (on node 6):
db2 rollforward database dwtest to end of logs on node (6)
tablespace(tsstore, tssbuyer, tsstime, tsswhse, tsslscat, tssvendor)
This operation to the end of logs (not point in time) completes successfully.
High Availability
High availability (HA) describes systems that run and are available to
their users with little or no downtime. For this to occur:
v Transactions must be processed efficiently, without appreciable performance
degradations (or even loss of availability) during peak operating periods. In
a partitioned database environment, DB2® can take advantage of both
intrapartition and interpartition parallelism to process transactions
efficiently. Intrapartition parallelism can be used in an SMP environment to
process the various components of a complex SQL statement
simultaneously. Interpartition parallelism in a partitioned database
environment, on the other hand, refers to the simultaneous processing of a
query on all participating nodes; each node processes a subset of the rows
in the table.
v Systems must be able to recover quickly when hardware or software
failures occur, or when disaster strikes. DB2 has an advanced continuous
checkpointing system and a parallel recovery capability that allow for
extremely fast crash recovery.
The ability to recover quickly can also depend on having a proven backup
and recovery strategy in place.
v Software that powers the enterprise databases must be continuously
running and available for transaction processing. To keep the database
manager running, you must ensure that another database manager can take
over if it fails. This is called failover. Failover capability allows for the
automatic transfer of workload from one system to another when there is
hardware failure.
The two most common failover strategies on the market are known as idle
standby and mutual takeover, although the configurations associated with these
terms may also be associated with different terms that depend on the vendor:
Idle Standby
In this configuration, one system is used to run a DB2 instance, and
the second system is “idle”, or in standby mode, ready to take over
the instance if there is an operating system or hardware failure
involving the first system. Overall system performance is not
impacted, because the standby system is idle until needed.
Mutual Takeover
In this configuration, each system is the designated backup for
another system. Overall system performance may be impacted,
because the backup system must do extra work following a failover: it
must do its own work plus the work that was being done by the
failed system.
Related concepts:
v “Parallelism” in the Administration Guide: Planning
v “Developing a Backup and Recovery Strategy” on page 3
v “High Availability through Online Split Mirror and Suspended I/O
Support” on page 167
v “High Availability in the Solaris Operating Environment” on page 189
To ensure that you are able to recover your database in a disaster recovery
situation consider the following:
v The archive location should be geographically separate from the primary
site.
v Remotely mirror the log at the standby database site
v Use a synchronous mirror for no loss support. You can do this through
DB2® log mirroring or modern disk subsystems such as ESS and EMC.
NVRAM cache (both local and remote) is also recommended to minimize
the performance impact of a disaster recovery situation.
Notes:
1. When the standby database processes a log record indicating that an index
rebuild took place on the primary database, the indexes on the standby
server are not automatically rebuilt. The index will be rebuilt on the
standby server either at the first connection to the database, or at the first
attempt to access the index after the standby server is taken out of
rollforward pending state. It is recommended that the standby server be
resynchronized with the primary server if any indexes on the primary
server are rebuilt.
2. If the load utility is run on the primary database with the COPY YES
option specified, the standby database must have access to the copy
image.
Related concepts:
v “High Availability through Online Split Mirror and Suspended I/O
Support” on page 167
Related tasks:
v “Using a Split Mirror as a Standby Database” on page 169
Related reference:
v Appendix G, “User Exit for Database Recovery” on page 323
High Availability through Online Split Mirror and Suspended I/O Support
If you would rather not back up a large database using the DB2® backup
utility, you can make copies from a mirrored image by using suspended I/O
and the split mirror function. This approach also:
v Eliminates backup operation overhead from the production machine
v Represents a fast way to clone systems
The db2inidb command initializes the split mirror so that it can be used:
v As a clone database
v As a standby database
v As a backup image
This command can only be issued against a split mirror, and it must be run
before the split mirror can be used.
Note: Ensure that the split mirror contains all containers and directories
which comprise the database, including the volume directory.
Related reference:
v “db2inidb - Initialize a Mirrored Database” on page 220
Restrictions:
You cannot back up a cloned database, restore the backup image on the
original system, and roll forward through log files produced on the original
system.
Procedure:
Note: This command will roll back transactions that are in flight when the
split occurs, and start a new log chain sequence so that any logs
from the primary database cannot be replayed on the cloned
database.
Related concepts:
v “High Availability through Online Split Mirror and Suspended I/O
Support” on page 167
Related reference:
v “db2inidb - Initialize a Mirrored Database” on page 220
Using a Split Mirror as a Standby Database
Procedure:
Note: If you have only DMS table spaces (database managed space), you
can take a full database backup to offload the overhead of taking a
backup on the production database.
6. Set up a user exit program to retrieve the log files from the primary
system.
Related concepts:
v “High Availability through Online Split Mirror and Suspended I/O
Support” on page 167
Related tasks:
v “Making a Clone Database” on page 168
v “Using a Split Mirror as a Backup Image” on page 170
Related reference:
v “db2inidb - Initialize a Mirrored Database” on page 220
Using a Split Mirror as a Backup Image
Procedure:
Related tasks:
v “Making a Clone Database” on page 168
v “Using a Split Mirror as a Standby Database” on page 169
Related reference:
v “db2inidb - Initialize a Mirrored Database” on page 220
On UNIX® based systems, the Fault Monitor Facility improves the availability
of non-clustered DB2® environments through a sequence of processes that
work together to ensure that DB2 is running: the init daemon
monitors the Fault Monitor Coordinator (FMC), the FMC monitors the fault
monitors, and the fault monitors monitor DB2.
The Fault Monitor Coordinator (FMC) is the process of the Fault Monitor
Facility that is started at the UNIX boot sequence. The init daemon starts the
FMC and will restart it if it terminates abnormally. The FMC starts one fault
monitor for each DB2 instance. Each fault monitor runs as a daemon process
and has the same user privileges as the DB2 instance. Once a fault monitor is
started, it will be monitored to make sure it does not exit prematurely. If a
fault monitor fails, it will be restarted by the FMC. Each fault monitor will, in
turn, be responsible for monitoring one DB2 instance. If the DB2 instance exits
prematurely, the fault monitor will restart it.
Notes:
1. If you are using a high availability clustering product (for example,
HACMP or MSCS), the fault monitor facility must be turned off, since
instance startup and shutdown are controlled by the clustering product.
2. The fault monitor will only become inactive if the db2stop command is
issued. If a DB2 instance is shut down in any other way, the fault monitor
will start it up again.
A fault monitor registry file is created for every instance on each physical
machine when the fault monitor daemon is started. The values in this file
specify the behavior of the fault monitors. The file can be found in the
/sqllib/ directory and is called fm.<machine_name>.reg. This file can be
altered using the db2fm command. The entries are as follows:
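Drawing on the default values listed below, the contents of a fault monitor registry file might look like the following sketch (the exact layout of the file as written by DB2 may differ):

```
FM_ON = no
FM_ACTIVE = yes
START_TIMEOUT = 600
STOP_TIMEOUT = 600
STATUS_TIMEOUT = 20
STATUS_INTERVAL = 20
RESTART_RETRIES = 3
```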
where:
FM_ON
Specifies whether or not the fault monitor should be started. If the
value is set to NO, the fault monitor daemon will not be started, or will
be turned off if it had already been started. The default value is NO.
FM_ACTIVE
Specifies whether or not the fault monitor is active. The fault monitor
will only take action if both FM_ON and FM_ACTIVE are set to YES.
If FM_ON is set to YES and FM_ACTIVE is set to NO, the fault monitor
daemon will be started, but it will not be active. That means that it
will not try to bring DB2 back online if it shuts down. The default
value is YES.
START_TIMEOUT
Specifies the amount of time within which the fault monitor must
start the service it is monitoring. The default value is 600 seconds.
STOP_TIMEOUT
Specifies the amount of time within which the fault monitor must
bring down the service it is monitoring. The default value is 600
seconds.
STATUS_TIMEOUT
Specifies the amount of time within which the fault monitor must get
the status of the service it is monitoring. The default value is 20
seconds.
STATUS_INTERVAL
Specifies the minimum time between two consecutive calls to obtain
the status of the service that is being monitored. The default value is
20 seconds.
RESTART_RETRIES
Specifies the number of times the fault monitor will try to obtain the
status of the service being monitored after a failed attempt. Once this
number is reached the fault monitor will take action to bring the
service back online. The default value is 3.
This file can be altered using the db2fm command. For example:
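The following sketch illustrates such alterations; db2inst1 is a hypothetical instance name, and the flags are those described under the db2fm command reference later in this book:

```
db2fm -i db2inst1 -f on      (start the fault monitor daemon; sets FM_ON to YES)
db2fm -i db2inst1 -T 15/10   (set the start and stop time-outs to 15 and 10 seconds)
db2fm -i db2inst1 -a off     (leave the daemon running but deactivate monitoring)
```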
Note: If the fault monitor registry file does not exist, the default values will
be used.
Related reference:
v “db2fm - DB2 Fault Monitor” on page 173
Authorization:
Authorization over the instance against which you are running the command.
Required Connection:
None.
Command Syntax:
db2fm -t service -m module-path [-i instance]
      [-u | -d | -s | -k | -U | -D | -S | -K]
      [-f on|off] [-a on|off]
      [-T T1/T2] [-l I1/I2] [-R R1/R2]
      [-n email] [-h] [-?]
Command Parameters:
-m module-path
Defines the full path of the fault monitor shared library for the
product being monitored. The default is
$INSTANCEHOME/sqllib/lib/libdb2gcf.
-t service
Gives the unique text descriptor for a service.
-i instance
Defines the instance of the service.
-u Brings the service up.
-U Brings the fault monitor daemon up.
-d Brings the service down.
-D Brings the fault monitor daemon down.
-k Kills the service.
-K Kills the fault monitor daemon.
-s Returns the status of the service.
-S Returns the status of the fault monitor daemon.
Note: The status of the service or fault monitor can be one of the
following:
v Not properly installed,
v INSTALLED PROPERLY but NOT ALIVE,
v ALIVE but NOT AVAILABLE (maintenance),
v AVAILABLE, or
v UNKNOWN
-f on|off
Turns fault monitor on or off.
Note: If this option is set to off, the fault monitor daemon will not be
started, or the daemon will exit if it was running.
-a on|off
Activates or deactivates fault monitoring.
Note: If this option is set to off, the fault monitor will not be actively
monitoring, which means that if the service goes down it will not
try to bring it back.
-T T1/T2
Overwrites the start and stop time-outs.
For example:
v -T 15/10 updates the two time-outs respectively
v -T 15 updates the start time-out to 15 seconds
v -T /10 updates the stop time-out to 10 seconds
-l I1/I2
Sets the status interval and time-out respectively.
-R R1/R2
Sets the number of retries for the status method and action before
giving up.
-n email
Sets the email address for notification of events.
-h Prints usage.
-? Prints usage.
Usage Notes:
1. This command may be used on UNIX platforms only.
There are two types of events: standard events that are anticipated within the
operations of HACMP ES, and user-defined events that are associated with
the monitoring of parameters in hardware and software components.
One of the standard events is the node_down event. When planning what
should be done as part of the recovery process, HACMP allows two failover
options: hot (or idle) standby, and mutual takeover.
Note: When using HACMP, ensure that DB2® instances are not started at boot
time by using the db2iauto utility as follows:
db2iauto -off InstName
where InstName represents the name of the instance.
In a hot standby configuration, the AIX processor node that is the takeover
node is not running any other workload. In a mutual takeover configuration,
the AIX processor node that is the takeover node is running other workloads.
For example, consider a DB2 database partition (logical node). If its log and
table space containers were placed on external disks, and other nodes were
linked to those disks, it would be possible for those other nodes to access
these disks and to restart the database partition (on a takeover node). It is this
type of operation that is automated by HACMP. HACMP ES can also be used
to recover NFS file systems used by DB2 instance main user directories.
Notes:
1. % is modulus.
2. In all cases, the operators are evaluated from left to right.
Following are some examples of how to create containers using this special
argument:
v Creating containers for use on a two-node system.
CREATE TABLESPACE TS1 MANAGED BY DATABASE USING
(device '/dev/rcont $N' 20000)
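A worked check of the evaluation rules noted above (the expressions are hypothetical; a database partition number of 5 is assumed):

```
$N%2+1  on partition 5:  (5 % 2) + 1 = 2
2+$N%3  on partition 5:  (2 + 5) % 3 = 1   (left-to-right, not 2 + (5 % 3) = 4)
```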
A script file, rc.db2pe, is packaged with DB2 UDB Enterprise Server Edition
(and installed on each node in /usr/bin) to assist in configuring for HACMP
ES failover or recovery in either hot standby or mutual takeover nodes. In
addition, DB2 buffer pool sizes can be customized during failover in mutual
takeover configurations from within rc.db2pe. (Buffer pool sizes can be
configured to ensure proper resource allocation when two database partitions
run on one physical node.)
Each object requires one line in the event definition, even if the line is not
used. If these lines are removed, HACMP ES Cluster Manager cannot parse
the event definition properly, and this may cause the system to hang. Any line
beginning with "#" is treated as a comment line.
Note: The rules file requires exactly nine lines for each event definition, not
counting any comment lines. When adding a user-defined event at the
bottom of the rules file, it is important to remove the unnecessary
empty line at the end of the file, or the node will hang.
HACMP ES uses PSSP event detection to treat user-defined events. The PSSP
Event Management subsystem provides comprehensive event detection by
monitoring various hardware and software resources.
Related reference:
v “db2start - Start DB2” in the Command Reference
[Figure: An example MSCS cluster of two machines. Each machine has the DB2
code installed on a local disk (C: SQLLIB). The shared disks include D: (the
quorum disk used by MSCS), E: (DB2 Group 0), and F: (DB2 Group 1).]
The nodes in an MSCS cluster are connected using one or more shared storage
buses and one or more physically independent networks. The network that
connects only the servers but does not connect the clients to the cluster is
referred to as a private network. The network that supports client connections
is referred to as the public network. There are one or more local disks on each
node. Each shared storage bus attaches to one or more disks. Each disk on the
shared bus is owned by only one node of the cluster at a time. The DB2
software resides on the local disk. DB2 database files (tables, indexes, log files,
etc.) reside on the shared disks. Because MSCS does not support the use of
raw partitions in a cluster, it is not possible to configure DB2 to use raw
devices in an MSCS environment.
Note: The DB2 resource is configured to depend on all other resources in the
same group so the DB2 server can only be started after all other
resources are online.
Failover Configurations
In a partitioned database environment, the clusters do not all have to have the
same type of configuration. You can have some clusters that are set up to use
hot standby, and others that are set up for mutual takeover. For example, if
your DB2 instance consists of five workstations, you can have two machines
[Figure: Failover configurations in a cluster. Hot standby: Instance A can run
on either Workstation A or Workstation B. Mutual takeover: both Instance A and
Instance B can run on either workstation.]
Note: When using Sun Cluster 3.0 or Veritas Cluster Server, ensure that DB2
instances are not started at boot time by using the db2iauto utility as
follows:
db2iauto -off InstName
where InstName represents the name of the instance.
High Availability
The computer systems that host data services contain many distinct
components, and each component has a "mean time before failure" (MTBF)
associated with it. The MTBF is the average time that a component will
remain usable. The MTBF for a quality hard drive is on the order of one
million hours (approximately 114 years). While this seems like a long time,
one out of 200 disks is likely to fail within a 6-month period.
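The one-out-of-200 figure follows from a simple proportion; a quick sketch of the arithmetic (assuming a uniform failure rate, which is a simplification):

```shell
# Hours in 6 months divided by a 1,000,000-hour MTBF gives the chance
# that a given disk fails in that window: 4380 / 1000000 = 0.00438,
# roughly 1 disk in 228 -- about the "one out of 200" cited above.
awk 'BEGIN { printf "%.5f\n", (0.5 * 365 * 24) / 1000000 }'
```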
Figure 19. Failover. When Machine B fails its data service is moved to another machine in the
cluster so that the data can still be accessed.
The private network interfaces are used to send heartbeat messages, as well as
control messages, among the machines in the cluster. The public network
interfaces are used to communicate directly with clients of the HA cluster. The
disks in an HA cluster are connected to two or more machines in the cluster,
so that if one machine fails, another machine has access to them.
One of the benefits of an HA cluster is that a data service can recover without
the aid of support staff, and it can do so at any time. Another benefit is
redundancy. All of the parts in the cluster should be redundant, including the
machines themselves. The cluster should be able to survive any single point of
failure.
Even though highly available data services can be very different in nature,
they have some common requirements. Clients of a highly available data
service expect the network address and host name of the data service to
remain the same, and expect to be able to make requests in the same way,
regardless of which machine the data service is on.
Fault Tolerance
Related concepts:
v “High Availability on Sun Cluster 3.0” on page 192
v “High Availability with VERITAS Cluster Server” on page 195
This section provides an overview of how DB2® works with Sun Cluster 3.0 to
achieve high availability, and includes a description of the high availability
agent, which acts as a mediator between the two software products (see
Figure 20).
Figure 20. DB2, Sun Cluster 3.0, and High Availability. The relationship between DB2, Sun Cluster
3.0 and the high availability agent.
Failover
Multihost Disks
Sun Cluster 3.0 requires multihost disk storage. This means that disks can be
connected to more than one node at a time. In the Sun Cluster 3.0
environment, multihost storage allows disk devices to become highly
available. Disk devices that reside on multihost storage can tolerate single-node failures.
Global Devices
Global devices are used to provide cluster-wide, highly available access to any
device in a cluster, from any node, regardless of the device's physical
location. All disks are included in the global namespace with an assigned
device ID (DID) and are configured as global devices. Therefore, the disks
themselves are visible from all cluster nodes.
A cluster or global file system is a proxy between the kernel (on one node)
and the underlying file system volume manager (on a node that has a
physical connection to one or more disks). Cluster file systems are dependent
on global devices with physical connections to one or more nodes. They are
independent of the underlying file system and volume manager. Currently,
cluster file systems can be built on UFS using either Solstice DiskSuite or
VERITAS Volume Manager. The data only becomes available to all nodes if
the file systems on the disks are mounted globally as a cluster file system.
Device Group
All multihost disks must be controlled by the Sun Cluster framework. Disk
groups, managed by either Solstice DiskSuite or VERITAS Volume Manager,
are first created on the multihost disk. Then, they are registered as Sun
Cluster disk device groups. A disk device group is a type of global device.
Multihost device groups are highly available. Disks are accessible through an
alternate path if the node currently mastering the device group fails. The
failure of the node mastering the device group does not affect access to the
device group except for the time required to perform the recovery and
consistency checks. During this time, all requests are blocked (transparently to
the application) until the system makes the device group available.
Data Services
The term data service is used to describe a third-party application that has
been configured to run on a cluster rather than on a single server. A data
service includes the application software and Sun Cluster 3.0 software that
starts, stops and monitors the application. Sun Cluster 3.0 supplies data
service methods that are used to control and monitor the application within
the cluster. These methods run under the control of the Resource Group
Manager (RGM), which uses them to start, stop, and monitor the application
on the cluster nodes. These methods, along with the cluster framework
software and multihost disks, enable applications to become highly available
data services. As highly available data services, they can prevent significant
application interruptions after any single failure within the cluster, regardless
of whether the failure is on a node, on an interface component or in the
application itself. The RGM also manages resources in the cluster, including
network resources (logical host names and shared addresses) and application
instances.
Related concepts:
v “High Availability in the Solaris Operating Environment” on page 189
v “High Availability with VERITAS Cluster Server” on page 195
Hardware Requirements
Software Requirements
While VERITAS Cluster Server does not require a volume manager, the use of
VERITAS Volume Manager is strongly recommended for ease of installation,
configuration and management.
Failover
When a failover occurs with VERITAS Cluster Server, users may or may not
see a disruption in service. This will be based on the type of connection
(stateful or stateless) that the client has with the application service. In
application environments with stateful connections (like DB2 UDB), users may
see a brief interruption in service and may need to reconnect after the failover
has completed. In application environments with stateless connections (like
NFS), users may see a brief delay in service but generally will not see a
disruption and will not need to log back on.
Shared Storage
When used with the VCS HA-DB2 Agent, Veritas Cluster Server requires
shared storage. Shared storage is storage that has a physical connection to
multiple nodes in the cluster. Disk devices resident on shared storage can
tolerate node failures since a physical path to the disk devices still exists
through one or more alternate cluster nodes.
Through the control of VERITAS Cluster Server, cluster nodes can access
shared storage through a logical construct called "disk groups". Disk groups
represent a collection of logically defined storage devices whose ownership
can be atomically migrated between nodes in a cluster. A disk group can only
be imported to a single node at any given time. For example, if Disk Group A
is imported to Node 1 and Node 1 fails, Disk Group A can be exported from
the failed node and imported to a new node in the cluster. VERITAS Cluster
Server can simultaneously control multiple disk groups within a single cluster.
Enterprise agents tend to focus on specific applications such as DB2 UDB. The
VCS HA-DB2 Agent can be considered an Enterprise Agent, and it interfaces
with VCS through the VCS Agent framework.
The lowest level object that is monitored is a resource, and there are various
resource types (for example, share and mount). Each resource must be configured into a
resource group, and VCS will bring all resources in a particular resource
group online and offline together. To bring a resource group online or offline,
VCS will invoke the start or stop methods for each of the resources in the
group. There are two types of resource groups: failover and parallel. A highly
available DB2 UDB configuration, regardless of whether it is partitioned or
not, will use failover resource groups.
Related concepts:
v “High Availability in the Solaris Operating Environment” on page 189
Read a syntax diagram from left to right, and from top to bottom, following
the horizontal line (the main path). If the line ends with an arrowhead, the
command syntax is continued, and the next line starts with an arrowhead. A
vertical bar marks the end of the command syntax.
A stack of parameters, with the first parameter displayed on the main path,
indicates that one of the parameters must be selected:
>>-COMMAND--+-required choice1-+-------------------------------><
            '-required choice2-'
A stack of parameters, with the first parameter displayed below the main
path, indicates that one of the parameters can be selected:
>>-COMMAND--+------------------+-------------------------------><
            +-optional_choice1-+
            '-optional_choice2-'
An arrow returning to the left, above the path, indicates that items can be
repeated in accordance with the following conventions:
v If the arrow is uninterrupted, the item can be repeated in a list with the
items separated by blank spaces:
v If the arrow contains a comma, the item can be repeated in a list with the
items separated by commas:
      .-,---------------------.
      V                       |
>>-COMMAND---repeatable_parameter-+----------------------------><
Items from parameter stacks can be repeated in accordance with the stack
conventions for required and optional parameters discussed previously. That
is, if an inner stack does not have a repeat arrow above it, but an outer
stack does, only one parameter from the inner stack can be chosen and
combined with any parameter from the outer stack, and that combination can
be repeated. For example, the following diagram shows that one could combine
parameter choice2a with parameter choice2, and then repeat that combination
again (choice2 plus choice2a):
[Syntax diagram: an outer repeatable stack in which parameter choice2 carries
an inner stack of parameter choice2a, parameter choice2b, and parameter
choice2c.]
If this parameter is not supplied, the system searches the current directory for
the command. If it cannot find the command, the system continues searching
for the command in all the directories on the paths listed in the .profile.
You can use the information contained in this book to identify an error or
problem, and to resolve the problem by using the appropriate recovery action.
This information can also be used to understand where messages are
generated and logged.
SQL messages, and the message text associated with SQLSTATE values, are
also accessible from the operating system command line. To access help for
these error messages, enter the following at the operating system command
prompt:
db2 ? SQLnnnnn
where nnnnn represents the message number. On UNIX based systems, the
use of double quotation mark delimiters is recommended; this will avoid
problems if there are single character file names in the directory:
db2 "? SQLnnnnn"
The message identifier accepted as a parameter for the db2 command is not
case sensitive, and the terminating letter is not required. Therefore, the
following commands will produce the same result:
db2 ? SQL0000N
db2 ? sql0000
db2 ? SQL0000n
You can also redirect the output to a file which can then be browsed.
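For example, using the message identifier from the commands above, the help text could be captured to a file for browsing (the file name is arbitrary):

```
db2 "? SQL0000N" > sql0000.txt
```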
Help can also be invoked from interactive input mode. To access this mode,
enter the following at the operating system command prompt:
db2
To get DB2 message help in this mode, type the following at the command
prompt (db2 =>):
? SQLnnnnn
System Commands
db2adutl - Work with TSM Archived Images
Allows users to query, extract, verify, and delete backup images, logs, and
load copy images saved using Tivoli Storage Manager (formerly ADSM).
Authorization:
None
Required connection:
None
Command syntax:
db2adutl ... [DATABASE database_name | DB database_name]
         [DBPARTITIONNUM db-partition-number] [PASSWORD password]
         [NODENAME node_name] [WITHOUT PROMPTING] [OWNER owner] [VERBOSE]
Command parameters:
QUERY
Queries the TSM server for DB2 objects.
EXTRACT
Copies DB2 objects from the TSM server to the current directory on
the local machine.
DELETE
Either deactivates backup objects or deletes log archives on the TSM
server.
VERIFY
Performs consistency checking on the backup copy that is on the
server.
NONINCREMENTAL
Include only non-incremental backup images.
INCREMENTAL
Include only incremental backup images.
DELTA
Include only incremental delta backup images.
LOADCOPY
Includes only load copy images.
LOGS Includes only log archive images.
BETWEEN sn1 AND sn2
Specifies that the logs with sequence numbers between sn1 and sn2
are to be used.
SHOW INACTIVE
Includes backup objects that have been deactivated.
TAKEN AT timestamp
Specifies a backup image by its time stamp.
KEEP n
Deactivates all objects of the specified type except for the most recent
n by time stamp.
OLDER THAN timestamp or n days
Specifies that objects with a time stamp earlier than timestamp or n
days will be deactivated.
DATABASE database_name
Considers only those objects associated with the specified database
name.
DBPARTITIONNUM db-partition-number
Considers only those objects created by the specified database
partition number.
PASSWORD password
Specifies the TSM client password for this node, if required. If a
database is specified and the password is not provided, the value
specified for the tsm_password database configuration parameter is
passed to TSM; otherwise, no password is used.
NODENAME node_name
Considers only those images associated with a specific TSM node
name.
WITHOUT PROMPTING
The user is not prompted for verification before objects are deleted.
OWNER owner
Considers only those objects created by the specified owner.
VERBOSE
Displays additional file information.
Examples:
The following is sample output from: db2 backup database rawsampl use tsm
Backup successful. The timestamp for this backup is : 19970929130942
db2adutl query
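A few further illustrative invocations, assembled from the parameters described above (rawsampl and the timestamp come from the backup example; the KEEP count is hypothetical):

```
db2adutl query database rawsampl
db2adutl verify full taken at 19970929130942 database rawsampl
db2adutl delete full keep 2 database rawsampl
```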
Usage Notes:
One parameter from each group below can be used to restrict which backup
image types are included in the operation:
Granularity:
v FULL - include only database backup images.
v TABLESPACE - include only table space backup images.
Cumulativeness:
v NONINCREMENTAL - include only non-incremental backup images.
v INCREMENTAL - include only incremental backup images.
v DELTA - include only incremental delta backup images.
Compatibilities:
This utility can be used to test the integrity of a backup image and to
determine whether or not the image can be restored. It can also be used to
display the meta-data stored in the backup header.
Authorization:
Anyone can access the utility, but users must have read permissions on image
backups in order to execute this utility against them.
Required connection:
None
Command syntax:
db2ckbkp [-a] [-c] [-d] [-h] [-H] [-l] [-n] [-o] filename [filename ...]
Command parameters:
-a Displays all available information.
-c Displays results of checkbits and checksums.
-d Displays information from the headers of DMS table space data pages.
-h Displays media header information including the name and path of
the image expected by the restore utility.
-H Displays the same information as -h but only reads the 4K media
header information from the beginning of the image. It does not
validate the image.
Notes:
1. If the complete backup consists of multiple objects, the validation
will only succeed if db2ckbkp is used to validate all of the objects
at the same time.
2. When checking multiple parts of an image, the first backup image
object (.001) must be specified first.
Examples:
db2ckbkp SAMPLE.0.krodger.NODE0000.CATN0000.19990817150714.*
[1] Buffers processed: ##
[2] Buffers processed: ##
[3] Buffers processed: ##
Image Verification Complete - successful.
db2ckbkp -h SAMPLE2.0.krodger.NODE0000.CATN0000.19990818122909.001
=====================
MEDIA HEADER REACHED:
=====================
Server Database Name -- SAMPLE2
Server Database Alias -- SAMPLE2
Client Database Alias -- SAMPLE2
Timestamp -- 19990818122909
Database Partition Number -- 0
Instance -- krodger
Sequence Number -- 1
Release ID -- 900
Database Seed -- 65E0B395
DB Comment's Codepage (Volume) -- 0
DB Comment (Volume) --
DB Comment's Codepage (System) -- 0
DB Comment (System) --
Authentication Value -- 255
Backup Mode -- 0
Backup Type -- 0
Backup Gran. -- 0
Status Flags -- 11
System Cats inc -- 1
Catalog Database Partition No. -- 0
DB Codeset -- ISO8859-1
DB Territory --
Backup Buffer Size -- 4194304
Number of Sessions -- 1
Platform -- 0
Usage notes:
1. If a backup image was created using multiple sessions, db2ckbkp can
examine all of the files at the same time. Users are responsible for
ensuring that the session with sequence number 001 is the first file
specified.
2. This utility can also verify backup images that are stored on tape (except
images that were created with a variable block size). This is done by
preparing the tape as for a restore operation, and then invoking the utility,
specifying the tape device name. For example, on UNIX based systems:
db2ckbkp -h /dev/rmt0
and on Windows:
db2ckbkp -d \\.\tape1
3. If the image is on a tape device, specify the tape device path. You will be
prompted to ensure it is mounted, unless option ’-n’ is given. If there are
multiple tapes, the first tape must be mounted on the first device path
given. (That is the tape with sequence 001 in the header).
The default when a tape device is detected is to prompt the user to mount
the tape. The user has the choice on the prompt. Here is the prompt and
options (where the device specified is on device path /dev/rmt0):
Please mount the source media on device /dev/rmt0.
Continue(c), terminate only this device(d), or abort this tool(t)?
(c/d/t)
The user will be prompted for each device specified, and when the device
reaches the end of tape.
Related reference:
v “db2adutl - Work with TSM Archived Images” on page 209
db2ckrst - Check Incremental Restore Image Sequence
Queries the database history and generates a list of timestamps for the backup
images that are required for an incremental restore. A simplified restore syntax
for a manual incremental restore is also generated.
Authorization:
None
Required connection:
None
Command syntax:
I IM
-h
-u
-n K tablespace name -?
Command parameters:
-d database-name
Specifies the alias name for the database that will be restored.
-t timestamp
Specifies the timestamp for a backup image that will be incrementally
restored.
-r Specifies the type of restore that will be executed. The default is
database.
Note: If tablespace is chosen and no table space names are given, the
utility looks into the history entry of the specified image and
uses the table space names listed to do the restore.
-n tablespace name
Specifies the name of one or more table spaces that will be restored.
Examples:
db2ckrst -d mr -t 20001015193455 -r database
db2ckrst -d mr -t 20001015193455 -r tablespace
db2ckrst -d mr -t 20001015193455 -r tablespace -n tbsp1 tbsp2
Usage notes:
The database history must exist in order for this utility to be used. If the
database history does not exist, specify the HISTORY FILE option in the
RESTORE command before using this utility.
If the FORCE option of the PRUNE HISTORY command is used, you will be
able to delete entries that are required for recovery from the most recent, full
database backup image. The default operation of the PRUNE HISTORY
command prevents required entries from being deleted. It is recommended
that you do not use the FORCE option of the PRUNE HISTORY command.
This utility should not be used as a replacement for keeping records of your
backups.
db2flsn - Find Log Sequence Number
Returns the name of the file that contains the log record identified by a
specified log sequence number (LSN).
Authorization:
None
Command syntax:
db2flsn [-q] input_LSN
Command parameters:
-q Specifies that only the log file name be printed. No error or warning
messages will be printed, and status can only be determined through
the return code. Valid error codes are:
v -100 Invalid input
v -101 Cannot open LFH file
v -102 Failed to read LFH file
v -103 Invalid LFH
v -104 Database is not recoverable
v -105 LSN too big
v -500 Logical error.
Examples:
db2flsn 000000BF0030
Given LSN is contained in log file S0000002.LOG
db2flsn -q 000000BF0030
S0000002.LOG
db2flsn 000000BE0030
Warning: the result is based on the last known log file size.
The last known log file size is 23 4K pages starting from log extent 2.
db2flsn -q 000000BE0030
S0000001.LOG
Usage notes:
The log header control file SQLOGCTL.LFH must reside in the current directory.
Since this file is located in the database directory, the tool can be run from the
database directory, or the control file can be copied to the directory from
which the tool will be run.
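As a sketch of the second approach (the database directory path shown is hypothetical and depends on the instance and database):

```
cp ~/db2inst1/NODE0000/SQL00001/SQLOGCTL.LFH .
db2flsn -q 000000BF0030
```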
The tool uses the logfilsiz database configuration parameter. DB2 records the
three most recent values for this parameter, and the first log file that is created
with each logfilsiz value; this enables the tool to work correctly when logfilsiz
changes. If the specified LSN predates the earliest recorded value of logfilsiz,
the tool uses this value, and returns a warning. The tool can be used with
database managers prior to UDB Version 5.2; in this case, the warning is
returned even with a correct result (obtained if the value of logfilsiz remains
unchanged).
Authorization:
Required connection:
None
Command syntax:
db2inidb database_alias AS {SNAPSHOT | STANDBY | MIRROR}
         [RELOCATE USING configFile]
Command parameters:
database_alias
Specifies the alias of the database to be initialized.
SNAPSHOT
Specifies that the mirrored database will be initialized as a clone of
the primary database.
STANDBY
Specifies that the database will be placed in roll forward pending
state.
Note: New logs from the primary database can be fetched and
applied to the standby database. The standby database can then
be used in place of the primary database if it goes down.
MIRROR
Specifies that the mirrored database is to be used as a backup image
which can be used to restore the primary database.
RELOCATE USING configFile
Specifies that the database files are to be relocated based on the
information listed in the configuration file.
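For illustration (the database alias and configuration file name are hypothetical), the three modes described above might be invoked as:

```
db2inidb mydb AS SNAPSHOT
db2inidb mydb AS STANDBY
db2inidb mydb AS MIRROR RELOCATE USING relocate.cfg
```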
Related reference:
v “db2relocatedb - Relocate Database” in the Command Reference
db2mscs - Set up Windows Failover Utility
Authorization:
The user must be logged on to a domain user account that belongs to the
Administrators group of each machine in the MSCS cluster.
Command syntax:
db2mscs [-f:input_file] [-u:instance_name]
Command parameters:
-f:input_file
Specifies the DB2MSCS.CFG input file to be used by the MSCS utility. If
this parameter is not specified, the DB2MSCS utility reads the
DB2MSCS.CFG file that is in the current directory.
-u:instance_name
This option allows you to undo the db2mscs operation and revert the
instance back to the non-MSCS instance specified by instance_name.
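For example (the input file name matches the default described above; the instance name in the undo example is hypothetical):

```
db2mscs -f:DB2MSCS.CFG
db2mscs -u:db2inst1
```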
Usage notes:
Two example configuration files can be found in the CFG subdirectory under
the DB2 install directory. The first, DB2MSCS.EE, is an example for
single-partition database environments. The second, DB2MSCS.EEE, is an
example for partitioned database environments.
NETNAME_VALUE
The value for the Network Name resource. This parameter must be
specified if the NETNAME_NAME parameter is specified.
NETNAME_DEPENDENCY
The name for the IP resource that the Network Name resource
depends on. Each Network Name resource must have a dependency
on an IP Address resource. This parameter is optional. If it is not
specified, the Network Name resource has a dependency on the first
IP resource in the group.
SERVICE_DISPLAY_NAME
The display name of the Generic Service resource. Specify this
parameter if you want to create a Generic Service resource.
SERVICE_NAME
The service name of the Generic Service resource. This parameter
must be specified if the SERVICE_DISPLAY_NAME parameter is
specified.
SERVICE_STARTUP
Optional startup parameter for the Generic Service resource.
DISK_NAME
The name of the physical disk resource to be moved to the current
group. Specify as many disk resources as you need. The disk
resources must already exist. When the DB2MSCS utility configures
the DB2 instance for failover support, the instance directory is copied
to the first MSCS disk in the group. To specify a different MSCS disk
for the instance directory, use the INSTPROF_DISK parameter. The
disk name used should be entered exactly as seen in Cluster
Administrator.
INSTPROF_DISK
An optional parameter to specify an MSCS disk to contain the DB2
instance directory. If this parameter is not specified the DB2MSCS
utility uses the first disk that belongs to the same group.
INSTPROF_PATH
An optional parameter to specify the exact path where the instance
directory will be copied. This parameter MUST be specified when
using IPSHAdisks, a ServerRAID Netfinity disk resource (for example,
INSTPROF_PATH=p:\db2profs). INSTPROF_PATH takes precedence
over INSTPROF_DISK if both are specified.
TARGET_DRVMAP_DISK
An optional parameter to specify the target MSCS disk for database
drive mapping for a multi-partition database system. This parameter
specifies the disk the database will be created on, by mapping it from
the drive specified on the CREATE DATABASE command.
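For illustration, a DB2MSCS.CFG fragment using some of the parameters described above might look like the following. All names and values here are hypothetical; see the DB2MSCS.EE and DB2MSCS.EEE samples in the CFG subdirectory for complete files.

```
NETNAME_NAME = mscsnet
NETNAME_VALUE = mscsnet
NETNAME_DEPENDENCY = mscsip
SERVICE_DISPLAY_NAME = DB2 MSCS Service
DISK_NAME = Disk E:
INSTPROF_DISK = Disk E:
```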
CLP Commands
ARCHIVE LOG
Closes and truncates the active log file for a recoverable database. If user exit
is enabled, an archive request is issued.
Authorization:
One of the following: sysadm, sysctrl, sysmaint, or dbadm
Required connection:
None. This command establishes a database connection for the duration of the
command.
Command syntax:
ARCHIVE LOG FOR {DATABASE | DB} database-alias
   [USER username [USING password]]
   [ON ALL DBPARTITIONNUMS
      [EXCEPT {DBPARTITIONNUM | DBPARTITIONNUMS}
         (db-partition-number [TO db-partition-number], ...)] |
    ON {DBPARTITIONNUM | DBPARTITIONNUMS}
       (db-partition-number [TO db-partition-number], ...)]
Command parameters:
DATABASE database-alias
Specifies the alias of the database whose active log is to be archived.
USER username
Identifies the user name under which a connection will be attempted.
USING password
Specifies the password to authenticate the user name.
ON ALL DBPARTITIONNUMS
Specifies that the command should be issued on all database
partitions in the db2nodes.cfg file. This is the default if a database
partition number clause is not specified.
EXCEPT
Specifies that the command should be issued on all database
partitions in the db2nodes.cfg file, except those specified in the
database partition number list.
ON DBPARTITIONNUM/ON DBPARTITIONNUMS
Specifies that the logs should be archived for the specified database
on a set of database partitions.
db-partition-number
Specifies a database partition number in the database partition
number list.
TO db-partition-number
Used when specifying a range of database partitions for which the
logs should be archived. All database partitions from the first
db-partition-number up to and including the second are included in
the range.
Usage notes:
This command can be used to collect a complete set of log files up to a known
point. The log files can then be used to update a standby database.
This command can only be executed when the invoking application or shell
does not have a database connection to the specified database. This prevents a
user from executing the command with uncommitted transactions. As such,
the ARCHIVE LOG command will not forcibly commit the user’s incomplete
transactions. If the invoking application or shell already has a database
connection to the specified database, the command will terminate and return
an error. If another application has transactions in progress with the specified
database when this command is executed, there will be a slight performance
degradation since the command flushes the log buffer to disk. Any other
transactions attempting to write log records to the buffer will have to wait
until the flush is complete.
Using this command will use up a portion of the active log space due to the
truncation of the active log file. The active log space will resume its previous
size when the truncated log becomes inactive. Frequent use of this command
may drastically reduce the amount of the active log space available for
transactions.
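For example, to close and truncate the active log file for a database cataloged under the alias sample (the alias is illustrative):

```
db2 archive log for database sample
```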
Compatibilities:
For compatibility with versions earlier than Version 8, the keyword
NODE can be substituted for DBPARTITIONNUM, and NODES for
DBPARTITIONNUMS.
INITIALIZE TAPE
Initializes tapes for backup and restore operations to streaming tape
devices.
Authorization:
None
Required connection:
None
Command syntax:
INITIALIZE TAPE [ON device] [USING blksize]
Command parameters:
ON device
Specifies a valid tape device name. The default value is \\.\TAPE0.
USING blksize
Specifies the block size for the device, in bytes. The device is
initialized to use the block size specified, if the value is within the
supported range of block sizes for the device.
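For example, to initialize the default tape device to use a 2 KB block size (the value shown is illustrative):

```
db2 initialize tape using 2048
```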
Related reference:
v “BACKUP DATABASE” on page 72
v “RESTORE DATABASE” on page 95
v “REWIND TAPE” on page 232
v “SET TAPE POSITION” on page 233
LIST HISTORY
Lists entries in the history file. The history file contains a record of recovery
and administrative events. Recovery events include full database and table
space level backup, incremental backup, restore, and rollforward operations.
Additional logged events include create, alter, drop, or rename table space,
reorganize table, drop table, and load.
Authorization:
None
Required connection:
Command syntax:
LIST HISTORY [BACKUP | ROLLFORWARD | DROPPED TABLE | LOAD |
              CREATE TABLESPACE | ALTER TABLESPACE |
              RENAME TABLESPACE | REORG]
   {ALL | SINCE timestamp | CONTAINING {schema.object_name | object_name}}
   FOR [DATABASE | DB] database-alias
Command parameters:
HISTORY
Lists all events that are currently logged in the history file.
BACKUP
Lists backup and restore operations.
ROLLFORWARD
Lists rollforward operations.
DROPPED TABLE
Lists dropped table records.
LOAD
Lists load operations.
CREATE TABLESPACE
Lists table space create and drop operations.
RENAME TABLESPACE
Lists table space renaming operations.
REORG
Lists reorganization operations.
ALTER TABLESPACE
Lists alter table space operations.
ALL
Lists all entries of the specified type in the history file.
SINCE timestamp
A complete time stamp (format yyyymmddhhnnss), or an initial prefix
(minimum yyyy) can be specified. All entries with time stamps equal
to or greater than the time stamp provided are listed.
CONTAINING schema.object_name
This qualified name uniquely identifies a table.
CONTAINING object_name
This unqualified name uniquely identifies a table space.
FOR DATABASE database-alias
Used to identify the database whose recovery history file is to be
listed.
Examples:
db2 list history since 19980201 for sample
db2 list history backup containing userspace1 for sample
db2 list history dropped table all for db sample
Usage notes:
The possible values of the Type field in the generated report are:
Backup types:
F - Offline
N - Online
I - Incremental offline
O - Incremental online
D - Delta offline
E - Delta online
Rollforward types:
E - End of logs
P - Point in time
Load types:
I - Insert
R - Replace
Alter table space types:
C - Add containers
R - Rebalance
Quiesce types:
S - Quiesce share
U - Quiesce update
X - Quiesce exclusive
Z - Quiesce reset
PRUNE HISTORY/LOGFILE
Used to delete entries from the recovery history file, or to delete log files from
the active log file path. Deleting entries from the recovery history file may be
necessary if the file becomes excessively large and the retention period is
high. Deleting log files from the active log file path may be necessary if logs
are being archived manually (rather than through a user exit program).
Authorization:
One of the following: sysadm, sysctrl, sysmaint, or dbadm
Required connection:
Database
Command syntax:
PRUNE {HISTORY timestamp [WITH FORCE OPTION] | LOGFILE PRIOR TO log-file-name}
Command parameters:
HISTORY timestamp
Identifies a range of entries in the recovery history file that will be
deleted. A complete time stamp (in the form yyyymmddhhmmss), or an
initial prefix (minimum yyyy) can be specified. All entries with time
stamps equal to or less than the time stamp provided are deleted from
the recovery history file.
WITH FORCE OPTION
Specifies that the entries will be pruned according to the time stamp
specified, even if some entries from the most recent restore set are
deleted from the file. A restore set is the most recent full database
backup including any restores of that backup image. If this parameter
is not specified, all entries from the backup image forward will be
maintained in the history.
LOGFILE PRIOR TO log-file-name
Specifies a string for a log file name, for example S0000100.LOG. All
log files prior to (but not including) the specified log file will be
deleted. The LOGRETAIN database configuration parameter must be
set to RECOVERY or CAPTURE.
Examples:
To remove the entries for all restores, loads, table space backups, and full
database backups taken before and including December 1, 1994 from the
recovery history file, enter:
db2 prune history 199412
Usage notes:
Pruning backup entries from the history file causes related file backups on
DB2 Data Links Manager servers to be deleted.
REWIND TAPE
Rewinds tapes for backup and restore operations to streaming tape devices.
Authorization:
None
Required connection:
None
Command syntax:
REWIND TAPE [ON device]
Command parameters:
ON device
Specifies a valid tape device name. The default value is \\.\TAPE0.
Related reference:
v “INITIALIZE TAPE” on page 227
v “SET TAPE POSITION” on page 233
SET TAPE POSITION
Sets the positions of tapes for backup and restore operations to streaming
tape devices.
Authorization:
None
Required connection:
None
Command syntax:
SET TAPE POSITION [ON device] TO position
Command parameters:
ON device
Specifies a valid tape device name. The default value is \\.\TAPE0.
TO position
Specifies the mark at which the tape is to be positioned. DB2 for
Windows NT/2000 writes a tape mark after every backup image. A
value of 1 specifies the first position, 2 specifies the second position,
and so on. If the tape is positioned at tape mark 1, for example,
archive 2 is positioned to be restored.
Related reference:
v “INITIALIZE TAPE” on page 227
v “REWIND TAPE” on page 232
UPDATE HISTORY FILE
Updates the location, device type, or comment in a history file entry.
Authorization:
One of the following: sysadm, sysctrl, sysmaint, or dbadm
Required connection:
Database
Command syntax:
UPDATE HISTORY FOR object-part WITH LOCATION new-location DEVICE TYPE device-type
Command parameters:
FOR object-part
Specifies the identifier for the backup or copy image. It is a time
stamp with an optional sequence number from 001 to 999.
LOCATION new-location
Specifies the new physical location of a backup image. The
interpretation of this parameter depends on the device type.
Examples:
To update the history file entry for a full database backup taken on April 13,
1997 at 10:00 a.m., enter:
db2 update history for 19970413100000001 with
location /backup/dbbackup.1 device type d
Related reference:
v “PRUNE HISTORY/LOGFILE” on page 231
db2ArchiveLog - Archive Active Log
Closes and truncates the active log file for a recoverable database. If user exit
is enabled, an archive request is issued.
C API syntax:
/* File: db2ApiDf.h */
/* API: Archive Active Log */
/* ... */
SQL_API_RC SQL_API_FN
db2ArchiveLog (
db2Uint32 version,
void * pDB2ArchiveLogStruct,
struct sqlca * pSqlca);
typedef struct
{
char * piDatabaseAlias;
char * piUserName;
char * piPassword;
db2Uint16 iAllNodeFlag;
db2Uint16 iNumNodes;
SQL_PDB_NODE_TYPE * piNodeList;
db2Uint32 iOptions;
} db2ArchiveLogStruct;
/* ... */
Generic API syntax:
/* File: db2ApiDf.h */
/* API: Archive Active Log */
/* ... */
SQL_API_RC SQL_API_FN
db2gArchiveLog (
db2Uint32 version,
void * pDB2gArchiveLogStruct,
struct sqlca * pSqlca);
typedef struct
{
db2Uint32 iAliasLen;
db2Uint32 iUserNameLen;
db2Uint32 iPasswordLen;
char * piDatabaseAlias;
char * piUserName;
char * piPassword;
db2Uint16 iAllNodeFlag;
db2Uint16 iNumNodes;
SQL_PDB_NODE_TYPE * piNodeList;
db2Uint32 iOptions;
} db2gArchiveLogStruct;
/* ... */
API parameters:
version
Input. Specifies the version and release level of the variable passed in
as the second parameter, pDB2ArchiveLogStruct.
pDB2ArchiveLogStruct
Input. A pointer to the db2ArchiveLogStruct structure.
pSqlca
Output. A pointer to the sqlca structure.
iAliasLen
Input. A 4-byte unsigned integer representing the length in bytes of
the database alias.
iUserNameLen
Input. A 4-byte unsigned integer representing the length in bytes of
the user name. Set to zero if no user name is used.
iPasswordLen
Input. A 4-byte unsigned integer representing the length in bytes of
the password. Set to zero if no password is used.
piDatabaseAlias
Input. A string containing the database alias (as cataloged in the
system database directory) of the database for which the active log is
to be archived.
piUserName
Input. A string containing the user name to be used when attempting
a connection.
piPassword
Input. A string containing the password to be used when attempting a
connection.
iAllNodeFlag
Input. MPP only. Flag indicating whether the operation should apply
to all nodes listed in the db2nodes.cfg file. Valid values are:
DB2ARCHIVELOG_ALL_NODES
Apply to all nodes (piNodeList should be NULL). This is the
default value.
DB2ARCHIVELOG_NODE_LIST
Apply to all nodes specified in a node list that is passed in
piNodeList.
DB2ARCHIVELOG_ALL_EXCEPT
Apply to all nodes except those specified in a node list that is
passed in piNodeList.
iNumNodes
Input. MPP only. Specifies the number of nodes in the piNodeList
array.
piNodeList
Input. MPP only. A pointer to an array of node numbers against
which to apply the archive log operation.
iOptions
Input. Reserved for future use.
Usage notes:
This API can be used to collect a complete set of log files up to a known
point. The log files can then be used to update a standby database.
This API causes the database to lose a portion of its LSN space, thereby
hastening the exhaustion of valid LSNs.
db2HistoryCloseScan - Close History File Scan
Ends a history file scan and frees DB2 resources required for the scan. This
API must be preceded by a successful call to db2HistoryOpenScan.
Authorization:
None
Required connection:
API include file:
db2ApiDf.h
C API syntax:
/* File: db2ApiDf.h */
/* API: db2HistoryCloseScan */
/* ... */
SQL_API_RC SQL_API_FN
db2HistoryCloseScan (
db2Uint32 version,
void *piHandle,
struct sqlca *pSqlca);
/* ... */
Generic API syntax:
/* File: db2ApiDf.h */
/* API: db2GenHistoryCloseScan */
/* ... */
SQL_API_RC SQL_API_FN
db2GenHistoryCloseScan (
db2Uint32 version,
void *piHandle,
struct sqlca *pSqlca);
/* ... */
API parameters:
version
Input. Specifies the version and release level of the second parameter,
piHandle.
piHandle
Input. Specifies a pointer to the handle for scan access that was
returned by db2HistoryOpenScan.
pSqlca
Output. A pointer to the sqlca structure.
Usage notes:
For a detailed description of the use of the history file APIs, see
db2HistoryOpenScan.
Related reference:
v “db2Prune - Prune History File” on page 253
v “db2HistoryUpdate - Update History File” on page 250
v “db2HistoryOpenScan - Open History File Scan” on page 245
v “db2HistoryGetEntry - Get Next History File Entry” on page 242
v “SQLCA” in the Administrative API Reference
Related samples:
v “dbrecov.sqc -- How to recover a database (C)”
v “dbrecov.sqC -- How to recover a database (C++)”
db2HistoryGetEntry - Get Next History File Entry
Gets the next entry from the history file. This API must be preceded by a
successful call to db2HistoryOpenScan.
Authorization:
None
Required connection:
API include file:
db2ApiDf.h
C API syntax:
/* File: db2ApiDf.h */
/* API: db2HistoryGetEntry */
/* ... */
SQL_API_RC SQL_API_FN
db2HistoryGetEntry (
db2Uint32 version,
void *pDB2HistoryGetEntryStruct,
struct sqlca *pSqlca);
typedef struct
{
db2Uint16 iHandle;
db2Uint16 iCallerAction;
struct db2HistData *pioHistData;
} db2HistoryGetEntryStruct;
/* ... */
Generic API syntax:
/* File: db2ApiDf.h */
/* API: db2GenHistoryGetEntry */
/* ... */
SQL_API_RC SQL_API_FN
db2GenHistoryGetEntry (
db2Uint32 version,
void *pDB2GenHistoryGetEntryStruct,
struct sqlca *pSqlca);
typedef struct
{
db2Uint16 iHandle;
db2Uint16 iCallerAction;
struct db2HistData *pioHistData;
} db2GenHistoryGetEntryStruct;
/* ... */
API parameters:
version
Input. Specifies the version and release level of the structure passed in
as the second parameter, pDB2HistoryGetEntryStruct.
pDB2HistoryGetEntryStruct
Input. A pointer to the db2HistoryGetEntryStruct structure.
pSqlca
Output. A pointer to the sqlca structure.
iHandle
Input. Contains the handle for scan access that was returned by
db2HistoryOpenScan.
iCallerAction
Input. Specifies the type of action to be taken. Valid values (defined in
db2ApiDf) are:
DB2HISTORY_GET_ENTRY
Get the next entry, but without any command data.
DB2HISTORY_GET_DDL
Get only the command data from the previous fetch.
DB2HISTORY_GET_ALL
Get the next entry, including all data.
pioHistData
Input. A pointer to the db2HistData structure.
Usage notes:
The records that are returned will have been selected using the values
specified on the call to db2HistoryOpenScan.
For a detailed description of the use of the history file APIs, see
db2HistoryOpenScan.
Related reference:
v “db2Prune - Prune History File” on page 253
v “db2HistoryUpdate - Update History File” on page 250
v “db2HistoryOpenScan - Open History File Scan” on page 245
v “db2HistoryCloseScan - Close History File Scan” on page 241
v “SQLCA” in the Administrative API Reference
v “db2HistData” on page 267
Related samples:
v “dbrecov.sqc -- How to recover a database (C)”
v “dbrecov.sqC -- How to recover a database (C++)”
db2HistoryOpenScan - Open History File Scan
Starts a history file scan.
Authorization:
None
Required connection:
Instance. It is not necessary to call sqleatin before calling this API. If the
database is cataloged as remote, an instance attachment to the remote node is
established.
API include file:
db2ApiDf.h
C API syntax:
/* File: db2ApiDf.h */
/* API: db2HistoryOpenScan */
/* ... */
SQL_API_RC SQL_API_FN
db2HistoryOpenScan (
db2Uint32 version,
void *pDB2HistoryOpenStruct,
struct sqlca *pSqlca);
typedef struct
{
char *piDatabaseAlias;
char *piTimestamp;
char *piObjectName;
db2Uint32 oNumRows;
db2Uint16 iCallerAction;
db2Uint16 oHandle;
} db2HistoryOpenStruct;
/* ... */
Generic API syntax:
/* File: db2ApiDf.h */
/* API: db2GenHistoryOpenScan */
/* ... */
SQL_API_RC SQL_API_FN
db2GenHistoryOpenScan (
db2Uint32 version,
void *pDB2GenHistoryOpenStruct,
struct sqlca *pSqlca);
typedef struct
{
char *piDatabaseAlias;
char *piTimestamp;
char *piObjectName;
db2Uint32 oNumRows;
db2Uint16 iCallerAction;
db2Uint16 oHandle;
} db2GenHistoryOpenStruct;
/* ... */
API parameters:
version
Input. Specifies the version and release level of the structure passed in
as the second parameter, pDB2HistoryOpenStruct.
pDB2HistoryOpenStruct
Input. A pointer to the db2HistoryOpenStruct structure.
pSqlca
Output. A pointer to the sqlca structure.
piDatabaseAlias
Input. A pointer to a string containing the database alias.
piTimestamp
Input. A pointer to a string specifying the time stamp to be used for
selecting records. Records whose time stamp is equal to or greater
than this value are selected. Setting this parameter to NULL, or
pointing to zero, prevents the filtering of entries using a time stamp.
piObjectName
Input. A pointer to a string specifying the object name to be used for
selecting records. The object may be a table or a table space. If it is a
table, the fully qualified table name must be provided. Setting this
parameter to NULL, or pointing to zero, prevents the filtering of
entries using the object name.
oNumRows
Output. Upon return from the API, this parameter contains the
number of matching history file entries.
iCallerAction
Input. Specifies the type of action to be taken. Valid values (defined in
db2ApiDf) are:
DB2HISTORY_LIST_HISTORY
Lists all events that are currently logged in the history file.
DB2HISTORY_LIST_BACKUP
Lists backup and restore operations.
DB2HISTORY_LIST_ROLLFORWARD
Lists rollforward operations.
DB2HISTORY_LIST_DROPPED_TABLE
Lists dropped table records. The DDL field associated with an
entry is not returned. To retrieve the DDL information for an
entry, db2HistoryGetEntry must be called with a caller action
of DB2HISTORY_GET_DDL immediately after the entry is fetched.
DB2HISTORY_LIST_LOAD
Lists load operations.
DB2HISTORY_LIST_CRT_TABLESPACE
Lists table space create and drop operations.
DB2HISTORY_LIST_REN_TABLESPACE
Lists table space renaming operations.
DB2HISTORY_LIST_ALT_TABLESPACE
Lists alter table space operations. The DDL field associated
with an entry is not returned. To retrieve the DDL information
for an entry, db2HistoryGetEntry must be called with a caller
action of DB2HISTORY_GET_DDL immediately after the entry is
fetched.
DB2HISTORY_LIST_REORG
Lists REORGANIZE TABLE operations. This value is not
currently supported.
oHandle
Output. Upon return from the API, this parameter contains the handle
for scan access. It is subsequently used in db2HistoryGetEntry and
db2HistoryCloseScan.
Usage notes:
The combination of time stamp, object name and caller action can be used to
filter records. Only records that pass all specified filters are returned.
The filtering effect of the object name depends on the value specified:
v Specifying a table will return records for load operations, because this is the
only information for tables in the history file.
v Specifying a table space will return records for backup, restore, and load
operations for the table space.
To list every entry in the history file, a typical application will perform the
following steps:
1. Call db2HistoryOpenScan, which will return oNumRows.
2. Allocate a db2HistData structure with space for n oTablespace fields, where
n is an arbitrary number.
3. Set the iDB2NumTablespace field of the db2HistData structure to n.
4. In a loop, perform the following:
v Call db2HistoryGetEntry to fetch from the history file.
v If db2HistoryGetEntry returns an SQLCODE of SQL_RC_OK, use the sqld
field of the db2HistData structure to determine the number of table space
entries returned.
v If db2HistoryGetEntry returns an SQLCODE of
SQLUH_SQLUHINFO_VARS_WARNING, not enough space has been allocated for
all of the table spaces that DB2 is trying to return; free and reallocate the
db2HistData structure with enough space for oDB2UsedTablespace table
space entries, and set iDB2NumTablespace to oDB2UsedTablespace.
v If db2HistoryGetEntry returns an SQLCODE of SQLE_RC_NOMORE, all
history file entries have been retrieved.
v Any other SQLCODE indicates a problem.
5. When all of the information has been fetched, call db2HistoryCloseScan to
free the resources allocated by the call to db2HistoryOpenScan.
Related reference:
Related samples:
v “dbrecov.sqc -- How to recover a database (C)”
v “dbrecov.sqC -- How to recover a database (C++)”
db2HistoryUpdate - Update History File
Updates the location, device type, or comment in a history file entry.
Authorization:
One of the following: sysadm, sysctrl, sysmaint, or dbadm
Required connection:
Database. To update entries in the history file for a database other than the
default database, a connection to the database must be established before
calling this API.
API include file:
db2ApiDf.h
C API syntax:
/* File: db2ApiDf.h */
/* API: db2HistoryUpdate */
/* ... */
SQL_API_RC SQL_API_FN
db2HistoryUpdate (
db2Uint32 version,
void *pDB2HistoryUpdateStruct,
struct sqlca *pSqlca);
typedef struct
{
char *piNewLocation;
char *piNewDeviceType;
char *piNewComment;
db2Uint32 iEID;
} db2HistoryUpdateStruct;
/* ... */
Generic API syntax:
/* File: db2ApiDf.h */
/* API: db2GenHistoryUpdate */
/* ... */
SQL_API_RC SQL_API_FN
db2GenHistoryUpdate (
db2Uint32 version,
void *pDB2GenHistoryUpdateStruct,
struct sqlca *pSqlca);
typedef struct
{
char *piNewLocation;
char *piNewDeviceType;
char *piNewComment;
db2Uint32 iEID;
} db2GenHistoryUpdateStruct;
/* ... */
API parameters:
version
Input. Specifies the version and release level of the structure passed in
as the second parameter, pDB2HistoryUpdateStruct.
pDB2HistoryUpdateStruct
Input. A pointer to the db2HistoryUpdateStruct structure.
pSqlca
Output. A pointer to the sqlca structure.
piNewLocation
Input. A pointer to a string specifying a new location for the backup,
restore, or load copy image. Setting this parameter to NULL, or
pointing to zero, leaves the value unchanged.
piNewDeviceType
Input. A pointer to a string specifying a new device type for storing
the backup, restore, or load copy image. Setting this parameter to
NULL, or pointing to zero, leaves the value unchanged.
piNewComment
Input. A pointer to a string specifying a new comment to describe the
entry. Setting this parameter to NULL, or pointing to zero, leaves the
comment unchanged.
iEID
Input. A unique identifier that can be used to update a specific entry
in the history file.
Usage notes:
This is an update function, and all information prior to the change is replaced
and cannot be recreated. These changes are not logged.
The history file is used for recording purposes only. It is not used directly by
the restore or the rollforward functions. During a restore operation, the
location of the backup image can be specified, and the history file is useful for
tracking this location. The information can subsequently be provided to the
backup utility. Similarly, if the location of a load copy image is moved, the
rollforward utility must be provided with the new location and type of
storage media.
Related reference:
v “db2Rollforward - Rollforward Database” on page 145
v “db2Prune - Prune History File” on page 253
v “db2HistoryOpenScan - Open History File Scan” on page 245
v “db2HistoryGetEntry - Get Next History File Entry” on page 242
v “db2HistoryCloseScan - Close History File Scan” on page 241
v “SQLCA” in the Administrative API Reference
v “db2Backup - Backup database” on page 77
Related samples:
v “dbrecov.sqc -- How to recover a database (C)”
v “dbrecov.sqC -- How to recover a database (C++)”
db2Prune - Prune History File
Deletes entries from the history file or log files from the active log path.
Authorization:
Required connection:
Database. To delete entries from the history file for any database other than
the default database, a connection to the database must be established before
calling this API.
API include file:
db2ApiDf.h
C API syntax:
/* File: db2ApiDf.h */
/* API: db2Prune */
/* ... */
SQL_API_RC SQL_API_FN
db2Prune (
db2Uint32 version,
void *pDB2PruneStruct,
struct sqlca *pSqlca);
typedef struct
{
char *piString;
db2Uint32 iEID;
db2Uint32 iCallerAction;
db2Uint32 iOptions;
} db2PruneStruct;
/* ... */
Generic API syntax:
/* File: db2ApiDf.h */
/* API: db2GenPrune */
/* ... */
SQL_API_RC SQL_API_FN
db2GenPrune (
db2Uint32 version,
void *pDB2GenPruneStruct,
struct sqlca *pSqlca);
typedef struct
{
db2Uint32 iStringLen;
char *piString;
db2Uint32 iEID;
db2Uint32 iCallerAction;
db2Uint32 iOptions;
} db2GenPruneStruct;
/* ... */
API parameters:
version
Input. Specifies the version and release level of the structure passed in
as the second parameter, pDB2PruneStruct.
pDB2PruneStruct
Input. A pointer to the db2PruneStruct structure.
pSqlca
Output. A pointer to the sqlca structure.
iStringLen
Input. Specifies the length in bytes of piString.
piString
Input. A pointer to a string specifying a time stamp or a log sequence
number (LSN). The time stamp or part of a time stamp (minimum
yyyy, or year) is used to select records for deletion. All entries equal to
or less than the time stamp will be deleted. A valid time stamp must
be provided; there is no default behavior for a NULL parameter.
This parameter can also be used to pass an LSN, so that inactive logs
can be pruned.
iEID
Input. Specifies a unique identifier that can be used to prune a single
entry from the history file.
iCallerAction
Input. Specifies the type of action to be taken. Valid values (defined in
db2ApiDf) are:
DB2PRUNE_ACTION_HISTORY
Remove history file entries.
DB2PRUNE_ACTION_LOG
Remove log files from the active log path.
iOptions
Input. Valid values (defined in db2ApiDf) are:
DB2PRUNE_OPTION_FORCE
Force the removal of the last backup.
DB2PRUNE_OPTION_LSNSTRING
Specify that the value of piString is an LSN, used when a
caller action of DB2PRUNE_ACTION_LOG is specified.
Usage notes:
Pruning the history file does not delete the actual backup or load files. The
user must manually delete these files to free up the space they consume on
storage media.
CAUTION:
If the latest full database backup is deleted from the media (in addition to
being pruned from the history file), the user must ensure that all table
spaces, including the catalog table space and the user table spaces, are
backed up. Failure to do so may result in a database that cannot be
recovered, or the loss of some portion of the user data in the database.
Related reference:
v “db2HistoryUpdate - Update History File” on page 250
v “db2HistoryOpenScan - Open History File Scan” on page 245
v “db2HistoryGetEntry - Get Next History File Entry” on page 242
v “db2HistoryCloseScan - Close History File Scan” on page 241
v “SQLCA” in the Administrative API Reference
Related samples:
v “dbrecov.sqc -- How to recover a database (C)”
v “dbrecov.sqC -- How to recover a database (C++)”
db2ReadLogNoConn - Read Log Without a Database Connection
Extracts log records from the DB2 UDB database logs and queries the Log
Manager for current log state information. Prior to using this API, use
db2ReadLogNoConnInit to allocate the memory that is passed as an input
parameter to this API. After using this API, use db2ReadLogNoConnTerm to
deallocate the memory.
Authorization:
Required connection:
None
API include file:
db2ApiDf.h
C API syntax:
/* File: db2ApiDf.h */
/* API: db2ReadLogNoConn */
/* ... */
SQL_API_RC SQL_API_FN
db2ReadLogNoConn (
db2Uint32 versionNumber,
void *pDB2ReadLogNoConnStruct,
struct sqlca *pSqlca);
/* ... */
API parameters:
versionNumber
Input. Specifies the version and release level of the structure passed as
the second parameter, pDB2ReadLogNoConnStruct.
pDB2ReadLogNoConnStruct
Input. A pointer to the db2ReadLogNoConnStruct structure.
pSqlca
Output. A pointer to the sqlca structure.
iCallerAction
Input. Specifies the action to be performed. Valid values are:
DB2READLOG_READ
Read the database log from the starting log sequence to the
ending log sequence number and return log records within
this range.
DB2READLOG_READ_SINGLE
Read a single log record (propagatable or not) identified by
the starting log sequence number.
DB2READLOG_QUERY
Query the database log. Results of the query will be sent back
via the db2ReadLogNoConnInfoStruct structure.
piStartLSN
Input. The starting log sequence number specifies the starting relative
byte address for the reading of the log. This value must be the start of
an actual log record.
piEndLSN
Input. The ending log sequence number specifies the ending relative
byte address for the reading of the log. This value must be greater
than piStartLsn, and does not need to be the end of an actual log
record.
poLogBuffer
Output. The buffer where all the propagatable log records read within
the specified range are stored sequentially. This buffer must be large
enough to hold a single log record. As a guideline, this buffer should
be a minimum of 32 bytes. Its maximum size is dependent on the size
of the requested range. Each log record in the buffer is prefixed by a
six byte log sequence number (LSN), representing the LSN of the
following log record.
iLogBufferSize
Input. Specifies the size, in bytes, of the log buffer.
piReadLogMemPtr
Input. Block of memory of size iReadLogMemoryLimit that was
allocated in the initialization call. This memory contains persistent
data that the API requires at each invocation. This memory block
must not be reallocated or altered in any way by the caller.
poReadLogInfo
Output. A pointer to the db2ReadLogNoConnInfoStruct structure.
firstAvailableLSN
First available LSN in available logs.
firstReadLSN
First LSN read on this call.
nextStartLSN
Next readable LSN.
logRecsWritten
Number of log records written to the log buffer field, poLogBuffer.
logBytesWritten
Number of bytes written to the log buffer field, poLogBuffer.
lastLogFullyRead
Number indicating the last log file that was read to completion.
Usage notes:
When requesting a sequential read of the log, the API requires a log sequence
number (LSN) range and the allocated memory. The API will return a
sequence of log records based on the filter option specified when initialized
and the LSN range. When requesting a query, the read log information
structure will contain a valid starting LSN, to be used on a read call. The
value used as the ending LSN on a read can be one of the following:
v A value greater than the caller-specified startLSN.
v FFFF FFFF FFFF, which is interpreted by the asynchronous log reader as the
end of the available logs.
The propagatable log records read within the starting and ending LSN range
are returned in the log buffer. A log record does not contain its own LSN; the
LSN precedes each record in the buffer. Descriptions of the
various DB2 UDB log records returned by db2ReadLogNoConn can be found
in the DB2 UDB Log Records section.
After the initial read, to read the next sequential log record, use the
nextStartLSN value returned in db2ReadLogNoConnInfoStruct. Resubmit the
call with this new starting LSN and a valid ending LSN; the next block of
records is then read. An sqlca code of SQLU_RLOG_READ_TO_CURRENT
means the log reader has read to the end of the available log files.
Related reference:
v “db2ReadLogNoConnInit - Initialize Read Log Without a Database
Connection” on page 260
db2ReadLogNoConnInit - Initialize Read Log Without a Database Connection
Required connection:
Database
db2ApiDf.h
C API syntax:
/* File: db2ApiDf.h */
/* API: db2ReadLogNoConnInit */
/* ... */
SQL_API_RC SQL_API_FN
db2ReadLogNoConnInit (
db2Uint32 versionNumber,
void * pDB2ReadLogNoConnInitStruct,
struct sqlca * pSqlca);
API parameters:
version
Input. Specifies the version and release level of the structure passed as
the second parameter pDB2ReadLogNoConnInitStruct.
pParamStruct
Input. A pointer to the db2ReadLogNoConnInitStruct structure.
pSqlca
Output. A pointer to the sqlca structure.
iFilterOption
Input. Specifies the level of log record filtering to be used when
reading the log records. Valid values are:
DB2READLOG_FILTER_OFF
Read all log records in the given LSN range.
DB2READLOG_FILTER_ON
Reads only log records in the given LSN range marked as
propagatable. This is the traditional behavior of the
asynchronous log read API.
piLogFilePath
Input. Path where the log files to be read are located.
piOverflowLogPath
Input. Alternate path where the log files to be read may be located.
iRetrieveLogs
Input. Option specifying if userexit should be invoked to retrieve log
files that cannot be found in either the log file path or the overflow
log path. Valid values are:
DB2READLOG_RETRIEVE_OFF
Userexit should not be invoked to retrieve missing log files.
DB2READLOG_RETRIEVE_LOGPATH
Userexit should be invoked to retrieve missing log files into
the specified log file path.
DB2READLOG_RETRIEVE_OVERFLOW
Userexit should be invoked to retrieve missing log files into
the specified overflow log path.
piDatabaseName
Input. Name of the database that owns the recovery logs being read.
This is required if the retrieve option above is specified.
piNodeName
Input. Name of the node that owns the recovery logs being read. This
is required if the retrieve option above is specified.
iReadLogMemoryLimit
Input. Maximum number of bytes that the API may allocate internally.
poReadLogMemPtr
Output. API-allocated block of memory of size iReadLogMemoryLimit.
This memory contains persistent data that the API requires at each
invocation. This memory block must not be reallocated or altered in
any way by the caller.
Usage notes:
Related reference:
v “db2ReadLogNoConn - Read Log Without a Database Connection” on page
256
v “db2ReadLogNoConnTerm - Terminate Read Log Without a Database
Connection” on page 262
db2ReadLogNoConnTerm - Terminate Read Log Without a Database Connection
Authorization:
Required connection:
Database
db2ApiDf.h
C API syntax:
/* File: db2ApiDf.h */
/* API: db2ReadLogNoConnTerm */
/* ... */
SQL_API_RC SQL_API_FN
db2ReadLogNoConnTerm (
db2Uint32 versionNumber,
void * pDB2ReadLogNoConnTermStruct,
struct sqlca * pSqlca);
API parameters:
version
Input. Specifies the version and release level of the structure passed as
the second parameter pDB2ReadLogNoConnTermStruct.
pParamStruct
Input. A pointer to the db2ReadLogNoConnTermStruct structure.
pSqlca
Output. A pointer to the sqlca structure.
poReadLogMemPtr
Output. Pointer to the block of memory allocated in the initialization
call. This pointer will be freed and set to NULL.
Related reference:
v “db2ReadLogNoConn - Read Log Without a Database Connection” on page
256
v “db2ReadLogNoConnInit - Initialize Read Log Without a Database
Connection” on page 260
db2ReadLog - Asynchronous Read Log
Extracts log records from the DB2 UDB database logs and queries the Log
Manager for current log state information. This API can only be used with recoverable
databases. A database is recoverable if it is configured with logretain set to
RECOVERY or userexit set to ON.
Authorization:
Required connection:
Database
db2ApiDf.h
C API syntax:
/* File: db2ApiDf.h */
/* API: db2ReadLog */
/* ... */
SQL_API_RC SQL_API_FN
db2ReadLog (
db2Uint32 versionNumber,
void *pDB2ReadLogStruct,
struct sqlca *pSqlca);
API parameters:
versionNumber
Input. Specifies the version and release level of the structure passed as
the second parameter, pDB2ReadLogStruct.
pDB2ReadLogStruct
Input. A pointer to the db2ReadLogStruct.
pSqlca
Output. A pointer to the sqlca structure.
iCallerAction
Input. Specifies the action to be performed.
DB2READLOG_READ
Read the database log from the starting log sequence to the
ending log sequence number and return log records within
this range.
DB2READLOG_READ_SINGLE
Read a single log record (propagatable or not) identified by
the starting log sequence number.
DB2READLOG_QUERY
Query the database log. Results of the query will be sent back
via the db2ReadLogInfoStruct structure.
piStartLsn
Input. The starting log sequence number specifies the starting relative
byte address for the reading of the log. This value must be the start of
an actual log record.
piEndLsn
Input. The ending log sequence number specifies the ending relative
byte address for the reading of the log. This value must be greater
than startLsn, and does not need to be the end of an actual log record.
poLogBuffer
Output. The buffer where all the propagatable log records read within
the specified range are stored sequentially. This buffer must be large
enough to hold a single log record. As a guideline, this buffer should
be a minimum of 32 bytes. Its maximum size is dependent on the size
of the requested range. Each log record in the buffer is prefixed by a
six byte log sequence number (LSN), representing the LSN of the
following log record.
iLogBufferSize
Input. Specifies the size, in bytes, of the log buffer.
iFilterOption
Input. Specifies the level of log record filtering to be used when
reading the log records. Valid values are:
DB2READLOG_FILTER_OFF
Read all log records in the given LSN range.
DB2READLOG_FILTER_ON
Reads only log records in the given LSN range marked as
propagatable. This is the traditional behavior of the
asynchronous log read API.
poReadLogInfo
Output. A structure detailing information regarding the call and the
database log.
Usage notes:
If the requested action is to read the log, the caller will provide a log sequence
number range and a buffer to hold the log records. This API reads the log
sequentially, bounded by the requested LSN range, and returns log records
associated with tables having the DATA CAPTURE option CHANGES, and a
db2ReadLogInfoStruct structure with the current active log information. If the
requested action is a query, the API returns a db2ReadLogInfoStruct structure
with the current active log information.
To use the Asynchronous Log Reader, first query the database log for a valid
starting LSN. Following the query call, the read log information structure
(db2ReadLogInfoStruct) will contain a valid starting LSN (in the initialLSN
member), to be used on a read call. The value used as the ending LSN on a
read can be one of the following:
v A value greater than initialLSN
v FFFF FFFF FFFF, which is interpreted by the asynchronous log reader as the
end of the current log.
The propagatable log records read within the starting and ending LSN range
are returned in the log buffer. A log record does not contain its own LSN; the
LSN is placed in the buffer immediately before the record itself. Descriptions
of the various DB2 log records returned by db2ReadLog can be found in the
DB2 UDB Log Records section.
To read the next sequential log record after the initial read, use the
nextStartLSN field returned in the db2ReadLogStruct structure. Resubmit the
call, with this new starting LSN and a valid ending LSN. The next block of
records is then read. An sqlca code of SQLU_RLOG_READ_TO_CURRENT
means that the log reader has read to the end of the current active log.
Related reference:
v “SQLCA” in the Administrative API Reference
Related samples:
v “dbrecov.sqc -- How to recover a database (C)”
db2HistData
Language syntax:
C Structure
/* File: db2ApiDf.h */
/* ... */
typedef SQL_STRUCTURE db2HistoryData
{
char ioHistDataID[8];
db2Char oObjectPart;
db2Char oEndTime;
db2Char oFirstLog;
db2Char oLastLog;
db2Char oID;
db2Char oTableQualifier;
db2Char oTableName;
db2Char oLocation;
db2Char oComment;
db2Char oCommandText;
SQLU_LSN oLastLSN;
db2HistoryEID oEID;
struct sqlca * poEventSQLCA;
db2Char * poTablespace;
db2Uint32 ioNumTablespaces;
char oOperation;
char oObject;
char oOptype;
char oStatus;
char oDeviceType;
} db2HistoryData;
Related reference:
v “db2HistoryGetEntry - Get Next History File Entry” on page 242
v “SQLCA” in the Administrative API Reference
SQLU-LSN
This union, used by the db2ReadLog API, contains the definition of the log
sequence number. A log sequence number (LSN) represents a relative byte
address within the database log. All log records are identified by this number.
It represents the log record’s byte offset from the beginning of the database
log.
Table 7. Fields in the SQLU-LSN Union
Field Name  Data Type                Description
lsnChar     Array of UNSIGNED CHAR   Specifies the 6-member character array
                                     log sequence number.
lsnWord     Array of UNSIGNED SHORT  Specifies the 3-member short array
                                     log sequence number.
Language syntax:
C Structure
Related reference:
v “db2ReadLog - Asynchronous Read Log” on page 263
** ALTER TABLE
** COMMIT
** DELETE
** INSERT
** ROLLBACK
**
** OUTPUT FILE: dbrecov.out (available in the online documentation)
**
** For detailed information about database backup and recovery, see the
** "Data Recovery and High Availability Guide and Reference". This manual
** will help you to determine which database and table space recovery methods
** are best suited to your business environment.
**
** For more information about the sample programs, see the README file.
**
** For more information about programming in C, see the
** "Programming in C and C++" section of the "Application Development Guide".
**
** For more information about building C applications, see the
** section for your compiler in the "Building Applications" chapter
** for your platform in the "Application Development Guide".
**
** For more information about SQL, see the "SQL Reference".
**
** For more information on DB2 APIs, see the Administrative API Reference.
**
** For the latest information on programming, compiling, and running DB2
** applications, refer to the DB2 application development website at
** http://www.software.ibm.com/data/db2/udb/ad
****************************************************************************/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sqlenv.h>
#include <sqlutil.h>
#include <db2ApiDf.h>
#include "utilemb.h"
/* DbCreate will create a new database on the server with the server's
code page.
Use this function only if you want to restore a remote database.
This support function is being called by DbBackupAndRedirectedRestore()
and DbBackupRestoreAndRollforward(). */
int DbCreate(char *, char *);
strcpy(restoredDbAlias, dbAlias);
strcpy(redirectedRestoredDbAlias, "RRDB");
strcpy(rolledForwardDbAlias, "RFDB");
rc = DbBackupAndRestore(dbAlias,
restoredDbAlias,
user,
pswd,
serverWorkingPath);
rc = DbBackupAndRedirectedRestore(dbAlias,
redirectedRestoredDbAlias,
user,
pswd,
serverWorkingPath);
rc = DbBackupRestoreAndRollforward(dbAlias,
rolledForwardDbAlias,
user,
pswd,
serverWorkingPath);
rc = DbLogRecordsForCurrentConnectionRead(dbAlias,
user,
pswd,
serverWorkingPath);
rc = DbRecoveryHistoryFileRead(dbAlias);
rc = DbReadLogRecordsNoConn(dbAlias);
return 0;
} /* end main */
/* initialize cfgParameters */
/* SQLF_DBTN_LOGPATH is a token of the non-updatable database configuration
parameter 'logpath'; it is used to get the server log path */
cfgParameters[0].flags = 0;
cfgParameters[0].token = SQLF_DBTN_LOGPATH;
cfgParameters[0].ptrvalue =
(char *)malloc((SQL_PATH_SZ + 1) * sizeof(char));
/* initialize cfgStruct */
cfgStruct.numItems = 1;
cfgStruct.paramArray = cfgParameters;
cfgStruct.flags = db2CfgDatabase;
cfgStruct.dbname = dbAlias;
strcpy(serverLogPath, cfgParameters[0].ptrvalue);
free(cfgParameters[0].ptrvalue);
return 0;
} /* ServerWorkingPathGet */
printf("\n Create '%s' empty database with the same code set as "
       "'%s' database.\n", newDbAlias, existingDbAlias);
/* initialize cfgParameters */
cfgParameters[0].flags = 0;
cfgParameters[0].token = SQLF_DBTN_TERRITORY;
cfgParameters[0].ptrvalue = (char *)malloc(10 * sizeof(char));
memset(cfgParameters[0].ptrvalue, '\0', 10);
cfgParameters[1].flags = 0;
cfgParameters[1].token = SQLF_DBTN_CODESET;
cfgParameters[1].ptrvalue = (char *)malloc(20 * sizeof(char));
memset(cfgParameters[1].ptrvalue, '\0', 20);
/* initialize cfgStruct */
cfgStruct.numItems = 2;
cfgStruct.paramArray = cfgParameters;
cfgStruct.flags = db2CfgDatabase;
cfgStruct.dbname = existingDbAlias;
strcpy(dbDescriptor.sqldbdid, SQLE_DBDESC_2);
dbDescriptor.sqldbccp = 0;
dbDescriptor.sqldbcss = SQL_CS_NONE;
strcpy(dbDescriptor.sqldbcmt, "");
dbDescriptor.sqldbsgp = 0;
dbDescriptor.sqldbnsg = 10;
dbDescriptor.sqltsext = -1;
dbDescriptor.sqlcatts = NULL;
dbDescriptor.sqlusrts = NULL;
dbDescriptor.sqltmpts = NULL;
/* create database */
sqlecrea(dbName,
dbLocalAlias,
dbPath,
&dbDescriptor,
&countryInfo,
'\0',
NULL,
&sqlca);
DB2_API_CHECK("Database -- Create");
return 0;
} /* DbCreate */
return 0;
} /* DbDrop */
db2BackupStruct backupStruct;
db2TablespaceStruct tablespaceStruct;
db2MediaListStruct mediaListStruct;
db2Uint32 backupImageSize;
db2RestoreStruct restoreStruct;
db2TablespaceStruct rtablespaceStruct;
db2MediaListStruct rmediaListStruct;
printf("\n**************************************\n");
printf("*** BACK UP AND RESTORE A DATABASE ***\n");
printf("**************************************\n");
printf("\nUSE THE DB2 APIs:\n");
printf(" db2CfgSet -- Set Configuration\n");
printf(" db2Backup -- Backup Database\n");
printf(" db2Restore -- Restore Database\n");
printf("TO BACK UP AND RESTORE A DATABASE.\n");
/* initialize cfgParameters */
/* SQLF_DBTN_LOG_RETAIN is a token of the updatable database configuration
parameter 'logretain'; it is used to update the database configuration
file */
cfgParameters[0].flags = 0;
cfgParameters[0].token = SQLF_DBTN_LOG_RETAIN;
cfgParameters[0].ptrvalue = (char *)&logretain;
/* initialize cfgStruct */
cfgStruct.numItems = 1;
cfgStruct.paramArray = cfgParameters;
cfgStruct.flags = db2CfgDatabase | db2CfgDelayed;
cfgStruct.dbname = dbAlias;
/*******************************/
/* BACK UP THE DATABASE */
/*******************************/
printf("\n Backing up the '%s' database...\n", dbAlias);
tablespaceStruct.tablespaces = NULL;
tablespaceStruct.numTablespaces = 0;
mediaListStruct.locations = &serverWorkingPath;
mediaListStruct.numLocations = 1;
mediaListStruct.locationType = SQLU_LOCAL_MEDIA;
backupStruct.piDBAlias = dbAlias;
backupStruct.piTablespaceList = &tablespaceStruct;
backupStruct.piMediaList = &mediaListStruct;
backupStruct.piUsername = user;
backupStruct.piPassword = pswd;
backupStruct.piVendorOptions = NULL;
backupStruct.iVendorOptionsSize = 0;
backupStruct.iCallerAction = DB2BACKUP_BACKUP;
backupStruct.iBufferSize = 16; /* 16 x 4KB */
backupStruct.iNumBuffers = 1;
backupStruct.iParallelism = 1;
backupStruct.iOptions = DB2BACKUP_OFFLINE | DB2BACKUP_DB;
DB2_API_CHECK("Database -- Backup");
while (sqlca.sqlcode != 0)
{
/* continue the backup operation */
backupStruct.iCallerAction = DB2BACKUP_CONTINUE;
DB2_API_CHECK("Database -- Backup");
}
/******************************/
/* RESTORE THE DATABASE */
/******************************/
strcpy(restoreTimestamp, backupStruct.oTimestamp);
rtablespaceStruct.tablespaces = NULL;
rtablespaceStruct.numTablespaces = 0;
rmediaListStruct.locations = &serverWorkingPath;
rmediaListStruct.numLocations = 1;
rmediaListStruct.locationType = SQLU_LOCAL_MEDIA;
restoreStruct.piSourceDBAlias = dbAlias;
restoreStruct.piTargetDBAlias = restoredDbAlias;
restoreStruct.piTimestamp = restoreTimestamp;
restoreStruct.piTargetDBPath = NULL;
restoreStruct.piReportFile = NULL;
restoreStruct.piTablespaceList = &rtablespaceStruct;
restoreStruct.piMediaList = &rmediaListStruct;
restoreStruct.piUsername = user;
restoreStruct.piPassword = pswd;
restoreStruct.piNewLogPath = NULL;
restoreStruct.piVendorOptions = NULL;
restoreStruct.iVendorOptionsSize = 0;
restoreStruct.iParallelism = 1;
restoreStruct.iBufferSize = 1024; /* 1024 x 4KB */
restoreStruct.iNumBuffers = 1;
restoreStruct.iCallerAction = DB2RESTORE_RESTORE;
restoreStruct.iOptions = DB2RESTORE_OFFLINE | DB2RESTORE_DB |
DB2RESTORE_NODATALINK | DB2RESTORE_NOROLLFWD;
/* The API db2Restore is used to restore a database that has been backed
up using the API db2Backup. */
db2Restore (db2Version810, &restoreStruct, &sqlca);
while (sqlca.sqlcode != 0)
{
/* continue the restore operation */
printf("\n Continuing the restore operation...\n");
restoreStruct.iCallerAction = DB2RESTORE_CONTINUE;
return 0;
} /* DbBackupAndRestore */
db2BackupStruct backupStruct;
db2TablespaceStruct tablespaceStruct;
db2MediaListStruct mediaListStruct;
db2Uint32 backupImageSize;
db2RestoreStruct restoreStruct;
db2TablespaceStruct rtablespaceStruct;
db2MediaListStruct rmediaListStruct;
printf("\n**************************\n");
printf("*** REDIRECTED RESTORE ***\n");
printf("**************************\n");
printf("\nUSE THE DB2 APIs:\n");
printf(" db2CfgSet -- Update Configuration\n");
printf(" db2Backup -- Backup Database\n");
printf(" sqlecrea -- Create Database\n");
printf(" db2Restore -- Restore Database\n");
printf(" sqlbmtsq -- Tablespace Query\n");
printf(" sqlbtcq -- Tablespace Container Query\n");
printf(" sqlbstsc -- Set Tablespace Containers\n");
printf(" sqlefmem -- Free Memory\n");
printf(" sqledrpd -- Drop Database\n");
printf("TO BACK UP AND DO A REDIRECTED RESTORE OF A DATABASE.\n");
/* initialize cfgParameters */
/* SQLF_DBTN_LOG_RETAIN is a token of the updatable database configuration
parameter 'logretain'; it is used to update the database configuration
file */
cfgParameters[0].flags = 0;
cfgParameters[0].token = SQLF_DBTN_LOG_RETAIN;
cfgParameters[0].ptrvalue = (char *)&logretain;
/* initialize cfgStruct */
cfgStruct.numItems = 1;
cfgStruct.paramArray = cfgParameters;
cfgStruct.flags = db2CfgDatabase | db2CfgDelayed;
cfgStruct.dbname = dbAlias;
/*******************************/
/* BACK UP THE DATABASE */
/*******************************/
printf("\n Backing up the '%s' database...\n", dbAlias);
tablespaceStruct.tablespaces = NULL;
tablespaceStruct.numTablespaces = 0;
mediaListStruct.locations = &serverWorkingPath;
mediaListStruct.numLocations = 1;
mediaListStruct.locationType = SQLU_LOCAL_MEDIA;
backupStruct.piDBAlias = dbAlias;
backupStruct.piTablespaceList = &tablespaceStruct;
backupStruct.piMediaList = &mediaListStruct;
backupStruct.piUsername = user;
backupStruct.piPassword = pswd;
backupStruct.piVendorOptions = NULL;
backupStruct.iVendorOptionsSize = 0;
backupStruct.iCallerAction = DB2BACKUP_BACKUP;
backupStruct.iBufferSize = 16; /* 16 x 4KB */
backupStruct.iNumBuffers = 1;
backupStruct.iParallelism = 1;
backupStruct.iOptions = DB2BACKUP_OFFLINE | DB2BACKUP_DB;
DB2_API_CHECK("Database -- Backup");
while (sqlca.sqlcode != 0)
{
/* continue the backup operation */
backupStruct.iCallerAction = DB2BACKUP_CONTINUE;
/*
rc = DbCreate(dbAlias, restoredDbAlias);
if (rc != 0)
{
return rc;
}
*/
/******************************/
/* RESTORE THE DATABASE */
/******************************/
strcpy(restoreTimestamp, backupStruct.oTimestamp);
rtablespaceStruct.tablespaces = NULL;
rtablespaceStruct.numTablespaces = 0;
rmediaListStruct.locations = &serverWorkingPath;
rmediaListStruct.numLocations = 1;
rmediaListStruct.locationType = SQLU_LOCAL_MEDIA;
restoreStruct.piSourceDBAlias = dbAlias;
restoreStruct.piTargetDBAlias = restoredDbAlias;
restoreStruct.piTimestamp = restoreTimestamp;
restoreStruct.piTargetDBPath = NULL;
restoreStruct.piReportFile = NULL;
restoreStruct.piTablespaceList = &rtablespaceStruct;
restoreStruct.piMediaList = &rmediaListStruct;
restoreStruct.piUsername = user;
restoreStruct.piPassword = pswd;
restoreStruct.piNewLogPath = NULL;
restoreStruct.piVendorOptions = NULL;
restoreStruct.iVendorOptionsSize = 0;
restoreStruct.iParallelism = 1;
restoreStruct.iBufferSize = 1024; /* 1024 x 4KB */
restoreStruct.iNumBuffers = 1;
restoreStruct.iOptions = DB2RESTORE_OFFLINE | DB2RESTORE_DB |
DB2RESTORE_NODATALINK | DB2RESTORE_NOROLLFWD;
restoreStruct.iCallerAction = DB2RESTORE_RESTORE_STORDEF;
/* The API db2Restore is used to restore a database that has been backed
up using the API db2Backup. */
db2Restore(db2Version810, &restoreStruct, &sqlca);
while (sqlca.sqlcode != 0)
{
/* continue the restore operation */
printf("\n Continuing the restore operation...\n");
if (sqlca.sqlcode == SQLUD_INACCESSABLE_CONTAINER)
{
/* redefine the table space container layout */
printf("\n Find and redefine inaccessible containers.\n");
rc = InaccessableContainersRedefine(serverWorkingPath);
if (rc != 0)
{
return rc;
}
}
restoreStruct.iCallerAction = DB2RESTORE_CONTINUE;
return 0;
} /* DbBackupAndRedirectedRestore */
/* The API sqlbmtsq provides a one-call interface to the table space query
data. The query data for all table spaces in the database is returned
in an array. */
sqlbmtsq(&sqlca,
&numTablespaces,
&ppTablespaces,
SQLB_RESERVED1,
SQLB_RESERVED2);
DB2_API_CHECK("tablespaces -- get");
sprintf(pContainers[contNb].name, "%s%sSQLT%04d.%d",
serverWorkingPath, pathSep,
ppTablespaces[tspNb]->id,
pContainers[contNb].id);
printf(" - new container name: %s\n",
pContainers[contNb].name);
break;
case SQLB_CONT_DISK:
case SQLB_CONT_FILE:
default:
printf(" Unknown container type.\n");
break;
}
}
}
/* The API sqlefmem is used here to free memory allocated by DB2 for use
with the API sqlbtcq (Tablespace Container Query). */
sqlefmem(&sqlca, pContainers);
DB2_API_CHECK("tablespace containers memory -- free");
}
/* The API sqlefmem is used here to free memory allocated by DB2 for
use with the API sqlbmtsq (Tablespace Query). */
sqlefmem(&sqlca, ppTablespaces);
DB2_API_CHECK("tablespaces memory -- free");
return 0;
} /* InaccessableContainersRedefine */
db2BackupStruct backupStruct;
db2TablespaceStruct tablespaceStruct;
db2MediaListStruct mediaListStruct;
db2Uint32 backupImageSize;
db2RestoreStruct restoreStruct;
db2TablespaceStruct rtablespaceStruct;
db2MediaListStruct rmediaListStruct;
db2RfwdInputStruct rfwdInput;
db2RfwdOutputStruct rfwdOutput;
db2RollforwardStruct rfwdStruct;
printf("\n****************************\n");
printf("*** ROLLFORWARD RECOVERY ***\n");
printf("****************************\n");
printf("\nUSE THE DB2 APIs:\n");
printf(" db2CfgSet -- Set Configuration\n");
printf(" db2Backup -- Backup Database\n");
printf(" sqlecrea -- Create Database\n");
printf(" db2Restore -- Restore Database\n");
printf(" db2Rollforward -- Rollforward Database\n");
printf(" sqledrpd -- Drop Database\n");
printf("TO BACK UP, RESTORE, AND ROLL A DATABASE FORWARD. \n");
/* initialize cfgParameters */
cfgParameters[0].flags = 0;
cfgParameters[0].token = SQLF_DBTN_LOG_RETAIN;
cfgParameters[0].ptrvalue = (char *)&logretain;
/* initialize cfgStruct */
cfgStruct.numItems = 1;
cfgStruct.paramArray = cfgParameters;
cfgStruct.flags = db2CfgDatabase | db2CfgDelayed;
cfgStruct.dbname = dbAlias;
tablespaceStruct.tablespaces = NULL;
tablespaceStruct.numTablespaces = 0;
mediaListStruct.locations = &serverWorkingPath;
mediaListStruct.numLocations = 1;
mediaListStruct.locationType = SQLU_LOCAL_MEDIA;
backupStruct.piDBAlias = dbAlias;
backupStruct.piTablespaceList = &tablespaceStruct;
backupStruct.piMediaList = &mediaListStruct;
backupStruct.piUsername = user;
backupStruct.piPassword = pswd;
backupStruct.piVendorOptions = NULL;
backupStruct.iVendorOptionsSize = 0;
backupStruct.iCallerAction = DB2BACKUP_BACKUP;
backupStruct.iBufferSize = 16; /* 16 x 4KB */
backupStruct.iNumBuffers = 1;
backupStruct.iParallelism = 1;
backupStruct.iOptions = DB2BACKUP_OFFLINE | DB2BACKUP_DB;
DB2_API_CHECK("Database -- Backup");
while (sqlca.sqlcode != 0)
{
/* continue the backup operation */
printf("\n Continuing the backup operation...\n");
backupStruct.iCallerAction = DB2BACKUP_CONTINUE;
DB2_API_CHECK("Database -- Backup");
}
/*
rc = DbCreate(dbAlias, rolledForwardDbAlias);
if (rc != 0)
{
return rc;
}
*/
/******************************/
/* RESTORE THE DATABASE */
/******************************/
strcpy(restoreTimestamp, backupStruct.oTimestamp);
rtablespaceStruct.tablespaces = NULL;
rtablespaceStruct.numTablespaces = 0;
rmediaListStruct.locations = &serverWorkingPath;
rmediaListStruct.numLocations = 1;
rmediaListStruct.locationType = SQLU_LOCAL_MEDIA;
restoreStruct.piSourceDBAlias = dbAlias;
restoreStruct.piTargetDBAlias = rolledForwardDbAlias;
restoreStruct.piTimestamp = restoreTimestamp;
restoreStruct.piTargetDBPath = NULL;
restoreStruct.piReportFile = NULL;
restoreStruct.piTablespaceList = &rtablespaceStruct;
restoreStruct.piMediaList = &rmediaListStruct;
restoreStruct.piUsername = user;
restoreStruct.piPassword = pswd;
restoreStruct.piNewLogPath = NULL;
restoreStruct.piVendorOptions = NULL;
restoreStruct.iVendorOptionsSize = 0;
restoreStruct.iParallelism = 1;
restoreStruct.iBufferSize = 1024; /* 1024 x 4KB */
restoreStruct.iNumBuffers = 1;
restoreStruct.iCallerAction = DB2RESTORE_RESTORE;
restoreStruct.iOptions = DB2RESTORE_OFFLINE | DB2RESTORE_DB |
DB2RESTORE_NODATALINK | DB2RESTORE_ROLLFWD;
/* The API db2Restore is used to restore a database that has been backed
up using the API db2Backup. */
db2Restore (db2Version810, &restoreStruct, &sqlca);
while (sqlca.sqlcode != 0)
{
/* continue the restore operation */
printf("\n Continuing the restore operation...\n");
restoreStruct.iCallerAction = DB2RESTORE_CONTINUE;
/******************************/
/* ROLLFORWARD RECOVERY */
/******************************/
rfwdInput.version = SQLUM_RFWD_VERSION;
rfwdInput.pDbAlias = rolledForwardDbAlias;
rfwdInput.CallerAction = SQLUM_ROLLFWD_STOP;
rfwdInput.pStopTime = SQLUM_INFINITY_TIMESTAMP;
rfwdInput.pUserName = user;
rfwdInput.pPassword = pswd;
rfwdInput.pOverflowLogPath = serverWorkingPath;
rfwdInput.NumChngLgOvrflw = 0;
rfwdInput.pChngLogOvrflw = NULL;
rfwdInput.ConnectMode = SQLUM_OFFLINE;
rfwdInput.pTablespaceList = NULL;
rfwdInput.AllNodeFlag = SQLURF_ALL_NODES;
rfwdInput.NumNodes = 0;
rfwdInput.pNodeList = NULL;
rfwdInput.pDroppedTblID = NULL;
rfwdInput.pExportDir = NULL;
rfwdInput.NumNodeInfo = 1;
rfwdInput.RollforwardFlags = 0;
rfwdOutput.pApplicationId = rollforwardAppId;
rfwdOutput.pNumReplies = &numReplies;
rfwdOutput.pNodeInfo = &nodeInfo;
rfwdStruct.roll_input = &rfwdInput;
rfwdStruct.roll_output = &rfwdOutput;
/* rollforward database */
/* The API db2Rollforward recovers a database by applying
transactions recorded in the database log files. */
db2Rollforward(db2Version810, &rfwdStruct, &sqlca);
DB2_API_CHECK("rollforward -- start");
return 0;
} /* DbBackupRestoreAndRollforward */
db2BackupStruct backupStruct;
db2TablespaceStruct tablespaceStruct;
db2MediaListStruct mediaListStruct;
db2Uint32 backupImageSize;
db2RestoreStruct restoreStruct;
db2TablespaceStruct rtablespaceStruct;
db2MediaListStruct rmediaListStruct;
SQLU_LSN startLSN;
SQLU_LSN endLSN;
char *logBuffer;
sqluint32 logBufferSize;
db2ReadLogInfoStruct readLogInfo;
db2ReadLogStruct readLogInput;
int i;
printf("\n*****************************\n");
printf("*** ASYNCHRONOUS READ LOG ***\n");
printf("*****************************\n");
printf("\nUSE THE DB2 APIs:\n");
printf(" db2CfgSet -- Set Configuration\n");
printf(" db2Backup -- Backup Database\n");
printf(" db2ReadLog -- Asynchronous Read Log\n");
printf("AND THE SQL STATEMENTS:\n");
printf(" CONNECT\n");
printf(" ALTER TABLE\n");
printf(" COMMIT\n");
printf(" INSERT\n");
printf(" DELETE\n");
printf(" ROLLBACK\n");
printf(" CONNECT RESET\n");
printf("TO READ LOG RECORDS FOR THE CURRENT CONNECTION.\n");
/* initialize cfgParameters */
cfgParameters[0].flags = 0;
cfgParameters[0].token = SQLF_DBTN_LOG_RETAIN;
cfgParameters[0].ptrvalue = (char *)&logretain;
/* enable LOGRETAIN */
logretain = SQLF_LOGRETAIN_RECOVERY;
/* initialize cfgStruct */
cfgStruct.numItems = 1;
cfgStruct.paramArray = cfgParameters;
cfgStruct.flags = db2CfgDatabase | db2CfgDelayed;
cfgStruct.dbname = dbAlias;
tablespaceStruct.tablespaces = NULL;
tablespaceStruct.numTablespaces = 0;
mediaListStruct.locations = &serverWorkingPath;
mediaListStruct.numLocations = 1;
mediaListStruct.locationType = SQLU_LOCAL_MEDIA;
backupStruct.piDBAlias = dbAlias;
backupStruct.piTablespaceList = &tablespaceStruct;
backupStruct.piMediaList = &mediaListStruct;
backupStruct.piUsername = user;
backupStruct.piPassword = pswd;
backupStruct.piVendorOptions = NULL;
backupStruct.iVendorOptionsSize = 0;
backupStruct.iCallerAction = DB2BACKUP_BACKUP;
backupStruct.iBufferSize = 16; /* 16 x 4KB */
backupStruct.iNumBuffers = 1;
backupStruct.iParallelism = 1;
backupStruct.iOptions = DB2BACKUP_OFFLINE | DB2BACKUP_DB;
DB2_API_CHECK("Database -- Backup");
while (sqlca.sqlcode != 0)
{
/* continue the backup operation */
printf("\n Continuing the backup operation...\n");
backupStruct.iCallerAction = DB2BACKUP_CONTINUE;
DB2_API_CHECK("Database -- Backup");
}
" COMMIT;\n"
" DELETE FROM emp_resume WHERE empno = '000777';\n"
" DELETE FROM emp_resume WHERE empno = '777777';\n"
" COMMIT;\n"
" DELETE FROM emp_resume WHERE empno = '000140';\n"
" ROLLBACK;\n"
" ALTER TABLE emp_resume DATA CAPTURE NONE;\n"
" COMMIT;\n");
logBuffer = NULL;
logBufferSize = 0;
readLogInput.iCallerAction = DB2READLOG_QUERY;
readLogInput.piStartLSN = NULL;
readLogInput.piEndLSN = NULL;
readLogInput.poLogBuffer = NULL;
readLogInput.iLogBufferSize = 0;
readLogInput.iFilterOption = DB2READLOG_FILTER_ON;
readLogInput.poReadLogInfo = &readLogInfo;
rc = db2ReadLog(db2Version810,
&readLogInput,
&sqlca);
logBufferSize = 64 * 1024;
logBuffer = (char *)malloc(logBufferSize);
rc = db2ReadLog(db2Version810,
&readLogInput,
&sqlca);
if (sqlca.sqlcode != SQLU_RLOG_READ_TO_CURRENT)
{
DB2_API_CHECK("database logs -- read");
}
else
{
if (readLogInfo.logRecsWritten == 0)
{
printf("\n Database log empty.\n");
}
}
return 0;
} /* DbLogRecordsForCurrentConnectionRead */
printf("\n*********************************\n");
printf("*** NO DB CONNECTION READ LOG ***\n");
printf("*********************************\n");
printf("\nUSE THE DB2 APIs:\n");
printf(" db2ReadLogNoConnInit -- Initialize No Db Connection Read Log\n");
printf(" db2ReadLogNoConn -- No Db Connection Read Log\n");
printf(" db2ReadLogNoConnTerm -- Terminate No Db Connection Read Log\n");
printf("TO READ LOG RECORDS FROM A DATABASE LOG DIRECTORY.\n");
cfgParameters[0].flags = 0;
cfgParameters[0].token = SQLF_DBTN_LOGPATH;
cfgParameters[0].ptrvalue =
(char *)malloc((SQL_PATH_SZ + 1) * sizeof(char));
/* Initialize cfgStruct */
cfgStruct.numItems = 1;
cfgStruct.paramArray = cfgParameters;
cfgStruct.flags = db2CfgDatabase;
cfgStruct.dbname = dbAlias;
strcpy(logPath, cfgParameters[0].ptrvalue);
free(cfgParameters[0].ptrvalue);
/* First we must allocate memory for the API's control blocks and log
buffer */
readLogMemory = (char*)malloc(readLogMemSize);
rc = db2ReadLogNoConnInit(db2Version810,
&readLogInit,
&sqlca);
if (sqlca.sqlcode != SQLU_RLOG_LSNS_REUSED)
{
DB2_API_CHECK("database logs no db conn -- initialization");
}
rc = db2ReadLogNoConn(db2Version810,
&readLogInput,
&sqlca);
if (sqlca.sqlcode != 0)
{
DB2_API_CHECK("database logs no db conn -- query");
}
readLogInput.iCallerAction = DB2READLOG_READ;
readLogInput.piStartLSN = &startLSN;
readLogInput.piEndLSN = &endLSN;
readLogInput.poLogBuffer = logBuffer;
readLogInput.iLogBufferSize = logBufferSize;
readLogInput.piReadLogMemPtr = readLogMemory;
readLogInput.poReadLogInfo = &readLogInfo;
rc = db2ReadLogNoConn(db2Version810,
&readLogInput,
&sqlca);
if (sqlca.sqlcode != SQLU_RLOG_READ_TO_CURRENT)
{
DB2_API_CHECK("database logs no db conn -- read");
}
else
{
if (readLogInfo.logRecsWritten == 0)
{
printf("\n Database log empty.\n");
}
}
readLogTerm.poReadLogMemPtr = &readLogMemory;
rc = db2ReadLogNoConnTerm(db2Version810,
&readLogTerm,
&sqlca);
if (sqlca.sqlcode != 0)
{
DB2_API_CHECK("database logs no db conn -- terminate");
}
return 0;
} /* DbReadLogRecordsNoConn */
/* initialize recordBuffer */
recordBuffer = logBuffer + sizeof(SQLU_LSN);
return 0;
} /* LogBufferDisplay */
/* determine logManagerLogRecordHeaderSize */
if (recordType == 0x0043)
{ /* compensation */
if (recordFlag & 0x0002)
{ /* propagatable */
logManagerLogRecordHeaderSize = 32;
}
else
{
logManagerLogRecordHeaderSize = 26;
}
}
else
{ /* non compensation */
logManagerLogRecordHeaderSize = 20;
}
switch (recordType)
{
case 0x008A:
case 0x0084:
case 0x0041:
recordDataBuffer = recordBuffer + logManagerLogRecordHeaderSize;
recordDataSize = recordSize - logManagerLogRecordHeaderSize;
rc = SimpleLogRecordDisplay(recordType,
recordFlag,
recordDataBuffer,
recordDataSize);
break;
case 0x004E:
case 0x0043:
recordHeaderBuffer = recordBuffer + logManagerLogRecordHeaderSize;
componentIdentifier = *(sqluint8 *)recordHeaderBuffer;
switch (componentIdentifier)
{
case 1:
recordHeaderSize = 6;
break;
default:
printf(" Unknown complex log record: %lu %c %u\n",
recordSize, recordType, componentIdentifier);
return 1;
}
recordDataBuffer = recordBuffer +
logManagerLogRecordHeaderSize +
recordHeaderSize;
recordDataSize = recordSize -
logManagerLogRecordHeaderSize -
recordHeaderSize;
rc = ComplexLogRecordDisplay(recordType,
recordFlag,
recordHeaderBuffer,
recordHeaderSize,
componentIdentifier,
recordDataBuffer,
recordDataSize);
break;
default:
printf(" Unknown log record: %lu \"%c\"\n",
recordSize, (char)recordType);
break;
}
return 0;
} /* LogRecordDisplay */
switch (recordType)
{
case 138:
printf("\n Record type: Local pending list\n");
timeTransactionCommited = *(sqluint32 *)(recordDataBuffer);
authIdLen = *(sqluint16 *)(recordDataBuffer + 4);
memcpy(authId, (char *)(recordDataBuffer + 6), authIdLen);
authId[authIdLen] = '\0';
printf(" %s: %lu\n",
"UTC transaction committed (in seconds since 01/01/70)",
timeTransactionCommited);
printf(" authorization ID of the application: %s\n", authId);
break;
case 132:
printf("\n Record type: Normal commit\n");
timeTransactionCommited = *(sqluint32 *)(recordDataBuffer);
authIdLen = (sqluint16) (recordDataSize - 4);
memcpy(authId, (char *)(recordDataBuffer + 4), authIdLen);
authId[authIdLen] = '\0';
printf(" %s: %lu\n",
"UTC transaction committed (in seconds since 01/01/70)",
timeTransactionCommited);
printf(" authorization ID of the application: %s\n", authId);
break;
case 65:
printf("\n Record type: Normal abort\n");
authIdLen = (sqluint16) (recordDataSize);
memcpy(authId, (char *)(recordDataBuffer), authIdLen);
authId[authIdLen] = '\0';
printf(" authorization ID of the application: %s\n", authId);
break;
default:
printf(" Unknown simple log record: %d %lu\n",
recordType, recordDataSize);
break;
}
return 0;
} /* SimpleLogRecordDisplay */
switch ((char)recordType)
{
case ’N’:
printf("\n Record type: Normal\n");
break;
case ’C’:
printf("\n Record type: Compensation\n");
break;
default:
printf("\n Unknown complex log record type: %c\n", recordType);
break;
}
switch (componentIdentifier)
{
case 1:
printf(" component ID: DMS log record\n");
break;
default:
printf(" unknown component ID: %d\n", componentIdentifier);
break;
}
switch (functionIdentifier)
{
/* ... (excerpt resumes inside the case that displays an updated record) */
printf(" oldRID: %lu\n", oldRID);
printf(" old subrecord length: %u\n", oldSubRecordLen);
printf(" old subrecord offset: %u\n", oldSubRecordOffset);
oldSubRecordBuffer = recordDataBuffer + 12;
rc = LogSubRecordDisplay(oldSubRecordBuffer, oldSubRecordLen);
printf(" newRID: %lu\n", newRID);
printf(" new subrecord length: %u\n", newSubRecordLen);
printf(" new subrecord offset: %u\n", newSubRecordOffset);
newSubRecordBuffer = recordDataBuffer +
12 +
oldSubRecordLen +
recordHeaderSize +
12;
rc = LogSubRecordDisplay(newSubRecordBuffer, newSubRecordLen);
break;
case 124:
printf(" function ID: Alter Table Attribute\n");
alterBitMask = *(sqluint32 *)(recordDataBuffer + 2);
alterBitValues = *(sqluint32 *)(recordDataBuffer + 6);
if (alterBitMask & 0x00000001)
{
/* Alter the value of the 'propagation' attribute: */
printf(" Propagation attribute is changed to: ");
if (alterBitValues & 0x00000001)
{
printf("ON\n");
}
else
{
printf("OFF\n");
}
}
if (alterBitMask & 0x00000002)
{
/* Alter the value of the 'pending' attribute: */
printf(" Pending attribute is changed to: ");
if (alterBitValues & 0x00000002)
{
printf("ON\n");
}
else
{
printf("OFF\n");
}
}
if (alterBitMask & 0x00010000)
{
/* Alter the value of the 'append mode' attribute: */
printf(" Append Mode attribute is changed to: ");
if (alterBitValues & 0x00010000)
{
printf("ON\n");
}
else
{
printf("OFF\n");
}
}
if (alterBitMask & 0x00200000)
{
/* Alter the value of the 'LF Propagation' attribute: */
printf(" LF Propagation attribute is changed to: ");
if (alterBitValues & 0x00200000)
{
printf("ON\n");
}
else
{
printf("OFF\n");
}
}
if (alterBitMask & 0x00400000)
{
/* Alter the value of the 'LOB Propagation' attribute: */
printf(" LOB Propagation attribute is changed to: ");
if (alterBitValues & 0x00400000)
{
printf("ON\n");
}
else
{
printf("OFF\n");
}
}
break;
default:
printf(" unknown function identifier: %u\n",
functionIdentifier);
break;
}
return 0;
} /* ComplexLogRecordDisplay */
if (recordType != 0 && recordType != 4 && recordType != 16)
{
printf(" Unknown subrecord type: %x\n", recordType);
}
else if (recordType == 4)
{
printf(" subrecord type: Special control\n");
}
else
{
/* recordType == 0 or recordType == 16
* record Type 0 indicates a normal record
* record Type 16, for the purposes of this program, should be treated
* as type 0
*/
printf(" subrecord type: Updatable, ");
updatableRecordType = *(sqluint8 *)(recordBuffer + 4);
if (updatableRecordType != 1)
{
printf("Internal control\n");
}
else
{
printf("Formatted user data\n");
userDataFixedLength = *(sqluint16 *)(recordBuffer + 6);
printf(" user data fixed length: %u\n",
userDataFixedLength);
userDataBuffer = recordBuffer + 8;
userDataSize = recordSize - 8;
rc = UserDataDisplay(userDataBuffer, userDataSize);
}
}
return 0;
} /* LogSubRecordDisplay */
}
}
printf("*");
for (col = 0; col < 10; col = col + 1)
{
if (line * 10 + col < dataSize)
{
if (isalpha(dataBuffer[line * 10 + col]) ||
isdigit(dataBuffer[line * 10 + col]))
{
printf("%c", dataBuffer[line * 10 + col]);
}
else
{
printf(".");
}
}
else
{
printf(" ");
}
}
printf("*");
printf("\n");
}
return 0;
} /* UserDataDisplay */
printf("\n*********************************************\n");
printf("*** READ A DATABASE RECOVERY HISTORY FILE ***\n");
printf("*********************************************\n");
printf("\nUSE THE DB2 APIs:\n");
printf(" db2HistoryOpenScan -- Open Recovery History File Scan\n");
printf(" db2HistoryGetEntry -- Get Next Recovery History File Entry\n");
printf(" db2HistoryCloseScan -- Close Recovery History File Scan\n");
printf("TO READ A DATABASE RECOVERY HISTORY FILE.\n");
dbHistoryEntryGetParam.pioHistData = &histEntryData;
dbHistoryEntryGetParam.iCallerAction = DB2HISTORY_GET_ALL;
rc = HistoryEntryDataFieldsAlloc(&histEntryData);
if (rc != 0)
{
return rc;
}
/*******************************************/
/* OPEN THE DATABASE RECOVERY HISTORY FILE */
/*******************************************/
printf("\n Open recovery history file for '%s' database.\n", dbAlias);
numEntries = dbHistoryOpenParam.oNumRows;
/**********************************************/
/* READ AN ENTRY IN THE RECOVERY HISTORY FILE */
/**********************************************/
for (entryNb = 0; entryNb < numEntries; entryNb = entryNb + 1)
{
printf("\n Read entry number %u.\n", entryNb);
/********************************************/
/* CLOSE THE DATABASE RECOVERY HISTORY FILE */
/********************************************/
printf("\n Close recovery history file for '%s' database.\n", dbAlias);
/* The API db2HistoryCloseScan ends the recovery history file scan and
frees DB2 resources required for the scan. */
db2HistoryCloseScan(db2Version810, &recoveryHistoryFileHandle, &sqlca);
DB2_API_CHECK("database recovery history file -- close");
return 0;
} /* DbRecoveryHistoryFileRead */
strcpy(pHistEntryData->ioHistDataID, "SQLUHINF");
pHistEntryData->poEventSQLCA =
(struct sqlca *)malloc(sizeof(struct sqlca));
pHistEntryData->iNumTablespaces = 3;
return 0;
} /* HistoryEntryDataFieldsAlloc */
char buf[129];
sqluint32 tsNb;
memcpy(buf, histEntryData.oObjectPart.pioData,
histEntryData.oObjectPart.oLength);
buf[histEntryData.oObjectPart.oLength] = '\0';
printf(" object part: %s\n", buf);
memcpy(buf, histEntryData.oEndTime.pioData,
histEntryData.oEndTime.oLength);
buf[histEntryData.oEndTime.oLength] = '\0';
printf(" end time: %s\n", buf);
memcpy(buf, histEntryData.oFirstLog.pioData,
histEntryData.oFirstLog.oLength);
buf[histEntryData.oFirstLog.oLength] = '\0';
printf(" first log: %s\n", buf);
memcpy(buf, histEntryData.oLastLog.pioData,
histEntryData.oLastLog.oLength);
buf[histEntryData.oLastLog.oLength] = '\0';
printf(" last log: %s\n", buf);
memcpy(buf, histEntryData.oTableQualifier.pioData,
histEntryData.oTableQualifier.oLength);
buf[histEntryData.oTableQualifier.oLength] = '\0';
printf(" table qualifier: %s\n", buf);
memcpy(buf, histEntryData.oTableName.pioData,
histEntryData.oTableName.oLength);
buf[histEntryData.oTableName.oLength] = '\0';
printf(" table name: %s\n", buf);
memcpy(buf, histEntryData.oLocation.pioData,
histEntryData.oLocation.oLength);
buf[histEntryData.oLocation.oLength] = '\0';
printf(" location: %s\n", buf);
memcpy(buf, histEntryData.oComment.pioData,
histEntryData.oComment.oLength);
buf[histEntryData.oComment.oLength] = '\0';
printf(" comment: %s\n", buf);
memcpy(buf, histEntryData.oCommandText.pioData,
histEntryData.oCommandText.oLength);
buf[histEntryData.oCommandText.oLength] = '\0';
printf(" command text: %s\n", buf);
printf(" history file entry ID: %u\n", histEntryData.oEID.ioHID);
printf(" table spaces:\n");
{
memcpy(buf, histEntryData.poTablespace[tsNb].pioData,
histEntryData.poTablespace[tsNb].oLength);
buf[histEntryData.poTablespace[tsNb].oLength] = '\0';
printf(" %s\n", buf);
}
return 0;
} /* HistoryEntryDisplay */
free(pHistEntryData->oObjectPart.pioData);
free(pHistEntryData->oEndTime.pioData);
free(pHistEntryData->oFirstLog.pioData);
free(pHistEntryData->oLastLog.pioData);
free(pHistEntryData->oID.pioData);
free(pHistEntryData->oTableQualifier.pioData);
free(pHistEntryData->oTableName.pioData);
free(pHistEntryData->oLocation.pioData);
free(pHistEntryData->oComment.pioData);
free(pHistEntryData->oCommandText.pioData);
free(pHistEntryData->poEventSQLCA);
free(pHistEntryData->poTablespace);
return 0;
} /* HistoryEntryDataFieldsFree */
{
int rc = 0;
struct sqlca sqlca;
struct db2HistoryOpenStruct dbHistoryOpenParam;
sqluint16 recoveryHistoryFileHandle;
struct db2HistoryGetEntryStruct dbHistoryEntryGetParam;
struct db2HistoryData histEntryData;
char newLocation[DB2HISTORY_LOCATION_SZ + 1];
char newComment[DB2HISTORY_COMMENT_SZ + 1];
struct db2HistoryUpdateStruct dbHistoryUpdateParam;
printf("\n*****************************************************\n");
printf("*** UPDATE A DATABASE RECOVERY HISTORY FILE ENTRY ***\n");
printf("*****************************************************\n");
printf("\nUSE THE DB2 APIs:\n");
printf(" db2HistoryOpenScan -- Open Recovery History File Scan\n");
printf(" db2HistoryGetEntry -- Get Next Recovery History File Entry\n");
printf(" db2HistoryUpdate -- Update Recovery History File\n");
printf(" db2HistoryCloseScan -- Close Recovery History File Scan\n");
printf("TO UPDATE A DATABASE RECOVERY HISTORY FILE ENTRY.\n");
/*******************************************/
/* OPEN THE DATABASE RECOVERY HISTORY FILE */
/*******************************************/
printf("\n Open the recovery history file for '%s' database.\n", dbAlias);
/*****************************************************/
/* READ THE FIRST ENTRY IN THE RECOVERY HISTORY FILE */
/*****************************************************/
printf("\n Read the first entry in the recovery history file.\n");
/* The API db2HistoryGetEntry gets the next entry from the recovery
history file. */
db2HistoryGetEntry(db2Version810, &dbHistoryEntryGetParam, &sqlca);
/* Call this API to update the location and comment of the first
entry in the history file: */
db2HistoryUpdate(db2Version810, &dbHistoryUpdateParam, &sqlca);
DB2_API_CHECK("first history file entry -- update");
rc = DbDisconn(dbAlias);
if (rc != 0)
{
return rc;
}
/********************************************/
/* CLOSE THE DATABASE RECOVERY HISTORY FILE */
/********************************************/
printf("\n Close recovery history file for '%s' database.\n", dbAlias);
/* The API db2HistoryCloseScan ends the recovery history file scan and
frees DB2 resources required for the scan. */
db2HistoryCloseScan(db2Version810, &recoveryHistoryFileHandle, &sqlca);
DB2_API_CHECK("database recovery history file -- close");
/**********************************************/
/* RE-OPEN THE DATABASE RECOVERY HISTORY FILE */
/**********************************************/
printf("\n Open the recovery history file for '%s' database.\n", dbAlias);
recoveryHistoryFileHandle = dbHistoryOpenParam.oHandle;
dbHistoryEntryGetParam.iHandle = recoveryHistoryFileHandle;
printf("\n Read the first recovery history file entry.\n");
/************************************************************************/
/* READ THE FIRST ENTRY IN THE RECOVERY HISTORY FILE AFTER MODIFICATION */
/************************************************************************/
db2HistoryGetEntry(db2Version810, &dbHistoryEntryGetParam, &sqlca);
DB2_API_CHECK("first recovery history file entry -- read");
/********************************************/
/* CLOSE THE DATABASE RECOVERY HISTORY FILE */
/********************************************/
printf("\n Close the recovery history file for '%s' database.\n",
dbAlias);
return 0;
} /* DbFirstRecoveryHistoryFileEntryUpdate */
printf("\n***************************************\n");
printf("*** PRUNE THE RECOVERY HISTORY FILE ***\n");
printf("***************************************\n");
printf("\nUSE THE DB2 API:\n");
printf(" db2Prune -- Prune Recovery History File\n");
printf("AND THE SQL STATEMENTS:\n");
printf(" CONNECT\n");
printf(" CONNECT RESET\n");
printf("TO PRUNE THE RECOVERY HISTORY FILE.\n");
return rc;
}
/* db2Prune can be called to delete entries from the recovery history file
or log files from the active log path. Here we call it to delete
entries from the recovery history file.
You must have SYSADM, SYSCTRL, SYSMAINT, or DBADM authority to prune
the recovery history file. */
db2Prune(db2Version810, &histPruneParam, &sqlca);
DB2_API_CHECK("recovery history file -- prune");
return 0;
} /* DbRecoveryHistoryFilePrune */
Related concepts:
v “Managing Log Files” on page 45
v “Tivoli Space Manager Hierarchical Storage Manager (AIX)” in the Quick
Beginnings for Data Links Manager
Related reference:
v “db2adutl - Work with TSM Archived Images” on page 209
When a user exit program is invoked, the database manager passes control to
the executable file, db2uext2. The database manager passes parameters to
db2uext2 and, on completion, the program passes a return code back to the
database manager. Because the database manager handles a limited set of
return conditions, the user exit program should be able to handle error
conditions (see “Error Handling” on page 325). And because only one user
exit program can be invoked within a database manager instance, it must
have a section for each of the operations it may be asked to perform.
You should be aware that user exit programs must copy log files from the
active log path to the archive log path. Do not remove log files from the active
log path. (This could cause problems during database recovery.) DB2®
removes archived log files from the active log path when these log files are no
longer needed for recovery.
Following is a description of the sample user exit programs that are shipped
with DB2.
v UNIX® based systems
The user exit sample programs for DB2 for UNIX based systems are found
in the sqllib/samples/c subdirectory. Although the samples provided are
coded in C, your user exit program can be written in a different
programming language.
Your user exit program must be an executable file whose name is db2uext2.
Calling Format
When the database manager calls a user exit program, it passes a set of
parameters (of data type CHAR) to the program. The calling format is
dependent on your operating system:
db2uext2 -OS<os> -RL<db2rel> -RQ<request> -DB<dbname>
-NN<nodenum> -LP<logpath> -LN<logname> -AP<tsmpasswd>
-SP<startpage> -LS<logsize>
os Specifies the platform on which the instance is running. Valid
values are: AIX®, Solaris, HP-UX, SCO, Linux, and NT.
db2rel Specifies the DB2 release level. For example, SQL07020.
request Specifies a request type. Valid values are: ARCHIVE and
RETRIEVE.
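The calling format can be handled with ordinary argument parsing. The sketch below is illustrative only and is not the shipped sample: the structure and helper names are invented, and only the -RQ, -LP, and -LN flags are shown; a real db2uext2 would handle every parameter listed above.

```c
#include <string.h>

/* Minimal parse of db2uext2-style flags (-RQ<request>, -LP<logpath>,
   -LN<logname>, ...). Each parameter is a single token: the two
   characters after '-' identify the flag, and the value follows
   immediately with no separating space. */
struct UserExitArgs {
    const char *request;  /* ARCHIVE or RETRIEVE */
    const char *logpath;  /* active log path */
    const char *logname;  /* log file name */
};

int parse_user_exit_args(int argc, char *argv[], struct UserExitArgs *out)
{
    int i;
    out->request = out->logpath = out->logname = NULL;
    for (i = 1; i < argc; i++) {
        const char *a = argv[i];
        if (a[0] != '-' || strlen(a) < 3)
            return 20;                   /* bad parameter, per Table 8 */
        if (strncmp(a + 1, "RQ", 2) == 0)
            out->request = a + 3;
        else if (strncmp(a + 1, "LP", 2) == 0)
            out->logpath = a + 3;
        else if (strncmp(a + 1, "LN", 2) == 0)
            out->logname = a + 3;
        /* -OS, -RL, -DB, -NN, -AP, -SP, -LS would be handled the same way */
    }
    return (out->request && out->logpath && out->logname) ? 0 : 20;
}
```

Returning 20 for a malformed or missing parameter matches the meaning that Table 8 on page 319 assigns to that code.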
Error Handling
Your user exit program should be designed to provide specific and
meaningful return codes, so that the database manager can interpret them
correctly. Because the user exit program is called by the underlying operating
system command processor, the operating system itself could return error
codes. And because these error codes are not remapped, use the operating
system message help utility to obtain information about them.
Table 8 shows the codes that can be returned by a user exit program, and
describes how these codes are interpreted by the database manager. If a return
code is not listed in the table, it is treated as if its value were 32.
Table 8. User Exit Program Return Codes. Applies to archiving and retrieval
operations only.
Return Code Explanation
0 Successful.
4 Temporary resource error encountered.a
8 Operator intervention is required.a
12 Hardware error.b
16 Error with the user exit program or a software function used by the
program.b
20 Error with one or more of the parameters passed to the user exit
program. Verify that the user exit program is correctly processing the
specified parameters.b
24 The user exit program was not found.b
Note: During archiving and retrieval operations, an alert message is issued for all
return codes except 0 and 4. The alert message contains the return code from the
user exit program, and a copy of the input parameters that were provided to the user
exit program.
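A minimal archive routine, consistent with the rules above (copy the log file, never remove it from the active log path), might look like the following sketch. The function name is hypothetical; it simply copies a file and maps I/O failures to the Table 8 return code 4 (temporary resource error).

```c
#include <stdio.h>

/* Copy one log file from the active log path to an archive location,
   returning a Table 8 style code: 0 on success, 4 on an I/O failure
   that may be temporary. The source file is deliberately left in
   place: DB2 itself removes archived logs from the active log path. */
int archive_log_copy(const char *src, const char *dst)
{
    FILE *in = fopen(src, "rb");
    FILE *out;
    char buf[8192];
    size_t n;
    int rc = 0;

    if (in == NULL)
        return 4;
    out = fopen(dst, "wb");
    if (out == NULL) {
        fclose(in);
        return 4;
    }
    while ((n = fread(buf, 1, sizeof buf, in)) > 0) {
        if (fwrite(buf, 1, n, out) != n) {
            rc = 4;
            break;
        }
    }
    if (ferror(in))
        rc = 4;
    fclose(in);
    if (fclose(out) != 0)
        rc = 4;
    return rc;
}
```

A production user exit would also fsync the archived copy and distinguish hardware errors (code 12) from transient ones, but the copy-without-remove discipline is the essential point.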
DB2 defines a set of function prototypes that provide a general purpose data
interface to backup and restore that can be used by many vendors. These
functions are to be provided by the vendor in a shared library on UNIX based
systems, or DLL on the Windows operating system. When the functions are
invoked by DB2, the shared library or DLL specified by the calling backup or
restore routine is loaded and the functions provided by the vendor are called
to perform the required tasks.
DB2 will call these functions, and they should be provided by the vendor
product in a shared library on UNIX based systems, or in a DLL on the
Windows operating system.
Note: The shared library or DLL code will be run as part of the database
engine code. Therefore, it must be reentrant and thoroughly debugged.
An errant function may compromise data integrity of the database.
The sequence of functions that DB2 will call during a specific backup or
restore operation depends on:
v The number of sessions that will be utilized.
v Whether it is a backup or a restore operation.
v The PROMPTING mode that is specified on the backup or restore
operation.
v The characteristics of the device on which the data is stored.
v The errors that may be encountered during the operation.
Number of Sessions
DB2 supports the backup and restore of database objects using one or more
data streams or sessions. A backup or restore using three sessions would
require three physical or logical devices to be available. When vendor device
support is being used, it is the vendor’s functions that are responsible for
managing the interface to each physical or logical device. DB2 simply sends or
receives data buffers to or from the vendor provided functions.
DB2 will continue to initialize sessions until the specified number is reached,
or it receives an SQLUV_MAX_LINK_GRANT warning return code from an
sqluvint call. In order to warn DB2 that it has reached the maximum number
of sessions that it can support, the vendor product will require code to track
the number of active sessions. Failure to warn DB2 could lead to a DB2
initialize session request that fails, resulting in a termination of all sessions
and the failure of the entire backup or restore operation.
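The session tracking described above can be sketched as follows. The helper names are invented, and the numeric values given to SQLUV_OK and SQLUV_MAX_LINK_GRANT are placeholders; a real vendor library would take the literals from the DB2 vendor header and return the warning from its sqluvint implementation.

```c
/* Session bookkeeping a vendor library might keep so that session
   initialization can warn DB2 when the device limit is reached.
   The literal values below are placeholders for the real sqluvend.h
   definitions. */
#define SQLUV_OK             0
#define SQLUV_MAX_LINK_GRANT 2
#define MAX_DEVICE_SESSIONS  3   /* hypothetical device limit */

static int active_sessions = 0;

/* Called from sqluvint: grant a session handle. When the session just
   granted is the last one the device supports, return the warning so
   that DB2 stops initializing further sessions. */
int grant_session(int *out_handle)
{
    if (active_sessions >= MAX_DEVICE_SESSIONS)
        return -1;   /* should not happen: DB2 stops after the warning */
    *out_handle = ++active_sessions;
    return (active_sessions == MAX_DEVICE_SESSIONS)
               ? SQLUV_MAX_LINK_GRANT   /* granted, but no more after this */
               : SQLUV_OK;
}

/* Called from sqluvend: release the session. */
void release_session(void)
{
    if (active_sessions > 0)
        active_sessions--;
}
```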
When the operation is backup, DB2 writes a media header record at the
beginning of each session. The record contains information that DB2 uses to
identify the session during a restore operation. DB2 uniquely identifies each
session by appending a sequence number to the name of the backup image.
The number starts at one for the first session, and is incremented by one each
time another session is initiated with an sqluvint call for a backup or a restore
operation.
When the backup operation completes successfully, DB2 writes a media trailer
to the last session it closes. This trailer includes information that tells DB2
how many sessions were used to perform the backup operation. During a
restore operation, this information is used to ensure all the sessions, or data
streams, have been restored.
followed by 1 to n sqluvput,
followed by 1 sqluvend, action = SQLUV_COMMIT
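The header and trailer scheme can be illustrated with a simple layout. The structures below are hypothetical (the actual media header format is internal to DB2); they show only the bookkeeping idea: a 1-based sequence number per session, and a session count in the trailer that the restore side verifies.

```c
#include <string.h>

/* Hypothetical media header/trailer layout. Each session's first
   record carries the backup image name plus a 1-based sequence
   number; the trailer written to the last session closed records how
   many sessions the backup used. */
struct MediaHeader {
    char         image_name[20];
    unsigned int sequence;        /* 1 for the first session, then +1 */
};

struct MediaTrailer {
    unsigned int sessions_used;   /* restore checks all were returned */
};

void make_header(struct MediaHeader *h, const char *image, unsigned int seq)
{
    memset(h, 0, sizeof *h);
    strncpy(h->image_name, image, sizeof h->image_name - 1);
    h->sequence = seq;
}

/* Restore-side check: has every sequence number named by the trailer
   been seen, regardless of the order the objects came back in? */
int all_sessions_restored(const struct MediaTrailer *t,
                          const unsigned int *seen, unsigned int n_seen)
{
    unsigned int want, i, found;
    for (want = 1; want <= t->sessions_used; want++) {
        found = 0;
        for (i = 0; i < n_seen; i++)
            if (seen[i] == want)
                found = 1;
        if (!found)
            return 0;
    }
    return 1;
}
```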
The DB2-INFO structure, used on the sqluvint call, contains the information
required to identify the backup. A sequence number is supplied. The vendor
product may choose to save this information. DB2 will use it during restore to
identify the backup that will be restored.
followed by 1 to n sqluvget,
followed by 1 sqluvend, action = SQLUV_COMMIT
The information in the DB2-INFO structure used on the sqluvint call will
contain the information required to identify the backup. A sequence number is
not supplied. DB2 expects that all backup objects (session outputs committed
during a backup) will be returned. The first backup object returned is the
object generated with sequence number 1, and all other objects are restored in
no specific order. DB2 checks the media trailer to ensure that all objects have
been processed.
Note: Not all vendor products will keep a record of the names of the backup
objects. This is most likely when the backups are being done to tapes,
or other media of limited capacity. During the initialization of restore
sessions, the identification information can be utilized to stage the
necessary backup objects so that they are available when required; this
may be most useful when juke boxes or robotic systems are used to
store the backups. DB2 will always check the media header (first record
in each session’s output) to ensure that the correct data is being
restored.
PROMPTING Mode
When a backup or a restore operation is initiated, two prompting modes are
possible:
v WITHOUT PROMPTING or NOINTERRUPT, where there is no opportunity
for the vendor product to write messages to the user, or for the user to
respond to them.
v PROMPTING or INTERRUPT, where the user can receive and respond to
messages from the vendor product.
For PROMPTING mode, backup and restore define three possible user
responses:
v Continue
The operation of reading or writing data to the device will resume.
v Device terminate
The device will receive no additional data, and the session is terminated.
v Terminate
The entire backup or restore operation is terminated.
Device Characteristics
For purposes of the vendor device support APIs, two general types of devices
are defined:
v Limited capacity devices requiring user action to change the media; for
example, a tape drive, diskette, or CDROM drive.
v Very large capacity devices, where normal operations do not require the
user to handle media; for example, a juke box, or an intelligent robotic
media handling device.
A limited capacity device may require that the user be prompted to load
additional media during the backup or restore operation. Generally DB2 is not
sensitive to the order in which the media is loaded for either backup or
restore operations. It also provides facilities to pass vendor media handling
messages to the user. This prompting requires that the backup or restore
operation be initiated with PROMPTING on. The media handling message
text is specified in the description field of the return code structure.
It is possible for the vendor product to hide media mounting and switching
actions from DB2, so that the device appears to have infinite capacity. Some
very large capacity devices operate in this mode. In these cases, it is critical
that all the data that was backed up be returned to DB2 in the same order
when a restore operation is in progress. Failure to do so could result in
missing data, but DB2 assumes a successful restore operation, because it has
no way of detecting the missing data.
DB2 writes data to the vendor product with the assumption that each buffer
will be contained on one and only one media (for example, a tape). It is
possible for the vendor product to split these buffers across multiple media
without DB2’s knowledge. In this case, the order in which the media is
processed during a restore operation is critical, because the vendor product
will be responsible for returning reconstructed buffers from the multiple
media to DB2. Failure to do so will result in a failed restore operation.
The DB2-INFO structure will not contain a sequence number; sqluvdel will
delete all backup objects that match the remaining parameters in the
DB2-INFO structure.
Warning Conditions
It is possible for DB2 to receive warning return codes from the vendor
product; for example, if a device is not ready, or some other correctable
condition has occurred. This is true for both read and write operations.
On sqluvput and sqluvget calls, the vendor can set the return code to
SQLUV_WARNING, and optionally provide additional information, using
message text placed in the description field of the RETURN-CODE structure.
This message text is presented to the user so that corrective action can be
taken. The user can respond in one of three ways: continue, device terminate,
or terminate:
v If the response is continue, DB2 attempts to rewrite the buffer using
sqluvput during a backup operation. During a restore operation, DB2 issues
an sqluvget call to read the next buffer.
v If the response is device terminate or terminate, DB2 terminates the entire
backup or restore operation in the same way that it would respond after an
unrecoverable error (for example, it will terminate active sessions and
delete committed sessions).
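For example, a vendor function might fill its return code structure as follows. The structure layout and the numeric value of SQLUV_WARNING are placeholders for the definitions in the DB2 vendor header; only the pattern (warning code plus description text for the user) is the point.

```c
#include <stdio.h>
#include <string.h>

/* Placeholder literals and layout standing in for the real
   RETURN-CODE structure defined by the DB2 vendor header. */
#define SQLUV_OK      0
#define SQLUV_WARNING 1

struct Return_code_sketch {
    int  return_code;
    char description[60];   /* text shown to the user in PROMPTING mode */
};

/* Report a correctable condition: set SQLUV_WARNING and place the
   media handling message in the description field so the user can
   respond with continue, device terminate, or terminate. */
int report_device_not_ready(struct Return_code_sketch *rc, int device_ready)
{
    if (device_ready) {
        rc->return_code = SQLUV_OK;
        rc->description[0] = '\0';
        return SQLUV_OK;
    }
    rc->return_code = SQLUV_WARNING;
    snprintf(rc->description, sizeof rc->description,
             "Device not ready; insert the next volume and continue.");
    return SQLUV_WARNING;
}
```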
Operational Hints and Tips
This section provides some hints and tips for building vendor products.
History File
The history file can be used as an aid in database recovery operations. It is
associated with each database, and is automatically updated with each backup
or restore operation. Information in the file can be viewed, updated, or
pruned through the following facilities:
v Control Center
v Command line processor (CLP)
– LIST HISTORY command
– UPDATE HISTORY FILE command
– PRUNE HISTORY command
v APIs
– db2HistoryOpenScan
– db2HistoryGetEntry
– db2HistoryCloseScan
– db2HistoryUpdate
– db2Prune
For information about the layout of the file, see db2HistData.
The LOCATION field can be updated using the Control Center, the CLP, or an
API. The location of backup information can be updated if limited capacity
devices (for example, removable media) have been used to hold the backup
image, and the media is physically moved to a different (perhaps off-site)
storage location. If this is the case, the history file can be used to help locate a
backup image if a recovery operation becomes necessary.
Invoking a Backup or a Restore Operation Using Vendor Products
Vendor products can be specified when invoking the DB2 backup or the DB2
restore utility from:
v The Control Center
v The command line processor (CLP)
v An application programming interface (API).
Related reference:
v “sqluvint - Initialize and Link to Device” on page 336
v “sqluvget - Reading Data from Device” on page 339
v “sqluvput - Writing Data to Device” on page 341
v “sqluvend - Unlink the Device and Release its Resources” on page 343
v “sqluvdel - Delete Committed Session” on page 346
v “DB2-INFO” on page 347
v “VENDOR-INFO” on page 350
v “INIT-INPUT” on page 351
Authorization:
Required connection:
Database
API include file:
sql.h
C API syntax:
/* File: sqluvend.h */
/* API: Initialize and Link to Device */
/* ... */
int sqluvint (
struct Init_input *,
struct Init_output *,
struct Return_code *);
/* ... */
API parameters:
Init_input
Input. Structure that contains information provided by DB2 to
establish a logical link with the vendor device.
Init_output
Output. Structure that contains the output returned by the vendor
device.
Return_code
Output. Structure that contains the return code to be passed to DB2,
and a brief text explanation.
Usage notes:
For each media I/O session, DB2 will call this function to obtain a device
handle. If, for any reason, the vendor function encounters an error during
initialization, it will indicate this via a return code. If the return code indicates
an error, DB2 may choose to terminate the operation by calling the sqluvend
function. Details on possible return codes, and the DB2 reaction to each of
these, are contained in the return codes table (see Table 9 on page 338).
The INIT-INPUT structure contains elements that can be used by the vendor
product to determine if the backup or restore can proceed:
v size_HI_order and size_LOW_order
Together, these two values give the estimated size of the backup. They can
be used to determine whether the vendor devices can handle the size of the
backup image, and to estimate the quantity of removable media that will be
required to hold the backup. It might be beneficial to fail at the first
sqluvint call if problems are anticipated.
v req_sessions
The number of user requested sessions can be used in conjunction with the
estimated size and the prompting level to determine if the backup or
restore operation is possible.
v prompt_lvl
The prompting level indicates to the vendor if it is possible to prompt for
actions such as changing removable media (for example, put another tape
in the tape drive). This might suggest that the operation cannot proceed
since there will be no way to prompt the user.
If the prompting level is WITHOUT PROMPTING and the quantity of
removable media is greater than the number of sessions requested, DB2 will
not be able to complete the operation successfully.
DB2 names the backup being written or the restore to be read via fields in the
DB2-INFO structure. In the case of an action = SQLUV_READ, the vendor
product must check for the existence of the named object. If it cannot be
found, the return code should be set to SQLUV_OBJ_NOT_FOUND so that
DB2 will take the appropriate action.
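The existence check for action = SQLUV_READ can be sketched as follows. The catalog array stands in for whatever index the vendor product keeps of its backup objects, and the numeric return values are placeholders for the sqluvend.h literals.

```c
#include <string.h>

/* Placeholder literals standing in for the sqluvend.h definitions. */
#define SQLUV_OK            0
#define SQLUV_OBJ_NOT_FOUND 2
#define SQLUV_OBJS_FOUND    3

/* Look up the backup object named by DB2 in the vendor's catalog:
   exactly one match is required before returning SQLUV_OK. */
int check_named_object(const char *name,
                       const char *catalog[], int n_objects)
{
    int i, matches = 0;
    for (i = 0; i < n_objects; i++)
        if (strcmp(catalog[i], name) == 0)
            matches++;
    if (matches == 0)
        return SQLUV_OBJ_NOT_FOUND;   /* DB2 takes the appropriate action */
    if (matches > 1)
        return SQLUV_OBJS_FOUND;      /* ambiguous: more than one match */
    return SQLUV_OK;                  /* next call will be sqluvget */
}
```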
Return codes:
Table 9. Valid Return Codes for sqluvint and Resulting DB2 Action

SQLUV_OK
  Operation successful. If action = SQLUV_WRITE, the next call will be
  sqluvput (to BACKUP data). If action = SQLUV_READ, verify the existence
  of the named object prior to returning SQLUV_OK; the next call will be
  sqluvget (to RESTORE data).

SQLUV_MAX_LINK_GRANT
  Maximum number of links established. This is treated as a warning by
  DB2. The warning tells DB2 not to open additional sessions with the
  vendor product, because the maximum number of sessions it can support
  has been reached (note: this could be due to device availability). If
  action = SQLUV_WRITE (BACKUP), the next call will be sqluvput. If
  action = SQLUV_READ, verify the existence of the named object prior to
  returning SQLUV_MAX_LINK_GRANT; the next call will be sqluvget (to
  RESTORE data).

For each of the following return codes, session initialization fails and
no further calls are made. Free up the memory allocated for this session
and terminate. A sqluvend call will not be received, since the session
was never established.

SQLUV_LINK_EXIST
  Session activated previously.
SQLUV_COMM_ERROR
  Communication error with the device.
SQLUV_INV_VERSION
  The DB2 and vendor products are incompatible.
SQLUV_INV_ACTION
  Invalid action requested. This can also be used to indicate that the
  combination of parameters results in an operation that is not possible.
SQLUV_NO_DEV_AVAIL
  No device is available for use at the moment.
SQLUV_OBJ_NOT_FOUND
  The specified object cannot be found. This should be used when the
  action on the sqluvint call is 'R' (read) and the requested object
  cannot be found based on the criteria specified in the DB2-INFO
  structure.
SQLUV_OBJS_FOUND
  More than one object matches the specified criteria. This results when
  the action on the sqluvint call is 'R' (read) and more than one object
  matches the criteria in the DB2-INFO structure.
SQLUV_INV_USERID
  Invalid user ID specified.
SQLUV_INV_PASSWORD
  Invalid password provided.
SQLUV_INV_OPTIONS
  Invalid options encountered in the vendor options field.
SQLUV_INIT_FAILED
  Initialization failed and the session is to be terminated.
SQLUV_DEV_ERROR
  Device error.
SQLUV_IO_ERROR
  I/O error.
SQLUV_NOT_ENOUGH_SPACE
  There is not enough space to store the entire backup image; the size
  estimate is provided as a 64-bit value in bytes.
After initialization, this function can be called to read data from the device.
Authorization:
Required connection:
Database
API include file: sqluvend.h
C API syntax:
/* File: sqluvend.h */
/* API: Reading Data from Device */
/* ... */
int sqluvget (
void * pVendorCB,
struct Data *,
struct Return_code *);
/* ... */
API parameters:
pVendorCB
Input. Pointer to space allocated for the DATA structure (including the
data buffer) and Return_code.
Data Input/output. A pointer to the data structure.
Return_code
Output. The return code from the API call.
obj_num
Specifies which backup object should be retrieved.
buff_size
Specifies the buffer size to be used.
actual_buff_size
Specifies the actual number of bytes read or written. On output, this
value indicates how many bytes of data were actually read.
dataptr
A pointer to the data buffer.
reserve
Reserved for future use.
Usage notes:
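As a sketch of the read loop implied by the return codes below, the following simplified vendor-side routine serves data from an in-memory "device". The structure layouts and constant values are illustrative assumptions, not the real sqluvend.h definitions:

```c
#include <string.h>

/* Illustrative values; the real constants are defined in sqluvend.h. */
#define SQLUV_OK                  0
#define SQLUV_MORE_DATA           2
#define SQLUV_ENDOFMEDIA_NO_DATA  3

/* Simplified stand-ins for the DATA structure and the vendor device. */
struct VData {
    long  buff_size;        /* size of the caller's buffer        */
    long  actual_buff_size; /* set on output: bytes actually read */
    char *dataptr;          /* the data buffer                    */
};
struct VDevice {
    const char *image;      /* the stored backup image            */
    size_t      len, pos;
};

int vendor_get(struct VDevice *dev, struct VData *d)
{
    size_t remaining = dev->len - dev->pos;
    if (remaining == 0) {
        d->actual_buff_size = 0;
        return SQLUV_ENDOFMEDIA_NO_DATA; /* end of media, 0 bytes read */
    }
    size_t n = remaining < (size_t)d->buff_size ? remaining
                                                : (size_t)d->buff_size;
    memcpy(d->dataptr, dev->image + dev->pos, n);
    dev->pos += n;
    d->actual_buff_size = (long)n;
    /* More data still on the device: DB2 should call sqluvget again. */
    return dev->pos < dev->len ? SQLUV_MORE_DATA : SQLUV_OK;
}
```

A real device would, of course, read from tape or disk rather than a memory buffer, and would also report communication and I/O errors.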
Return codes:
Table 10. Valid Return Codes for sqluvget and Resulting DB2 Action

SQLUV_OK
  Operation successful. Probable next call: sqluvget. DB2 processes the
  data.
SQLUV_MORE_DATA
  Operation successful; more data available. Probable next call:
  sqluvget.
SQLUV_ENDOFMEDIA_NO_DATA
  End of media and 0 bytes read (for example, end of tape). Probable
  next call: sqluvend.
SQLUV_ENDOFMEDIA
  End of media and more than 0 bytes read (for example, end of tape).
  Probable next call: sqluvend. DB2 processes the data, and then handles
  the end-of-media condition.
SQLUV_WARNING
  Warning. This should not be used to indicate end-of-media to DB2; use
  SQLUV_ENDOFMEDIA or SQLUV_ENDOFMEDIA_NO_DATA for that purpose.
  However, device-not-ready conditions can be indicated using this
  return code. Probable next call: sqluvget, or sqluvend with action =
  SQLU_ABORT.
SQLUV_COMM_ERROR
  Communication error with the device.
SQLUV_INV_ACTION
  Invalid action requested.
SQLUV_INV_DEV_HANDLE
  Invalid device handle.
SQLUV_INV_BUFF_SIZE
  Invalid buffer size specified.
SQLUV_DEV_ERROR
  Device error.
SQLUV_LINK_NOT_EXIST
  No link currently exists.
SQLUV_IO_ERROR
  I/O error.

For the last seven return codes, the probable next call is sqluvend with
action = SQLU_ABORT, and the session will be terminated.

Note: If the next call is sqluvend with action = SQLU_ABORT, this
session and all other active sessions will be terminated.
After initialization, this function can be used to write data to the device.
Authorization:
Required connection:
Database
API include file: sqluvend.h
C API syntax:
/* File: sqluvend.h */
/* API: Writing Data to Device */
/* ... */
int sqluvput (
void * pVendorCB,
struct Data *,
struct Return_code *);
/* ... */
API parameters:
pVendorCB
Input. Pointer to space allocated for the DATA structure (including the
data buffer) and Return_code.
Data Output. Data buffer filled with data to be written out.
Return_code
Output. The return code from the API call.
obj_num
Specifies which backup object should be retrieved.
buff_size
Specifies the buffer size to be used.
actual_buff_size
Specifies the actual number of bytes read or written. On output, this
value indicates how many bytes of data were actually written.
dataptr
A pointer to the data buffer.
reserve
Reserved for future use.
Usage notes:
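The write path implied by the return codes below can be sketched with a fixed-capacity in-memory medium. The structure, constants, and end-of-media behavior shown here are illustrative assumptions, not the real sqluvend.h definitions:

```c
#include <string.h>

/* Illustrative values; the real constants are defined in sqluvend.h. */
#define SQLUV_OK          0
#define SQLUV_ENDOFMEDIA  4

/* Simplified vendor device with a fixed-capacity medium. */
struct VPutDevice {
    char  *media;
    size_t cap, pos;
};

/* Simplified sqluvput-style write: store as much of the buffer as the
 * medium can hold, reporting end of media when it cannot hold it all. */
int vendor_put(struct VPutDevice *dev, const char *buf, size_t nbytes,
               size_t *written)
{
    size_t room = dev->cap - dev->pos;
    size_t n = nbytes < room ? nbytes : room;
    memcpy(dev->media + dev->pos, buf, n);
    dev->pos += n;
    *written = n;                      /* bytes actually written */
    return n < nbytes ? SQLUV_ENDOFMEDIA : SQLUV_OK;
}
```

On SQLUV_ENDOFMEDIA, DB2 would end this session and continue the backup on another session or medium, if available.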
Return codes:
Table 11. Valid Return Codes for sqluvput and Resulting DB2 Action

SQLUV_OK
  Operation successful. Probable next call: sqluvput, or sqluvend if the
  operation is complete (for example, DB2 has no more data). Inform
  other processes of the successful operation.
SQLUV_ENDOFMEDIA
  End of media reached (for example, end of tape). Probable next call:
  sqluvend.
SQLUV_DATA_RESEND
  Device requested to have the buffer sent again. Probable next call:
  sqluvput. DB2 will retransmit the last buffer. This will only be done
  once.
SQLUV_WARNING
  Warning. This should not be used to indicate end-of-media to DB2; use
  SQLUV_ENDOFMEDIA for that purpose. However, device-not-ready
  conditions can be indicated using this return code. Probable next
  call: sqluvput.
SQLUV_COMM_ERROR
  Communication error with the device.
SQLUV_INV_ACTION
  Invalid action requested.
SQLUV_INV_DEV_HANDLE
  Invalid device handle.
SQLUV_INV_BUFF_SIZE
  Invalid buffer size specified.
SQLUV_DEV_ERROR
  Device error.
SQLUV_LINK_NOT_EXIST
  No link currently exists.
SQLUV_IO_ERROR
  I/O error.

For the last seven return codes, the probable next call is sqluvend with
action = SQLU_ABORT, and the session will be terminated.

Note: If the next call is sqluvend with action = SQLU_ABORT, this
session and all other active sessions will be terminated. Committed
sessions are deleted with an sqluvint, sqluvdel, and sqluvend sequence
of calls.
Ends or unlinks the device, and frees all of its related resources. The vendor
must free or release unused resources (for example, allocated space and file
handles) before returning to DB2.
Authorization:
Required connection:
Database
API include file: sqluvend.h
C API syntax:
/* File: sqluvend.h */
/* API: Unlink the Device and Release its Resources */
/* ... */
int sqluvend (
sqlint32 action,
void * pVendorCB,
struct Init_output *,
struct Return_code *);
/* ... */
API parameters:
action Input. Used to commit or abort the session:
v SQLUV_COMMIT ( 0 = to commit )
v SQLUV_ABORT ( 1 = to abort )
pVendorCB
Input. Pointer to the Init_output structure.
Init_output
Output. The space for Init_output is deallocated. For a backup, the data
has been committed to stable storage if the action is to commit, and is
purged if the action is to abort.
Return_code
Output. The return code from the API call.
Usage notes:
This function is called for each session that has been opened. There are two
possible action codes:
v Commit
Output of data to this session, or the reading of data from the session, is
complete.
For a write (backup) session, if the vendor returns to DB2 with a return
code of SQLUV_OK, DB2 assumes that the output data has been
appropriately saved by the vendor product, and can be accessed if
referenced in a later sqluvint call.
For a read (restore) session, if the vendor returns to DB2 with a return code
of SQLUV_OK, the data should not be deleted, because it may be needed
again.
If the vendor returns SQLUV_COMMIT_FAILED, DB2 assumes that there
are problems with the entire backup or restore operation. All active sessions
are terminated by sqluvend calls with action = SQLUV_ABORT. For a
backup operation, committed sessions receive a sqluvint, sqluvdel, and
sqluvend sequence of calls.
v Abort
A problem has been encountered by DB2, and there will be no more
reading or writing of data to the session.
For a write (backup) session, the vendor should delete the partial output
dataset, and use a SQLUV_OK return code if the partial output is deleted.
DB2 assumes that there are problems with the entire backup. All active
sessions are terminated by sqluvend calls with action = SQLUV_ABORT,
and committed sessions receive a sqluvint, sqluvdel, and sqluvend
sequence of calls.
For a read (restore) session, the vendor should not delete the data (because
it may be needed again), but should clean up and return to DB2 with a
SQLUV_OK return code. DB2 terminates all the restore sessions by
sqluvend calls with action = SQLUV_ABORT. If the vendor returns
SQLUV_ABORT_FAILED to DB2, the caller is not notified of this error,
because DB2 returns the first fatal failure and ignores subsequent failures.
In this case, for DB2 to have called sqluvend with action =
SQLUV_ABORT, an initial fatal error must have occurred.
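The commit and abort rules above can be sketched as follows. The session structure and the way the output object is tracked are illustrative assumptions, not part of the vendor interface:

```c
/* Illustrative values; the real constants are defined in sqluvend.h. */
#define SQLUV_COMMIT 0
#define SQLUV_ABORT  1
#define SQLUV_OK     0

/* Minimal per-session state for the sketch. */
struct VSession {
    int is_write;      /* 1 = backup (write), 0 = restore (read) */
    int object_kept;   /* 1 while the output object is retained  */
};

/* Simplified sqluvend-style cleanup: on abort of a write (backup)
 * session, delete the partial output. Restore data is never deleted
 * here, because it may be needed again. On commit, the saved data
 * must remain accessible to a later sqluvint call. */
int vendor_end(int action, struct VSession *s)
{
    if (action == SQLUV_ABORT && s->is_write)
        s->object_kept = 0;   /* delete the partial backup output */
    return SQLUV_OK;
}
```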
Return codes:
Table 12. Valid Return Codes for sqluvend and Resulting DB2 Action

SQLUV_OK
  Operation successful. No further calls. Free all memory allocated for
  this session and terminate.
SQLUV_COMMIT_FAILED
  Commit request failed. No further calls. Free all memory allocated for
  this session and terminate.
SQLUV_ABORT_FAILED
  Abort request failed. No further calls.
Authorization:
Required connection:
Database
API include file: sqluvend.h
C API syntax:
/* File: sqluvend.h */
/* API: Delete Committed Session */
/* ... */
int sqluvdel (
struct Init_input *,
struct Init_output *,
struct Return_code *);
/* ... */
API parameters:
Init_input
Input. Space allocated for Init_input and Return_code.
Return_code
Output. Return code from the API call. The object pointed to by the
Init_input structure is deleted.
Usage notes:
If multiple sessions are opened, and some sessions are committed, but one of
them fails, this function is called to delete the committed sessions. No
sequence number is specified; sqluvdel is responsible for finding all of the
objects that were created during a particular backup operation, and deleting
them. Information in the INIT-INPUT structure is used to identify the output
data to be deleted. The call to sqluvdel is responsible for establishing any
connection or session that is required to delete a backup object from
the vendor device.
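A sketch of this sweep, using a hypothetical in-memory catalog keyed by the backup timestamp taken from the INIT-INPUT information (the structure, constants, and timestamp format here are illustrative assumptions):

```c
#include <string.h>

/* Illustrative values; the real constants are defined in sqluvend.h. */
#define SQLUV_OK            0
#define SQLUV_DELETE_FAILED 5

/* Hypothetical vendor catalog entry: each stored object carries the
 * timestamp of the backup operation that created it. */
struct VObject {
    char timestamp[15];   /* yyyymmddhhmmss plus NUL */
    int  deleted;
};

/* Simplified sqluvdel-style sweep: no sequence number is given, so
 * every object created during the identified backup is deleted. */
int vendor_del(struct VObject *catalog, int n, const char *timestamp)
{
    int found = 0;
    for (int i = 0; i < n; i++) {
        if (strcmp(catalog[i].timestamp, timestamp) == 0) {
            catalog[i].deleted = 1;
            found++;
        }
    }
    return found > 0 ? SQLUV_OK : SQLUV_DELETE_FAILED;
}
```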
Return codes:
Table 13. Valid Return Codes for sqluvdel and Resulting DB2 Action

SQLUV_OK
  Operation successful. No further calls.
SQLUV_DELETE_FAILED
  Delete request failed. No further calls.
DB2-INFO
Table 14. Fields in the DB2-INFO Structure (continued). All fields are
NULL-terminated strings.
type (char)
  Specifies the type of backup being taken or the type of restore being
  performed. The following are possible values:
Language syntax:
C Structure
/* File: sqluvend.h */
/* ... */
typedef struct DB2_info
{
char *DB2_id;
char *version;
char *release;
char *level;
char *action;
char *filename;
char *server_id;
char *db2instance;
char *type;
char *dbname;
char *alias;
char *timestamp;
char *sequence;
struct sqlu_gen_list *obj_list;
long max_bytes_per_txn;
char *image_filename;
void *reserve;
char *nodename;
char *password;
char *owner;
char *mcNameP;
SQL_PDB_NODE_TYPE nodeNum;
} DB2_info;
/* ... */
VENDOR-INFO
This structure contains information identifying the vendor and version of the
device.
Table 15. Fields in the VENDOR-INFO Structure. All fields are
NULL-terminated strings.

vendor_id (char)
  An identifier for the vendor. Maximum length of the string it points
  to is 64 characters.
version (char)
  The current version of the vendor product. Maximum length of the
  string it points to is 8 characters.
release (char)
  The current release of the vendor product. Set to NULL if it is
  insignificant. Maximum length of the string it points to is 8
  characters.
level (char)
  The current level of the vendor product. Set to NULL if it is
  insignificant. Maximum length of the string it points to is 8
  characters.
server_id (char)
  A unique name identifying the server where the database resides.
  Maximum length of the string it points to is 8 characters.
max_bytes_per_txn (sqlint32)
  The maximum supported transfer buffer size, specified by the vendor,
  in bytes. This is used only if the return code from the vendor
  initialize function is SQLUV_BUFF_SIZE, indicating that an invalid
  buffer size was specified.
num_objects_in_backup (sqlint32)
  The number of sessions that were used to make a complete backup. This
  is used to determine when all backup images have been processed during
  a restore operation.
reserve (void)
  Reserved for future use.
Language syntax:
C Structure
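The structure listing itself does not appear in this excerpt. Based on the fields in Table 15, a sketch would look like the following; the sqlint32 typedef is normally supplied by the DB2 headers, and the member order shown is an assumption:

```c
typedef int sqlint32;   /* normally supplied by the DB2 headers */

typedef struct Vendor_info
{
    char     *vendor_id;             /* identifier for the vendor         */
    char     *version;               /* vendor product version            */
    char     *release;               /* vendor product release            */
    char     *level;                 /* vendor product level              */
    char     *server_id;             /* server where the database resides */
    sqlint32  max_bytes_per_txn;     /* maximum transfer buffer size      */
    sqlint32  num_objects_in_backup; /* sessions used for a full backup   */
    void     *reserve;               /* reserved for future use           */
} Vendor_info;
```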
INIT-INPUT
Table 16. Fields in the INIT-INPUT Structure (continued). All fields are
NULL-terminated strings.

size_options (unsigned short)
  The length of the options field. When using the DB2 backup or restore
  function, the data in this field is passed directly from the
  VendorOptionsSize parameter.
size_HI_order (sqluint32)
  High-order 32 bits of the database size estimate in bytes; the total
  size is 64 bits.
size_LOW_order (sqluint32)
  Low-order 32 bits of the database size estimate in bytes; the total
  size is 64 bits.
options (void)
  This information is passed from the application when the backup or
  restore function is invoked. This data structure must be flat; in
  other words, no level of indirection is supported. Byte reversal is
  not done, and the code page for this data is not checked. When using
  the DB2 backup or restore function, the data in this field is passed
  directly from the pVendorOptions parameter.
reserve (void)
  Reserved for future use.
prompt_lvl (char)
  Prompting level requested by the user when the backup or restore
  operation was invoked. Maximum length of the string it points to is 1
  character.
num_sessions (unsigned short)
  Number of sessions requested by the user when the backup or restore
  operation was invoked.
Language syntax:
C Structure
typedef struct Init_input
{
struct DB2_info *DB2_session;
unsigned short size_options;
sqluint32 size_HI_order;
sqluint32 size_LOW_order;
void *options;
void *reserve;
char *prompt_lvl;
unsigned short num_sessions;
} Init_input;
INIT-OUTPUT
Language syntax:
C Structure
DATA
This structure contains data transferred between DB2 and the vendor device.
Table 18. Fields in the DATA Structure

obj_num (sqlint32)
  The sequence number assigned by DB2 during a backup operation.
buff_size (sqlint32)
  The size of the buffer.
Language syntax:
C Structure
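The structure listing does not appear in this excerpt. Combining Table 18 with the sqluvget and sqluvput parameter descriptions (obj_num, buff_size, actual_buff_size, dataptr, reserve) suggests roughly the following shape; the sqlint32 typedef and the member order are assumptions:

```c
typedef int sqlint32;   /* normally supplied by the DB2 headers */

typedef struct Data
{
    sqlint32  obj_num;          /* sequence number assigned by DB2 */
    sqlint32  buff_size;        /* size of the buffer              */
    sqlint32  actual_buff_size; /* bytes actually read or written  */
    void     *dataptr;          /* pointer to the data buffer      */
    void     *reserve;          /* reserved for future use         */
} Data;
```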
RETURN-CODE
This structure contains the return code and a short explanation of the error
being returned to DB2.
Table 19. Fields in the RETURN-CODE Structure

return_code (sqlint32)
  Return code from the vendor function. This is a vendor-specific return
  code that is not the same as the value returned by various DB2 APIs;
  see the individual API descriptions for the return codes that are
  accepted from vendor products.
description (char)
  A short description of the return code.
reserve (void)
  Reserved for future use.
Language syntax:
C Structure
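The structure listing does not appear in this excerpt. Based on Table 19, a sketch would be as follows; the sqlint32 typedef and the length of the description field are assumptions:

```c
typedef int sqlint32;   /* normally supplied by the DB2 headers */

typedef struct Return_code
{
    sqlint32 return_code;      /* vendor-specific return code       */
    char     description[60];  /* short description; length assumed */
    void    *reserve;          /* reserved for future use           */
} Return_code;
```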
The following tables describe, for each book in the DB2 library, the
information needed to order the hard copy, print or view the PDF, or locate
the HTML directory for that book. A full description of each of the books in
the DB2 library is available from the IBM Publications Center at
www.ibm.com/shop/publications/order
The installation directory for the HTML documentation CD differs for each
category of information:
htmlcdpath/doc/htmlcd/%L/category
where htmlcdpath is the directory where the HTML CD is installed, %L is
the language identifier, and category is the category identifier.
In the PDF file name column in the following tables, the character in the sixth
position of the file name indicates the language version of a book. For
example, the file name db2d1e80 identifies the English version of the
Administration Guide: Planning and the file name db2d1g80 identifies the
German version of the same book. The following letters are used in the sixth
position of the file name to indicate the language version:
Language Identifier
Arabic w
Brazilian Portuguese b
Bulgarian u
Croatian 9
Czech x
Danish d
Dutch q
English e
Finnish y
French f
German g
Greek a
Hungarian h
Italian i
Japanese j
Korean k
Norwegian n
Polish p
Portuguese v
Romanian 8
Russian r
Simp. Chinese c
Slovakian 7
Slovenian l
Spanish z
Swedish s
Trad. Chinese t
Turkish m
No form number indicates that the book is only available online and does not
have a printed version.
Administration information
The information in this category covers those topics required to effectively
design, implement, and maintain DB2 databases, data warehouses, and
federated systems.
Release notes
The release notes provide additional information specific to your product’s
release and FixPak level. They also provide summaries of the documentation
updates incorporated in each release and FixPak.
Table 28. Release notes

DB2 Release Notes
  Form number and PDF file name: see note. HTML directory:
  doc/prodcd/%L/db2ir, where %L is the language identifier.
DB2 Connect Release Notes
  Form number and PDF file name: see note. HTML directory:
  doc/prodcd/%L/db2cr, where %L is the language identifier.
DB2 Installation Notes
  Available on the product CD-ROM only.
Note: The HTML version of the release notes is available from the
Information Center and on the product CD-ROMs. To view the ASCII
file:
v On UNIX-based platforms, see the Release.Notes file. This file is
located in the DB2DIR/Readme/%L directory, where %L represents the
locale name and DB2DIR represents:
– /usr/opt/db2_08_01 on AIX
– /opt/IBM/db2/V8.1 on all other UNIX operating systems
v On other platforms, see the RELEASE.TXT file. This file is located in
the directory where the product is installed.
Related tasks:
v “Printing DB2 books from PDF files” on page 365
Prerequisites:
Ensure that you have Adobe Acrobat Reader. It is available from the Adobe
Web site at www.adobe.com
Procedure:
Related tasks:
v “Ordering printed DB2 books” on page 366
v “Finding product information by accessing the DB2 Information Center
from the administration tools” on page 370
v “Viewing technical documentation online directly from the DB2 HTML
Documentation CD” on page 371
Related reference:
Procedure:
Related tasks:
v “Printing DB2 books from PDF files” on page 365
v “Finding topics by accessing the DB2 Information Center from a browser”
on page 368
v “Viewing technical documentation online directly from the DB2 HTML
Documentation CD” on page 371
Related reference:
v “Overview of DB2 Universal Database technical information” on page 357
The online help that comes with all DB2 components is available in three
types:
v Window and notebook help
v Command line help
v SQL statement help
Window and notebook help explains the tasks that you can perform in a
window or notebook and describes the controls. This help has two types:
v Help accessible from the Help button
v Infopops
The Help button gives you access to overview and prerequisite information.
The infopops describe the controls in the window or notebook. Window and
notebook help are available from DB2 centers and components that have user
interfaces.
SQL statement help includes SQL help and SQLSTATE help. SQL help explains
the syntax of SQL statements. DB2 returns an SQLSTATE value for conditions
that could be the result of an SQL statement; SQLSTATE help explains SQL
states and class codes.
Procedure:
v For Command help:
  For example, ? catalog displays help for all the CATALOG commands,
  while ? catalog database displays help for the CATALOG DATABASE
  command.
v For Message help:
  ? XXXnnnnn
  where XXXnnnnn represents a valid message identifier.
v For SQLSTATE help:
  where sqlstate represents a valid five-digit SQL state and class code
  represents the first two digits of the SQL state.
  For example, ? 08003 displays help for the 08003 SQL state, while ? 08
  displays help for the 08 class code.
v For SQL statement help:
  For example, help SELECT displays help about the SELECT statement.
Related tasks:
v “Finding topics by accessing the DB2 Information Center from a browser”
on page 368
v “Viewing technical documentation online directly from the DB2 HTML
Documentation CD” on page 371
Prerequisites:
To access the DB2 Information Center from a browser, you must use one of
the following browsers:
v Microsoft Internet Explorer, version 5 or later
v Netscape Navigator, version 6.1 or later
The DB2 Information Center contains only those sets of topics that you chose
to install from the DB2 HTML Documentation CD. If your Web browser returns
a File not found error when you try to follow a link to a topic, you must
install one or more additional sets of topics from the DB2 HTML
Documentation CD.
Procedure:
Related tasks:
v “Finding product information by accessing the DB2 Information Center
from the administration tools” on page 370
v “Updating the HTML documentation installed on your machine” on page
372
v “Troubleshooting DB2 documentation search with Netscape 4.x” on page
374
v “Searching the DB2 documentation” on page 375
Related reference:
v “Overview of DB2 Universal Database technical information” on page 357
Finding product information by accessing the DB2 Information Center from the
administration tools
The DB2 Information Center provides quick access to DB2 product
information and is available on all operating systems for which the DB2
administration tools are available.
The DB2 Information Center accessed from the tools provides six types of
information.
Tasks Key tasks you can perform using DB2.
Concepts
Key concepts for DB2.
Reference
DB2 reference information, such as keywords, commands, and APIs.
Troubleshooting
Error messages and information to help you with common DB2
problems.
Samples
Links to HTML listings of the sample programs provided with DB2.
Tutorials
Instructional aid designed to help you learn a DB2 feature.
Prerequisites:
Procedure:
Related concepts:
v “Accessibility” on page 377
v “DB2 Information Center for topics” on page 379
Related tasks:
v “Finding topics by accessing the DB2 Information Center from a browser”
on page 368
v “Searching the DB2 documentation” on page 375
Restrictions:
Procedure:
1. Insert the DB2 HTML Documentation CD. On UNIX operating systems,
mount the DB2 HTML Documentation CD. Refer to your Quick Beginnings
book for details on how to mount a CD on UNIX operating systems.
2. Start your HTML browser and open the appropriate file:
v For Windows operating systems:
e:\Program Files\sqllib\doc\htmlcd\%L\index.htm
Related tasks:
v “Finding topics by accessing the DB2 Information Center from a browser”
on page 368
v “Copying files from the DB2 HTML Documentation CD to a Web Server”
on page 374
Related reference:
v “Overview of DB2 Universal Database technical information” on page 357
Procedure:
Related reference:
v “Overview of DB2 Universal Database technical information” on page 357
Procedure:
To copy files from the DB2 HTML Documentation CD to a Web server, use the
appropriate path:
v For Windows operating systems:
E:\Program Files\sqllib\doc\htmlcd\%L\*.*
where E: represents the CD-ROM drive and %L represents the language
identifier.
Related tasks:
v “Searching the DB2 documentation” on page 375
Related reference:
v “Supported DB2 interface languages, locales, and code pages” in the Quick
Beginnings for DB2 Servers
v “Overview of DB2 Universal Database technical information” on page 357
Most search problems are related to the Java support provided by web
browsers. This task describes possible workarounds.
Procedure:
If your Netscape browser still fails to display the search input window, try the
following:
v Stop all instances of Netscape browsers to ensure that there is no Netscape
code running on the machine. Then open a new instance of the Netscape
browser and try to start the search again.
v Purge the browser’s cache.
v Try a different version of Netscape, or a different browser.
Related tasks:
v “Searching the DB2 documentation” on page 375
A pop-up search window opens when you click the search icon in the
navigation toolbar of the Information Center accessed from a browser. If you
are using the search for the first time it may take a minute or so to load into
the search window.
Restrictions:
In general, you will get better search results if you search for phrases instead
of single words.
Procedure:
Note: When you perform a search, the first result is automatically loaded
into your browser frame. To view the contents of other search results,
click on that result in the results list.
Related tasks:
v “Troubleshooting DB2 documentation search with Netscape 4.x” on page
374
Refer to the DB2 Online Support site if you are experiencing problems and
want help finding possible causes and solutions. The support site contains a
Related concepts:
v “DB2 Information Center for topics” on page 379
Related tasks:
v “Finding product information by accessing the DB2 Information Center
from the administration tools” on page 370
Accessibility
Keyboard Input
You can operate the DB2 Tools using only the keyboard. You can use keys or
key combinations to perform most operations that can also be done using a
mouse.
Font Settings
The DB2 Tools allow you to select the color, size, and font for the text in
menus and dialog windows, using the Tools Settings notebook.
Non-dependence on Color
You do not need to distinguish between colors in order to use any of the
functions in this product.
Alternative Alert Cues
You can specify whether you want to receive alerts through audio or visual
cues, using the Tools Settings notebook.
Compatibility with Assistive Technologies
The DB2 Tools interface supports the Java Accessibility API enabling use by
screen readers and other assistive technologies used by people with
disabilities.
Accessible Documentation
Documentation for the DB2 family of products is available in HTML format.
This allows you to view documentation according to the display preferences
set in your browser. It also allows you to use screen readers and other
assistive technologies.
DB2 tutorials
The DB2® tutorials help you learn about various aspects of DB2 Universal
Database. The tutorials provide lessons with step-by-step instructions in the
areas of developing applications, tuning SQL query performance, working
with data warehouses, managing metadata, and developing Web services
using DB2.
Before you can access these tutorials using the links below, you must install
the tutorials from the DB2 HTML Documentation CD-ROM.
Some tutorial lessons use sample data or code. See each individual tutorial for
a description of any prerequisites for its specific tasks.
If you installed the tutorials from the DB2 HTML Documentation CD-ROM,
you can click on a tutorial title in the following list to view that tutorial.
Business Intelligence Tutorial: Introduction to the Data Warehouse Center
Perform introductory data warehousing tasks using the Data
Warehouse Center.
Business Intelligence Tutorial: Extended Lessons in Data Warehousing
Perform advanced data warehousing tasks using the Data Warehouse
Center.
Development Center Tutorial for Video Online using Microsoft® Visual Basic
Build various components of an application using the Development
Center Add-in for Microsoft Visual Basic.
Information Catalog Center Tutorial
Create and manage an information catalog to locate and use metadata
using the Information Catalog Center.
Video Central for e-business Tutorial
Develop and deploy an advanced DB2 Web Services application using
WebSphere® products.
Visual Explain Tutorial
Analyze, optimize, and tune SQL statements for better performance
using Visual Explain.
The DB2® Information Center gives you access to all of the information you
need to take full advantage of DB2 Universal Database™ and DB2 Connect™
in your business. The DB2 Information Center also documents major DB2
features and components including replication, data warehousing, the
Information Catalog Center, Life Sciences Data Connect, and DB2 extenders.
The DB2 Information Center accessed from a browser has the following
features:
Regularly updated documentation
Keep your topics up-to-date by downloading updated HTML.
Related tasks:
v “Finding topics by accessing the DB2 Information Center from a browser”
on page 368
v “Finding product information by accessing the DB2 Information Center
from the administration tools” on page 370
v “Updating the HTML documentation installed on your machine” on page
372
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give
you any license to these patents. You can send license inquiries, in writing, to:
IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.
The following paragraph does not apply to the United Kingdom or any
other country/region where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY,
OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow
disclaimer of express or implied warranties in certain transactions; therefore,
this statement may not apply to you.
Any references in this information to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those
Web sites. The materials at those Web sites are not part of the materials for
this IBM product, and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it
believes appropriate without incurring any obligation to you.
Licensees of this program who wish to have information about it for the
purpose of enabling: (i) the exchange of information between independently
created programs and other programs (including this one) and (ii) the mutual
use of the information that has been exchanged, should contact:
IBM Canada Limited
Office of the Lab Director
8200 Warden Avenue
Markham, Ontario
L6G 1C7
CANADA
The licensed program described in this document and all licensed material
available for it are provided by IBM under terms of the IBM Customer
Agreement, IBM International Program License Agreement, or any equivalent
agreement between us.
This information may contain examples of data and reports used in daily
business operations. To illustrate them as completely as possible, the examples
include the names of individuals, companies, brands, and products. All of
these names are fictitious, and any similarity to the names and addresses used
by an actual business enterprise is entirely coincidental.
COPYRIGHT LICENSE:
Each copy or any portion of these sample programs or any derivative work
must include a copyright notice as follows:
© (your company name) (year). Portions of this code are derived from IBM
Corp. Sample Programs. © Copyright IBM Corp. _enter the year or years_. All
rights reserved.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of
Microsoft Corporation in the United States, other countries, or both.
Intel and Pentium are trademarks of Intel Corporation in the United States,
other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and
other countries.
Index

SQL messages 207
SQLCODE
  overview 207
SQLSTATE
  overview 207
SQLU-LSN structure 273
sqluvdel - Delete Committed Session 346
sqluvend - Unlink the Device and Release its Resources 343
sqluvget - Reading Data from Device 339
sqluvint - Initialize and Link to Device 336
sqluvput - Writing Data to Device 341
states
  pending 59
storage
  media failure 9
  required for backup and recovery 9
Sun Cluster 3.0, high availability 192
suspended I/O to support continuous availability 167
synchronization
  database partition 132
  node 132
  recovery considerations 132
syntax diagrams
  reading 203

T
table
  relationships 10
table spaces
  recovery 12
  restoring 25
  roll-forward recovery 25
tape backup 69, 72
Terminate Read Log Without a Database Connection API 262
time
  database recovery time 7
timestamps
  conversion, client/server environment 134
Tivoli Storage Manager (TSM)
  backup restrictions 319
  client setup 319
  timeout problem resolution 319
  using 319
  with BACKUP DATABASE command 319
  with RESTORE DATABASE command 319
transactions
  blocking when log directory is full 50
  failure recovery
    crashes 16
    on the failed database partition server 16
    on active database partition server 16
  reducing the impact of failure 11
troubleshooting
  DB2 documentation search 374
  online information 376
TSM archived images 209
tutorials 378
two-phase commit
  protocol 16

U
UNLINK THE DEVICE AND RELEASE ITS RESOURCES (sqluvend) 343
Update History File API 250
UPDATE HISTORY FILE command 234
user exit program
  archive and retrieve considerations 47
  backup 9
  calling format 323
  error handling 323
  for database recovery 323
  logs 9
  sample programs 323
user-defined events 177
userexit database configuration parameter 39

V
variables
  syntax 203
vendor products
  backup and restore 327
  DATA structure 353
  DB2-INFO structure 347
  DELETE COMMITTED SESSION 346
  description 327
  INIT-INPUT structure 351
  INIT-OUTPUT structure 353
  INITIALIZE AND LINK TO DEVICE 336
  operation 327
  READING DATA FROM DEVICE 339
  RETURN-CODE structure 354
  sqluvdel 346
  sqluvend 343
  sqluvget 339
  sqluvint 336
  sqluvput 341
  UNLINK THE DEVICE 343
  VENDOR-INFO structure 350
  WRITING DATA TO DEVICE 341
VENDOR-INFO structure 350
VERITAS Cluster Server 195
  high availability 195
version levels
  version recovery of the database 24

W
warning messages
  overview 207
Windows NT
  failover
    hot standby 183
    mutual takeover 183
    types 183
WRITING DATA TO DEVICE (sqluvput) 341

X
XBSA (Backup Services APIs) 72
Product information
Information regarding DB2 Universal Database products is available by
telephone or by the World Wide Web at
www.ibm.com/software/data/db2/udb
This site contains the latest information on the technical library, ordering
books, client downloads, newsgroups, FixPaks, news, and links to web
resources.
If you live in the U.S.A., you can call one of the following numbers:
v 1-800-IBM-CALL (1-800-426-2255) to order products or to obtain general
information.
v 1-800-879-2755 to order publications.
For information on how to contact IBM outside of the United States, go to the
IBM Worldwide page at www.ibm.com/planetwide