IBM Informix Backup and Restore Guide
Version 10.0/8.5
G251-2269-00
Note!
Before using this information and the product it supports, read the information in “Notices” on page F-1.
Contents
Restoring Save Sets with ISM . . . . . . . . . . . . . . . . . . . . . . . . 6-8
Performing a Complete Restore . . . . . . . . . . . . . . . . . . . . . . . . 6-8
Performing a Physical-Only or Logical-Only Restore . . . . . . . . . . . . . . . . 6-12
Using IBM Informix Server Administrator to Restore Data . . . . . . . . . . . . . . 6-15
Examples of ON-Bar Restore Commands . . . . . . . . . . . . . . . . . . . 6-15
Renaming Chunks During a Restore (IDS) . . . . . . . . . . . . . . . . . . . . 6-30
Key Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-30
New-Chunk Requirements . . . . . . . . . . . . . . . . . . . . . . . . 6-31
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-31
Examples of Renaming Chunks During a Restore . . . . . . . . . . . . . . . . . 6-32
Transferring Data with the Imported Restore . . . . . . . . . . . . . . . . . . . 6-34
Importing a Restore . . . . . . . . . . . . . . . . . . . . . . . . . . 6-35
Initializing High-Availability Data Replication with ON-Bar (IDS) . . . . . . . . . . . 6-37
Restoring Nonlogging Databases and Tables . . . . . . . . . . . . . . . . . . . 6-39
Restoring Table Types . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-39
Using Restartable Restore to Recover Data (IDS) . . . . . . . . . . . . . . . . . . 6-41
Restartable Restore Example . . . . . . . . . . . . . . . . . . . . . . . . 6-41
Restarting a Restore . . . . . . . . . . . . . . . . . . . . . . . . . . 6-42
Resolving a Failed Restore . . . . . . . . . . . . . . . . . . . . . . . . 6-44
Understanding ON-Bar Restore Processes . . . . . . . . . . . . . . . . . . . . 6-46
Warm-Restore Sequence on Dynamic Server . . . . . . . . . . . . . . . . . . 6-46
Cold-Restore Sequence on Dynamic Server . . . . . . . . . . . . . . . . . . . 6-48
Warm-Restore Sequence on Extended Parallel Server . . . . . . . . . . . . . . . 6-49
Cold-Restore Sequence on Extended Parallel Server . . . . . . . . . . . . . . . . 6-50
Backup Scheduler SMI Tables (XPS) . . . . . . . . . . . . . . . . . . . . . . 10-10
sysbuobject . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-11
sysbuobjses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-12
sysbusession . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-12
sysbusm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-12
sysbusmdbspace . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-13
sysbusmlog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-13
sysbusmworker . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-13
sysbuworker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-14
Chapter 15. Performing an External Backup and Restore Using ontape . . . . . . . . . 15-1
Recovering Data Using an External Backup and Restore . . . . . . . . . . . . . . . 15-1
What Is Backed Up in an External Backup? . . . . . . . . . . . . . . . . . . . . 15-2
Rules for an External Backup . . . . . . . . . . . . . . . . . . . . . . . 15-2
Performing an External Backup . . . . . . . . . . . . . . . . . . . . . . . 15-3
Preparing for an External Backup . . . . . . . . . . . . . . . . . . . . . . . 15-3
Blocking and Unblocking Dynamic Server . . . . . . . . . . . . . . . . . . . 15-3
Tracking an External Backup . . . . . . . . . . . . . . . . . . . . . . . . 15-4
What Is Restored in an External Restore? . . . . . . . . . . . . . . . . . . . . 15-4
Using External Restore Commands . . . . . . . . . . . . . . . . . . . . . 15-5
Rules for an External Restore . . . . . . . . . . . . . . . . . . . . . . . 15-5
Performing an External Restore . . . . . . . . . . . . . . . . . . . . . . . 15-6
Initializing HDR with an External Backup and Restore . . . . . . . . . . . . . . . 15-7
Logical Restore . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16-5
Using the archecker Utility to Restore Data . . . . . . . . . . . . . . . . . . . . 16-6
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16-6
Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16-8
The archecker Schema Reference . . . . . . . . . . . . . . . . . . . . . . . 16-10
The CREATE TABLE Statement . . . . . . . . . . . . . . . . . . . . . . 16-10
The CREATE EXTERNAL TABLE Statement . . . . . . . . . . . . . . . . . . 16-11
The DATABASE Statement . . . . . . . . . . . . . . . . . . . . . . . . 16-12
The INSERT Statement . . . . . . . . . . . . . . . . . . . . . . . . . 16-13
The RESTORE Statement . . . . . . . . . . . . . . . . . . . . . . . . 16-14
The SET Statement . . . . . . . . . . . . . . . . . . . . . . . . . . 16-15
SQL Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16-16
Schema Command File Examples . . . . . . . . . . . . . . . . . . . . . . 16-16
The archecker Configuration Parameter Reference . . . . . . . . . . . . . . . . . 16-19
AC_DEBUG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16-20
AC_IXBAR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16-20
AC_LTAPEBLOCK . . . . . . . . . . . . . . . . . . . . . . . . . . 16-20
AC_LTAPEDEV (IDS). . . . . . . . . . . . . . . . . . . . . . . . . . 16-21
AC_MSGPATH . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16-21
AC_PAGESIZE (XPS) . . . . . . . . . . . . . . . . . . . . . . . . . . 16-21
AC_SCHEMA . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16-21
AC_STORAGE . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16-21
AC_TAPEBLOCK . . . . . . . . . . . . . . . . . . . . . . . . . . . 16-22
AC_TAPEDEV (IDS) . . . . . . . . . . . . . . . . . . . . . . . . . . 16-22
AC_VERBOSE . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16-22
Part 5. Appendixes
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . F-1
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . X-1
In This Introduction
This introduction provides an overview of the information in this manual and
describes the conventions it uses.
ON–Bar requires IBM Informix Storage Manager, Version 2.2, IBM Tivoli
Storage Manager, or a third-party storage manager to manage the storage
devices. The backup and restore processes, the ON–Bar commands, and the
ON–Bar configuration parameters are different for IBM Informix Dynamic
Server and IBM Informix Extended Parallel Server.
The ontape utility does not require a storage manager and works on Dynamic
Server only.
Types of Users
This manual is written for the following users:
v Database administrators
v System administrators
v Backup operators
v Technical support personnel
This manual is written with the assumption that you have the following
background:
v Some experience with storage managers, which are applications that
manage the storage devices and media that contain backups
v A working knowledge of your computer, your operating system, and the
utilities that your operating system provides
v Some experience working with relational databases or exposure to database
concepts
v Some experience with database server administration, operating-system
administration, or network administration
The examples in this manual are written with the assumption that you are
using the default locale, en_us.8859-1. This locale supports U.S. English
format conventions for date, time, and currency. In addition, this locale
supports the ISO 8859-1 code set, which includes the ASCII code set plus
many 8-bit characters such as é, è, and ñ.
If you plan to use nondefault characters in your data or your SQL identifiers,
or if you want to conform to the nondefault collation rules of character data,
you need to specify the appropriate nondefault locale.
Dynamic Server
Introduction xiii
For information about how to create and populate the demonstration
databases, see the IBM Informix: DB–Access User's Guide. For descriptions of
the databases and their contents, see the IBM Informix: Guide to SQL Reference.
The scripts that you use to install the demonstration databases reside in the
$INFORMIXDIR/bin directory on UNIX and in the %INFORMIXDIR%\bin
directory on Windows.
For a description of all new features, see the IBM Informix: Getting Started
Guide.
Features from Dynamic Server 10.0
The following table provides information about the new backup and restore
features in IBM Informix Dynamic Server, Version 10.0, that this manual
covers.
Features and where they are described:
v Renaming a chunk to a different path and offset during a cold restore: "Renaming Chunks During a Restore (IDS)" on page 6-30 and page 14-13
v Specifying writing to and reading from a tape until the end of the device with ontape: "Specifying the Tape-Size Parameters" on page 12-5
v A modifiable shell script, alarmprogram.sh, to handle event alarms: "ALARMPROGRAM" on page 9-5
Features from Dynamic Server 9.3
Version 9.3 included features that make the database server easier to install,
use, and manage.
Features and where they are described:
v Dynamic addition of logical logs: "Ensuring That You Have Enough Logical-Log Space" on page 4-7
v The ON–Bar option in IBM Informix Server Administrator: ISA online help
v Backup and restore of nonlogging (RAW) tables using ON–Bar or ontape: "Backing Up Table Types" on page 4-19 and page 13-10
For a complete list of new features in Extended Parallel Server, Version 8.5,
see the IBM Informix: Getting Started Guide manual.
Documentation Conventions
This section describes the conventions that this manual uses. These
conventions make it easier to gather information from this and other volumes
in the documentation set.
KEYWORD: All primary elements in a programming language statement (keywords) appear in uppercase letters in a serif font.
italics: Within text, new terms and emphasized words appear in italics. Within syntax and code examples, variable values that you are to specify appear in italics.
boldface: Names of program entities (such as classes, events, and tables), environment variables, file and pathnames, and interface elements (such as icons, menu items, and buttons) appear in boldface.
monospace: Information that the product displays and information that you enter appear in a monospace typeface.
KEYSTROKE: Keys that you are to press appear in uppercase letters in a sans serif font.
>: This symbol indicates a menu item. For example, "Choose Tools > Options" means choose the Options item from the Tools menu.
Examples of this markup follow:
Dynamic Server
UNIX Only
Windows Only
This markup can apply to one or more paragraphs within a section. When an
entire section applies to a particular product or platform, this is noted as part
of the heading text, for example:
Table Sorting (Linux Only)
Syntax Diagrams
This guide uses syntax diagrams built with the following components to
describe the syntax for statements and all commands other than system-level
commands.
[The printed manual shows a table of syntax-diagram components here, in both PDF and HTML renderings. For example, a repeat arrow over a stack of items such as index_name and table_name indicates optional items of which several are allowed, with a comma required before each repetition. The manual then shows an example diagram for a command with the options -t table, -S server, and -T target and a footnoted segment (footnote 1: see page 17-4).]
The second line in this example diagram has a segment named "Setting the Run Mode," which, according to the diagram footnote, is on page 17-4. This segment is shown in a separate segment diagram (the diagram uses segment start and end components).
To construct a command correctly, start at the top left with the command.
Follow the diagram to the right, including the elements that you want. The
elements in the diagram are case sensitive.
You must also use any punctuation in your statements and commands exactly
as shown in the syntax diagrams.
A variable is a placeholder for a value that you supply, such as an identifier or a literal, depending on the context. Variables are also used to represent complex syntax elements that are expanded in additional syntax diagrams. When a variable appears in a syntax diagram, an example, or text, it is shown in lowercase italic.
The following syntax diagram uses variables to illustrate the general form of a
simple SELECT statement.
SELECT column_name FROM table_name
When you write a SELECT statement of this form, you replace the variables
column_name and table_name with the name of a specific column and table.
Example Code Conventions
Examples of SQL code occur throughout this manual. Except as noted, the
code is not specific to any single IBM Informix application development tool.
If only SQL statements are listed in the example, they are not delimited by
semicolons. For instance, you might see the code in the following example:
CONNECT TO stores_demo
...
COMMIT WORK
DISCONNECT CURRENT
To use this SQL code for a specific product, you must apply the syntax rules
for that product. For example, if you are using DB–Access, you must delimit
multiple statements with semicolons. If you are using an SQL API, you must
use EXEC SQL at the start of each statement and a semicolon (or other
appropriate delimiter) at the end of the statement.
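For instance, the earlier fragment adjusted for DB–Access would look like the following sketch (stores_demo is the demonstration database used throughout this manual):

```sql
-- DB-Access form: multiple statements delimited with semicolons
CONNECT TO stores_demo;
COMMIT WORK;
DISCONNECT CURRENT;
```

In an SQL API such as ESQL/C, each statement would instead begin with EXEC SQL and end with a semicolon, for example EXEC SQL COMMIT WORK;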
Tip: Ellipsis points in a code example indicate that more code would be
added in a full application, but it is not necessary to show it to describe
the concept being discussed.
Online files and their formats:
v TOC Notes (HTML): The TOC (table of contents) notes file provides a comprehensive directory of hyperlinks to the release notes, the fixed and known defects file, and all the documentation notes files for individual manual titles.
v Documentation Notes (HTML, text): The documentation notes file for each manual contains important information and corrections that supplement the information in the manual, or information that was modified since publication.
v Release Notes (HTML, text): The release notes file describes feature differences from earlier versions of IBM Informix products and how these differences might affect current products. For some products, this file also contains information about any known problems and their workarounds.
v Machine Notes (text; non-Windows platforms only): The machine notes file describes any platform-specific actions that you must take to configure and use IBM Informix products on your computer.
v Fixed and Known Defects File (text): This file lists issues that have been identified with the current version. It also lists customer-reported defects that have been fixed in both the current version and in previous versions.
Before Installation
All online notes are located in the /doc directory of the product CD. The
easiest way to access the documentation notes, the release notes, and the fixed
and known defects file is through the hyperlinks from the TOC notes file.
The machine notes file and the fixed and known defects file are only provided
in text format.
After Installation
Dynamic Server
The fixed and known defects file is named ids_win_fixed_and_known_defects_version.txt, where version is the product version; for Version 10.0, the file is ids_win_fixed_and_known_defects_10.0.txt.
On UNIX platforms, use the finderr command to read the error messages and
their corrective actions.
Dynamic Server
On Windows, use the Informix Error Messages utility to read error messages
and their corrective actions. To display this utility, choose Start > Programs >
IBM Informix Dynamic Server version > Informix Error Messages from the
taskbar.
End of Dynamic Server
You can also access these files from the IBM Informix Online Documentation
site at http://www.ibm.com/software/data/informix/pubs/library/.
Manuals
Online Manuals
A CD that contains your manuals in electronic format is provided with your
IBM Informix products. You can install the documentation or access it directly
from the CD. For information about how to install, read, and print online
manuals, see the installation insert that accompanies your CD. You can also
obtain the same online manuals from the IBM Informix Online Documentation
site at http://www.ibm.com/software/data/informix/pubs/library/.
Printed Manuals
To order hardcopy manuals, contact your sales representative or visit the IBM
Publications Center Web site at
http://www.ibm.com/software/howtobuy/data.html.
Online Help
IBM Informix online help, provided with each graphical user interface (GUI),
displays information about those interfaces and the functions that they
perform. Use the help facilities that each GUI provides to display the online
help.
Accessibility
IBM is committed to making our documentation accessible to persons with
disabilities. Our books are available in HTML format so that they can be
accessed with assistive technology such as screen reader software. The syntax
diagrams in our manuals are available in dotted decimal format, which is an
accessible format that is available only if you are using a screen reader. For
more information about the dotted decimal format, see the Accessibility
appendix.
IBM Informix Dynamic Server Version 10.0 and CSDK Version 2.90
Documentation Set
The following tables list the manuals that are part of the IBM Informix
Dynamic Server, Version 10.0 and the CSDK Version 2.90, documentation set.
PDF and HTML versions of these manuals are available at
http://www.ibm.com/software/data/informix/pubs/library/. You can order
hardcopy versions of these manuals from the IBM Publications Center at
http://www.ibm.com/software/howtobuy/data.html.
Table 1. Database Server Manuals (continued)
v J/Foundation Developer's Guide: Writing user-defined routines (UDRs) in the Java programming language for Informix Dynamic Server with J/Foundation.
v Large Object Locator DataBlade Module User's Guide: Using the Large Object Locator, a foundation DataBlade module that can be used by other modules that create or store large-object data. The Large Object Locator enables you to create a single consistent interface to large objects and extends the concept of large objects to include data stored outside the database.
v Migration Guide: Conversion to and reversion from the latest versions of Informix database servers; migration between different Informix database servers.
v Optical Subsystem Guide: The Optical Subsystem, a utility that supports the storage of BYTE and TEXT data on optical disk.
v Performance Guide: Configuring and operating IBM Informix Dynamic Server to achieve optimum performance.
v R-Tree Index User's Guide: Creating R-tree indexes on appropriate data types, creating new operator classes that use the R-tree access method, and managing databases that use the R-tree secondary access method.
v SNMP Subagent Guide: The IBM Informix subagent that allows a Simple Network Management Protocol (SNMP) network manager to monitor the status of Informix servers.
v Storage Manager Administrator's Guide: Informix Storage Manager (ISM), which manages storage devices and media for your Informix database server.
v Trusted Facility Guide: The secure-auditing capabilities of Dynamic Server, including the creation and maintenance of audit logs.
v User-Defined Routines and Data Types Developer's Guide: How to define new data types and enable user-defined routines (UDRs) to extend IBM Informix Dynamic Server.
v Virtual-Index Interface Programmer's Guide: Creating a secondary access method (index) with the Virtual-Index Interface (VII) to extend the built-in indexing schemes of IBM Informix Dynamic Server. Typically used with a DataBlade module.
v Virtual-Table Interface Programmer's Guide: Creating a primary access method with the Virtual-Table Interface (VTI) so that users have a single SQL interface to Informix tables and to data that does not conform to the storage scheme of Informix Dynamic Server.
IBM Welcomes Your Comments
We want to know about any corrections or clarifications that you would find
useful in our manuals, which will help us improve future versions. Include
the following information:
v The name and version of the manual that you are using
v Section and page number
v Your suggestions about the manual
This email address is reserved for reporting errors and omissions in our
documentation. For immediate help with a technical problem, contact IBM
Technical Support.
In This Chapter
Two utilities are provided for backing up and restoring database server data.
This chapter explains basic backup and restore concepts for Informix database
servers and covers the following topics:
v Comparing ON–Bar and ontape
v Planning a recovery strategy
v Planning a backup system for a production database server
You do not always have to back up all the storage spaces. If some tables
change daily but others rarely change, it is inefficient to back up the storage
spaces that contain the unchanged tables every time that you back up the
database server. You need to plan your backup schedule carefully to avoid
long delays for backing up or restoring data.
Important: If disks and other media are completely destroyed and need to be
replaced, you need at least a level-0 backup of all storage spaces
and relevant logical logs to restore data completely on the
replacement hardware.
For details, see Chapter 4, “Backing Up with ON-Bar,” on page 4-1, and
Chapter 13, “Backing Up with ontape,” on page 13-1.
What Is a Logical-Log Backup?
A logical-log backup is a copy to disk or tape of all full logical-log files. The
logical-log files store a record of database server activity that occurs between
backups.
To free full logical-log files, back them up. The database server reuses the
freed logical-log files for recording new transactions. For a complete
description of the logical log, see your IBM Informix: Administrator's Guide.
If you turn on continuous logical-log backup, the database server backs up each
logical log automatically when it becomes full. If you turn off continuous
logical-log backup, the logical-log files continue to fill. If all logical logs are
filled, the database server hangs until the logs are backed up.
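As a sketch, the ON–Bar log-backup invocations commonly documented for this purpose look like the following; the exact flags can vary by server version, so verify them against the ON–Bar reference for your release:

```shell
# Back up all full logical-log files (a manual log backup)
onbar -b -l

# Start continuous logical-log backup: each logical log is
# backed up automatically as it becomes full
onbar -b -l -C

# Back up the current logical log as well, switching to a new log
onbar -b -l -c
```

These commands assume that a storage manager is already configured to receive the log backups.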
Save the logical-log backups from the last two level-0 backups so that you can
use them to complete a restore. If a level-0 backup is inaccessible or unusable,
you can restore data from an older backup, if you have one. If any of the
logical-log backups are also inaccessible or unusable, however, you cannot roll
forward the transactions from those logical-log files or from any subsequent
logical-log files.
Warning: You will lose transactions in logical-log files that are not backed up
or salvaged.
If the disks that contain the storage spaces with the logical logs are damaged,
the transactions after midnight on Tuesday might be lost. To restore these
transactions from the last logical-log backup, try to salvage the logical logs
before you repair or replace the bad disk and then perform a cold restore.
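That sequence can be sketched with ON–Bar commands as follows; the flags shown are the commonly documented ones, and you should confirm them for your server version:

```shell
# 1. Salvage any logical logs still on disk before replacing the disk
onbar -b -l -s

# 2. After the disk is repaired or replaced, perform a cold restore
#    (physical restore followed by logical replay of backed-up logs)
onbar -r
```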
[Figure: a transaction timeline running from Monday 10 P.M. through Tuesday midnight to Wednesday 11 A.M.]
For more information, see “Backing Up Logical Logs” on page 4-20 and
“Backing Up Logical-Log Files with ontape” on page 13-12.
What Is a Restore?
A restore re-creates database server data from backed-up storage spaces and
logical-log files. A restore re-creates database server data that has become
inaccessible because of any of the following conditions:
v You need to replace a failed disk that contains database server data.
v A logic error in a program has corrupted a database.
v You need to move your database server data to a new computer.
v A user accidentally corrupted or destroyed data.
To restore data up to the time of the failure, you must have at least one
level-0 backup of each of your storage spaces from before the failure and the
logical-log files that contain all transactions since these backups.
Cold Restore: As Figure 1-4 shows, a cold restore salvages the logical logs,
and restores the critical dbspaces (root dbspace and the dbspaces that contain
the physical log and logical-log files), other storage spaces, and the logical
logs.
You can perform a cold restore onto a computer that is not identical to the
one on which the backup was performed by giving any chunk a new
pathname and offset during the restore.
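A minimal sketch of such a rename during a cold restore follows; the chunk paths and offsets are illustrative values, and the option syntax should be checked against the rename-chunks material on page 6-30:

```shell
# Cold restore that maps an old chunk path/offset to a new one;
# /old/chunk1 and /new/chunk1 (offset 0) are example values only
onbar -r -rename -p /old/chunk1 -o 0 -n /new/chunk1 -o 0
```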
Logical Restore: As Figure 1-6 shows, the database server replays the logical logs to reapply any database transactions that occurred after the last backup. The logical restore applies only to the physically restored storage spaces.
[Figure 1-6: logged operations such as INSERT statements are reapplied to the root dbspace, dbspace 1, and dbspace 2.]
For more information, see Chapter 6, “Restoring Data with ON-Bar,” on page
6-1 and Chapter 14, “Restoring with ontape,” on page 14-1.
The ontape utility does not use the sysutils database or the emergency boot
files. The ontape utility supports remote backup devices on other hosts but
ISM does not. ISM supports different sets of tape drives on various hardware
platforms.
The ontape utility supports two simultaneous sessions, one for physical
backup or restore, and one for log backup. Each ISM instance has a limit of
four simultaneous sessions.
The ontape utility allows you to change the logging mode of a database. If you use ON–Bar, use the ondblog utility instead to change the logging mode.
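For example, ondblog usage might look like the following sketch; stores_demo is the demonstration database and the mode keywords should be verified against the ondblog documentation for your version:

```shell
# Change stores_demo to unbuffered logging
ondblog unbuf stores_demo

# Turn logging off for stores_demo
ondblog nolog stores_demo
```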
Warning: The backup tapes that ontape and ON–Bar produce are not
compatible! You cannot create a backup with ontape and restore it
with ON–Bar, or vice versa.
Ask the following questions to assist in deciding how often and when you
want to back up the data:
v Does your business have down time during which the system can be restored?
v If your system runs 24x7 (no down time), is there an off-peak time when a restore could occur?
v If a restore must occur during a peak period, how critical is the restore time?
v Which data can you restore with the database server online (warm restore)?
Which data must be restored offline (cold restore)?
v How many storage devices are available to back up and restore the data?
Scheduling Backups
Table 1-3 shows a sample backup plan for a small or medium-sized system.
Tailor your backup plan to the requirements of your system. The more often
the data changes and the more important it is, the more frequently you need
to back it up. For more information, see “Choosing a Backup Level” on page
4-6.
Table 1-3. Sample Backup Plan
v Complete (level-0) backup: Saturday at 6 P.M.
Important: Perform a level-0 backup after you change the physical schema,
such as adding a chunk to a storage space. (See “Collecting
Information About Your System Before a Backup” on page 4-7.)
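One way to implement such a schedule on UNIX is with cron. The following crontab entry is a hypothetical sketch: it assumes that onbar is on the PATH and that the Informix environment variables are set for the cron user, which a real script would normally arrange first:

```shell
# Level-0 backup of all storage spaces, Saturdays at 6 P.M.
0 18 * * 6 onbar -b -L 0
```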
In This Chapter
This chapter introduces the components of ON–Bar and describes how it
works. The following topics are covered:
v Where to find information on ON–Bar, ISM, and TSM
v ON–Bar for Dynamic Server
v ON–Bar for Extended Parallel Server
v ON–Bar utilities
Table 2-1 on page 2-2 shows which database server versions support ON–Bar,
IBM Informix Storage Manager (ISM), Version 2.2, and Tivoli Storage
Manager (TSM), Version 5.1.6 and later.
ON–Bar communicates with both the database server and the storage
manager. Use the onbar command to start a backup or restore. For a backup
session, ON–Bar requests the contents of storage spaces and logical logs from
the database server and passes them to the storage manager. The storage
manager stores the data on storage media. For a restore session, ON–Bar
requests the backed up data from the storage manager and restores it on the
database server.
The onbar_d processes write status and error messages to the ON–Bar activity
log and write information to the emergency boot file that is used in a cold
restore. For more details, see “Backup Sequence on Dynamic Server” on page
4-29.
[Figure: onbar_d passes data between the database server (tracked in the sysutils database) and the storage manager, which writes to the backup media.]
The onbar_d, onbar_w, and onbar_m processes write status and error
messages to the ON–Bar activity log. The onbar_w and onbar_m processes
write information to the emergency boot files that are used in a cold restore.
Backup Scheduler
The onbar-driver (onbar_d) communicates backup or restore requests to the
Backup Scheduler on Extended Parallel Server. The Backup Scheduler tracks all
active and scheduled backup and restore activities for all the coservers. The
Backup Scheduler creates one or more sessions, each of which contains a list
of objects to back up or restore. It starts onbar-worker processes to back up or
restore the objects and coordinates the session activity. A session is a single
backup or restore request.
For details, see “Backup Sequence on Extended Parallel Server” on page 4-30.
[Figure: on Extended Parallel Server, onbar_d and onbar_m use the sysutils database and the activity log; onbar_w processes back up data through XBSA to the backup media.]
The ISM server resides on the same computer as ON–Bar and the Informix
database server; your storage devices are attached to this computer as well.
ISM can store data on simple tape drives, optical disk devices, and file
systems. ISM also performs the following functions:
TSM stores data on separate TSM servers. Your Informix database servers are TSM clients; they use the library that implements XBSA functions for TSM (the Informix Interface for TSM). Informix Interface for TSM is part of your Informix database server installation. TSM is distributed separately.
TSM efficiently manages disk, optical, and tape library resources. TSM
provides the following functions:
v Reduces network complexity with interfaces and functions that span
network environments
v Increases administrator productivity by automating repetitive processes,
scheduling unattended processes, and administering TSM from anywhere in
the network
v Reduces risk of data loss with scheduled routine backups
v Optimizes existing storage resources with automated movement of files
from client file systems to TSM storage
Make sure that the storage manager has passed the Informix validation
process. The validation process is specific to the backup and restore product
version, the operating-system version, and the Informix database server
version.
The XBSA Interface
ON–Bar and the storage manager communicate through the X/Open Backup
Services Application Programmer’s Interface (XBSA), which enables the
storage manager to manage media for the database server. By using an
open-system interface to the storage manager, ON–Bar can work with a
variety of storage managers that also use XBSA.
Each storage manager develops and distributes a unique version of the XBSA
shared library. You must use the version of the XBSA shared library provided
with the storage manager. For example, if you use ISM, use the XBSA shared
library provided with ISM. ON–Bar and the XBSA shared library must both be compiled as 32-bit or both as 64-bit.
The onsmsync utility uses the following tables to track its operations:
v The bar_ixbar table contains the history of all unexpired successful backups
in all timelines. It is maintained and used by onsmsync only.
v The bar_syncdeltab table is normally empty except when onsmsync is
running. It is maintained and used by onsmsync only.
For a description of the content of these tables, see Chapter 10, “ON-Bar
Catalog Tables,” on page 10-1.
The Emergency Boot Files
The ON–Bar emergency boot files reside in the $INFORMIXDIR/etc directory
on UNIX and in the %INFORMIXDIR%\etc directory on Windows. The
emergency boot files contain the information that you need to perform a cold
restore and are updated after every backup.
ON–Bar must be able to restore objects from a storage manager even when
the tables in the sysutils database are not available. During a cold restore, the
database server is not available to access sysutils, so ON–Bar obtains the
information it needs for the cold restore from the emergency boot file.
Warning: Do NOT modify the emergency boot files in any way. Doing so might cause ON–Bar to select the wrong backup during a restore.
During the cold-restore process, ON–Bar follows these steps to create a restore
boot file and restore data.
For a list of ON–Bar informational, warning, and error messages, use the
finderr or Find Error utility or view IBM Informix: Error Messages at
http://www.ibm.com/software/data/informix/pubs/library/.
You can update the value of BAR_DEBUG by editing the ONCONFIG file.
When the updated value of BAR_DEBUG takes effect depends on your
database server:
v For Dynamic Server, the new value of BAR_DEBUG takes effect
immediately. Any ON-Bar command that is currently executing when you
update BAR_DEBUG reads the new value and prints debug messages at the
new level, as do all subsequent commands.
v For Extended Parallel Server, the new value of BAR_DEBUG takes effect
only for ON-Bar commands executed after you update the value of
BAR_DEBUG.
In This Chapter
This chapter provides the information that you need to plan and to set up
ON–Bar with a storage manager:
v Installing and configuring a storage manager
v Configuring ON–Bar
v Steps to take before making a test backup
v Choosing storage managers and storage devices
Important: Some storage managers let you specify the kind of data to back
up to specific storage devices. Configure the storage manager to
back up logical logs to one device and storage spaces to a
different device for more efficient backups and restores.
After you configure the storage manager and storage devices and label
volumes for your database server and logical-log backups, you are ready to
initiate a backup or restore operation with ON–Bar.
Configuring ISM
For instructions on how to set up ISM to work with ON–Bar, see the
IBM Informix: Storage Manager Administrator's Guide. The ISM server is
installed with the Informix database server on UNIX or Windows. Several
database server instances can share one ISM instance.
In addition, you must configure Informix Interface for TSM and perform other
TSM configuration tasks on your Informix database server computer. These
tasks are explained in the following sections.
Use the sample dsm.opt.smp and dsm.sys.smp files distributed with the TSM
API to help you get started quickly.
You must be the root user to edit the dsm.opt and dsm.sys files.
See TSM Installing the Clients and TSM Trace Facility Guide for information
regarding options you can specify in these files.
Editing the TSM Client User Options File: The TSM client user options file,
dsm.opt, must refer to the correct TSM server stanza, as listed in the dsm.sys
file.
Editing the TSM Client System Options File: The TSM client systems
options file, dsm.sys, must refer to the correct TSM server address and
communication method.
The following TSM options are the most important to set in the dsm.sys file:
v SERVERNAME: to specify the name you want to use to identify a server
when it is referred to in the dsm.opt file and to create a stanza that contains
options for that server
v COMMMETHOD: to identify the communication method
v TCPSERVERADDRESS: to identify the TSM server
v PASSWORDACCESS: to specify GENERATE to store the TSM password
The SERVERNAME option in the dsm.opt and dsm.sys files defines server
stanza names only. The TCPSERVERADDRESS option controls which server is
actually contacted.
You can set up multiple server stanzas in the dsm.sys file. See the Tivoli
Storage Manager Backup-Archive Client Installation and User’s Guide for
information about multiple server stanzas.
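As a sketch of how these options fit together, a minimal pair of client files might look like the following; the stanza name and server address are placeholders, so substitute the values for your own TSM server:

```
* dsm.sys -- client system options file (one stanza per TSM server)
SERVERNAME        tsm_server1
COMMMETHOD        TCPip
TCPSERVERADDRESS  tsm.example.com
PASSWORDACCESS    GENERATE

* dsm.opt -- client user options file (selects the stanza to use)
SERVERNAME        tsm_server1
```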
XBSA_ver is the release version of the XBSA shared library for the storage
manager. sm_name is the name of the storage manager. sm_ver is the
storage-manager version. The maximum field length is 128 characters.
The following example shows the ISM definition in the sm_versions file:
1|1.0.1|ism|ISM.2.20.UC1.114|
The following example shows the TSM definition in the sm_versions file:
1|5.1.6|adsm|5
4. If all coservers share the sm_versions file in the etc subdirectory, the
sm_versions file should have an entry for each storage-manager brand.
If the etc subdirectory is not shared between coserver nodes, specify one
line in the sm_versions file for the storage manager in use on that
coserver node.
End of Extended Parallel Server
5. Stop any ON–Bar processes (onbar_d, onbar_w, or onbar_m) that are
currently running and restart them for the changes to take effect.
Configuring Multiple Storage Managers on Coserver Nodes (XPS)
Extended Parallel Server allows multiple storage-manager instances, but only
one instance per node. You can configure and use different storage-manager
brands for different purposes. For best performance, run onbar-worker
processes on all nodes that have storage devices. For example, suppose you
have two storage devices, a tape drive and a jukebox, connected to
different nodes. When an onbar-worker runs on each node, the data moves
faster because it does not have to travel over the network. For complex
examples, see “Using ON-Bar Configuration Parameters on Extended Parallel
Server” on page 3-12.
Important: Each hardware MPP node that contains a coserver that runs
onbar-worker processes must have a local copy of the
storage-manager version of the XBSA shared library.
Validating Your Storage Manager
When you convert or revert an Informix database server, the storage manager
that you used on the old version might not be validated for the version that
you are migrating to. Verify that the storage-manager vendor has successfully
completed the Informix validation process for the database server version and
platform. If not, you need to install a validated storage manager before you
perform backups with ON–Bar.
Configuring ON-Bar
ON–Bar is installed with your Informix database server software. To use
ON–Bar with installed storage managers, you set specific parameters in the
ONCONFIG file. The following section describes the required ON–Bar
configuration parameters.
Dynamic Server
Use the onconfig.std file as a template for single coservers. Use the
onconfig.xps file as a template for multiple coservers.
To prevent the loss of user changes in the existing onbar script, the onbar
script is distributed as a file named onbar.sh (UNIX) or onbar.bat (Windows).
When the install program installs the database server files over an existing
installation, it checks whether any difference exists between the new onbar
script and the old onbar script.
v If the two scripts are the same, the installation program renames the
onbar.sh or onbar.bat file to onbar, the new onbar script overwrites the old
onbar script, and no user data is lost.
v If a difference exists between the new onbar script and the old onbar script,
the installation program renames the onbar.sh or onbar.bat file to onbar,
renames the old onbar script to the form onbar.date, and issues a message
to the user that the existing onbar script was renamed.
If you see a message that the old onbar script has been renamed by
appending a date, look at the new onbar script (filename onbar) and integrate
the contents of the old onbar script into the new onbar script. For example, if
onbar has been renamed to onbar.2000.12.15, integrate the contents of
onbar.2000.12.15 into onbar.
For information on using the onbar script, see “Customizing ON-Bar and
Storage-Manager Commands” on page 8-2. For information on installing the
database server, see your IBM Informix: Installation Guide.
Setting ISM Environment Variables and ONCONFIG Parameters
When you use ISM, you need to set certain environment variables. For
information, see the IBM Informix: Storage Manager Administrator's Guide.
Dynamic Server
You can set these environment variables in the onbar script or in your
environment.
End of Dynamic Server
You can set these environment variables in your environment if you start
onbar -w manually or before you start the database server, or set them in
the start_worker script.
If you use ISM, you can specify the volume pool names for storage spaces
and logical logs in the ISM_DATA_POOL and ISM_LOG_POOL parameters in
the ONCONFIG file. If you do not set these parameters, they default to the
volume pool names ISMData and ISMLogs, respectively.
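For instance, making those defaults explicit in the ONCONFIG file looks like this sketch (the pool names shown are the defaults named above):

```
# ONCONFIG: ISM volume pools for storage spaces and logical logs
ISM_DATA_POOL ISMData
ISM_LOG_POOL  ISMLogs
```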
Setting the Informix Interface for TSM Environment Variables
When you use the Informix Interface for TSM, you need to set certain
environment variables in the user’s environment. The following table
describes these environment variables.
Table 3-1. Informix Interface for TSM Environment Variables
Environment Variable Description
DSMI_CONFIG The fully qualified name for the client user
option file (dsm.opt). The default value is
dsm.opt in the TSM API installation directory.
DSMI_DIR Points to the TSM API installed path. This
environment variable needs to be defined only if
the TSM API is installed in a different path from
the default path. The DSMI_DIR environment
variable is also used to find the dsm.sys file.
DSMI_LOG Points to the directory that contains the API error
log file (dsierror.log).
For error log files, create a directory for the error
logs, then set the DSMI_LOG
environment variable to that directory. The API
error log file must be writable by the user who
performs the backup.
The following example shows how to set up these environment variables for
Solaris 32-bit if the TSM API is installed in the /opt/Tivoli/tsm/client/api
directory:
export DSMI_CONFIG=/opt/Tivoli/tsm/client/api/bin/dsm.opt
export DSMI_DIR=/opt/Tivoli/tsm/client/api/bin
export DSMI_LOG=/home/user_a/logdir
UNIX Only
For example, if you are using ISM, you can do either of the following:
v Link $INFORMIXDIR/lib/ibsad001.so to $INFORMIXDIR/lib/libbsa.so
v Set BAR_BSALIB_PATH to $INFORMIXDIR/lib/libbsa.so
For example, if you are using TSM, you can do either of the following:
v Link $INFORMIXDIR/lib/ibsad001.so to $INFORMIXDIR/lib/libtxbsa.so
v Set BAR_BSALIB_PATH to $INFORMIXDIR/lib/libtxbsa.so
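As a sketch of the ISM linking option on UNIX (assuming INFORMIXDIR is set to your Informix installation directory):

```shell
# Make the default XBSA library name point at ISM's library.
# Assumes INFORMIXDIR is set to your Informix installation directory.
ln -sf "$INFORMIXDIR/lib/libbsa.so" "$INFORMIXDIR/lib/ibsad001.so"
```

The equivalent without a link is to set BAR_BSALIB_PATH to $INFORMIXDIR/lib/libbsa.so in the ONCONFIG file.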
Windows Only
On Windows, because no default XBSA shared library name exists, you must
specify its name and location in the BAR_BSALIB_PATH configuration
parameter. If you are using ISM, set BAR_BSALIB_PATH to
%ISMDIR%\bin\libbsa.dll.
End of Windows Only
If you are using a third-party storage manager, ON–Bar must use the version
of the XBSA library that the storage-manager manufacturer provides. For
more information, see “BAR_BSALIB_PATH” on page 9-8 and your release
notes.
The XBSA library must be present on each coserver node where you are
running onbar-worker processes. Each onbar_worker needs to dynamically
link to the functions in the XBSA library. The XBSA library must be either on
a local disk or NFS-mounted disk.
To find out whether onbar_worker processes can share libraries and whether
the sharing can be done through NFS, check your storage-manager
documentation.
If you use the onconfig.std template to configure a single coserver with one
storage manager, copy the section “Storage-Manager instances” from
onconfig.xps into your ONCONFIG file. Use the onconfig.xps template to
configure multiple storage managers.
Warning: Be careful not to get the shared libraries for the two products or
versions mixed up.
The following configuration parameters control ON–Bar. Each entry gives
the reference page, the purpose, and whether the parameter is
storage-manager specific or global:
v BAR_ACT_LOG (page 9-6): Specifies the location and name of the ON–Bar
activity log file. Always global.
v BAR_BOOT_DIR (page 9-8): Specifies the directory for the emergency
boot files. Always global.
v BAR_BSALIB_PATH (page 9-8): Specifies the path of the storage-manager
library. Can be storage-manager specific. To determine whether
BAR_BSALIB_PATH is supported on your platform, check your release
notes. Specify BAR_BSALIB_PATH in the storage-manager section if the
libraries are not in the same location on all nodes.
v BAR_DBS_COSVR (page 9-10): Specifies coservers that send backup and
restore data to the storage manager. Always storage-manager specific.
v BAR_HISTORY (page 9-11): Specifies whether the sysutils database
maintains a backup history. Always global.
v BAR_IDLE_TIMEOUT (page 9-11): Specifies the maximum number of
minutes that an onbar-worker process is idle before it is shut down.
Always global.
# Storage-Manager Section
BAR_SM 1
BAR_WORKER_COSVR 1-4,7
END
After you verify that ON–Bar and your storage manager are set up correctly,
run ON–Bar on your test database to make sure that you can back up and
restore data. For more information, follow the instructions in Chapter 4,
“Backing Up with ON-Bar,” on page 4-1.
If you choose a different storage manager, consider whether it has the features
that you need to back up your storage spaces and logical logs. When you
choose storage devices, make sure that they are compatible with the storage
manager that you choose. The storage devices should have the speed and
capacity that your backups require. The storage manager should be easy to
use and work on your operating system.
Features That ISM Supports
ISM fulfills the following storage-manager requirements:
v ISM allows you to back up logical logs and storage spaces to different
devices and to specify whether to use encryption or compression for data.
v ISM can write the output of parallel backups to a single device, medium, or
volume. Some backup devices can write data faster than the disks used to
hold storage spaces can be read.
v ISM can automatically switch from one tape device to another when the
volume in the first device fills.
v ISM allows migration of data from one backup medium to another.
For speed, you can back up logical logs or storage spaces to disk, but you
must move them later to tape or other removable media or your disk will
become full.
v ISM allows you to clone copies of backup data for on-site and off-site
storage.
v ISM uses automatic expiration of data. After all data on a backup
medium expires, you can reuse the medium.
Features That ISM Does Not Support
ISM does not support the following features. Third-party storage managers
might support these features.
v Distributing a single data stream across multiple devices simultaneously,
which improves throughput if you have several slow devices
v Using different encryption or compression methods for specified storage
spaces or databases
v Scheduling backups
v Support for devices such as tape libraries, jukeboxes, silos, tape
autochangers, and stackers
v Remote host operations
In This Chapter
This chapter explains how to use the onbar utility to back up and verify
storage spaces (dbspaces, blobspaces, and sbspaces) and logical-log files. The
onbar utility is a wrapper to onbar_d, the ON–Bar driver. Use any of the
following methods to execute ON–Bar backup and restore commands:
v Issue ON–Bar commands.
To execute ON–Bar commands, you must be user informix or a member of
the bargroup group on UNIX or a member of the Informix-Admin group
or user informix in Windows. (For more information, see “Creating the
bargroup Group (UNIX)” on page 3-7.)
v Include ON–Bar and ISM commands in a shell or batch script.
For information, see “Customizing ON-Bar and Storage-Manager
Commands” on page 8-2.
v Call ON–Bar from a job-scheduling program.
v Set event alarms that trigger a logical-log backup.
For information, see “Backing Up Logical Logs on Dynamic Server” on
page 4-20 and “Backing Up Logical Logs on Extended Parallel Server” on
page 4-25.
Notes:
1 See page 4-8
2 See page 4-19
3 See page 4-20
4 See page 4-22
5 See page 5-1
6 Dynamic Server Only
7 See page 6-15
8 See page 7-15
9 See page 8-11
10 Extended Parallel Server Only
Important: For use in an emergency, you should have a backup copy of the
current version of the following administrative files. You will need
to restore these files if you need to replace disks or if you restore
to a second computer system (imported restore).
The following table lists the administrative files that you should back up.
Make sure your storage manager is ready to receive data before you begin a
backup or restore. To improve performance, it is recommended that you
reserve separate storage devices for storage-space and logical-log backups.
Label and mount all volumes in the storage device. The backup or restore
might pause until you mount the requested tape or optical disk.
What Is a Whole-System Backup? (IDS)
A whole-system backup (onbar -b -w) is a serial backup of all storage spaces
and logical logs based on a single checkpoint. The time of that checkpoint
is stored with the backup information. The advantage of using a
whole-system backup is that
you can restore the storage spaces with or without the logical logs. Because
the data in all storage spaces is consistent in a whole-system backup, you do
not need to restore the logical logs to make the data consistent. For an
example, see “Performing a Whole-System Backup (IDS)” on page 4-17.
What Is a Standard Backup?
A standard backup (onbar -b) is a parallel backup of selected or all storage
spaces and the logical logs. In a standard backup, the database server
performs a checkpoint for each storage space as it is backed up. Therefore,
you must restore the logical logs from a standard backup to make the data
consistent. For an example, see “Performing a Level-0 Backup of All Storage
Spaces” on page 4-14.
What Is an Incremental Backup?
An incremental backup of a storage space backs up only those pages that have
been modified since the last backup of the storage space. ON-Bar supports the
following types of backups:
v Full backups, also called level-0 backups
v Incremental backups of full backups, also called level-1 backups
v Incremental backups of incremental backups, also called level-2 backups
Tip: It is good practice to create a backup schedule that keeps level-1 and
level-2 backups small and to schedule frequent level-0 backups. With
such a backup schedule, you avoid having to restore large level-1 and
level-2 backups or many logical-log backups.
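For example, such a schedule might issue the following commands; the day assignments here are illustrative, not a requirement:

```shell
onbar -b -L 0   # Sunday: level-0 (full) backup of all storage spaces
onbar -b -L 1   # Wednesday: changes since Sunday's level-0 backup
onbar -b -L 2   # other days: changes since Wednesday's level-1 backup
```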
Level-0 Backups
Level-0 backups can be time-consuming because ON–Bar writes all the disk
pages to backup media. Level-1 and level-2 backups might take almost as
much time as a level-0 backup because the database server must scan all the
data to determine what has changed since the last backup. It takes less time to
restore data from level-0, level-1, and level-2 backups than from level-0
backups and a long series of logical-log backups.
Level-1 Backups
A level-1 backup takes less space and might take less time than a level-0
backup because only data that changed since the last level-0 backup is copied
to the storage manager.
Dynamic Server
After you complete the backup, verify it with the archecker utility. For more
information, see Chapter 5, “Verifying Backups,” on page 5-1.
Monitor the logs so that you can back them up before they fill. If insufficient
space exists in the logical log, the database server will stop responding. If the
database server stops responding, add more logical-log files and retry the
ON–Bar command.
Extended Parallel Server keeps one logical log reserved for backups. That way,
enough log space exists for you to back up the logical logs to free enough log
space to back up storage spaces.
End of Extended Parallel Server
Only online storage spaces are backed up. Use the onstat -d utility to
determine which storage spaces are online. After you begin the backup,
monitor its progress in the ON–Bar activity log and database server message
log.
Important: You must back up each storage space at least once. ON–Bar
cannot restore storage spaces that it has never backed up.
Backup Syntax
Backing Up Storage Spaces and Logical Logs:

onbar -b [ -p (1) ] [ -q session (1) ] [ -S (1) ] [ -v (1) ]
         [ -L level ] [ -O ] [ -f filename (2) | dbspace_list ]
         [ -w (2) ] [ -F (2) ]

Notes:
1 Extended Parallel Server Only
2 Dynamic Server Only
Dynamic Server
For example, if you add a new dbspace dbs1, you will see a warning in the
message log that asks you to perform a level-0 backup of the root dbspace
and the new dbspace. If you attempt an incremental backup of the root
dbspace or the new dbspace instead, ON–Bar automatically performs a level-0
backup of the new dbspace.
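In that situation, a level-0 backup of the affected spaces might look like the following sketch, where dbs1 is the example dbspace above and rootdbs stands in for the name of your root dbspace:

```shell
onbar -b -L 0 rootdbs dbs1   # level-0 backup of the root dbspace and the new dbspace
```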
For Extended Parallel Server, back up the root dbspace and the new or
modified dbspace on the coserver where the change occurred.
End of Extended Parallel Server
Tip: Although you no longer need to back up immediately after adding a log
file, your next backup should be level-0 because the data structures have
changed.
Warning: If you create a new storage space with the same name as a deleted
storage space, perform a level-0 backup twice:
1. Back up the root dbspace after you drop the storage space and
before you create the new storage space with the same name.
2. After you create the new storage space, back up the root
dbspace and the new storage space.
Always keep the volumes from the ISMLogs pool mounted to ensure that a
storage device is always available to accept logical-log data. If the volume is
not mounted, the backup will pause. For more information on using devices
and ISM commands, see the IBM Informix: Storage Manager Administrator's
Guide.
During the backup operation, ISM creates save sets of the backed up data and
enters information about the backed up data in the ISM catalog. You also can
use this command to back up the ISM catalog:
ism_catalog -create_bootstrap
If you use the onbar script to back up storage spaces and logical logs, it backs
up the ISM catalog automatically. If you call onbar_d directly, you must use
the ism_catalog -create_bootstrap command.
Using IBM Informix Server Administrator to Back Up and Verify
IBM Informix Server Administrator (ISA) is a browser-based tool for
performing administrative tasks such as ON–Bar and onstat commands. You
can use ISA to perform the following ON–Bar tasks:
v View messages in the ON–Bar activity log.
v Perform level-0, level-1, or level-2 backups.
– Back up storage spaces (onbar -b).
– Back up the whole system (onbar -b -w).
– Override error checks during the backup (onbar -b -O).
– Perform a fake backup (onbar -b -F).
v Back up the logical logs.
– Include the current log in the log backup (onbar -b -l -c).
– Override error checks to back up logs when blobspaces are offline (onbar
-l -O).
– Start a continuous logical-log backup (onbar -b -l -C).
v Verify backups.
v Edit the onbar script.
Important: Save your logical logs so that you can restore from this backup.
Figure 4-1 shows a sample file that contains a list of storage spaces to be
backed up (blobsp2.1, my_dbspace1, blobsp2.2, dbsl.1, rootdbs.1, and
blobsp2.1). Lines that begin with # are treated as comments, as in the
following sample line:
# a comment ignore this text
By default, the onbar -b command backs up all storage spaces in the system,
including read-only dbspaces.
The semantics of the -S option are not changed by the -L option that allows
incremental backups. Thus, the command onbar -b -S -L 1 performs the
following tasks:
v For read-only dbspaces, performs a level-0 backup if no backup of the
storage space that was taken after the storage space became read-only
exists; otherwise skips the storage spaces.
v For operational dbspaces, performs a level-1 archive of the storage space if
a level-0 backup of the storage space exists; otherwise performs a level-0
backup of the storage space.
Tip: If your tables are read-only and not modified, consider placing them in
read-only (static) dbspaces. Use the -S option to the ON-Bar backup
command to avoid repeatedly archiving read-only dbspaces.
End of Extended Parallel Server
Warning: If you need to restore only that table, you must warm restore the
entire dbspace to the current time (onbar -r). This procedure does
not allow you to recover from accidentally dropping or corrupting a
table because it would be dropped again during logical restore.
When you turn on logging for a smart large object, you must immediately
perform a level-0 backup to ensure that the object is fully recoverable. For
details on logging sbspaces, see the IBM Informix: Administrator's Guide.
Warning: It is recommended that you use fake backups on test systems, not
on production systems. You cannot restore data from a fake backup.
(If you performed a fake backup, you must reload the table to be
able to restore the data.)
To restore data, you must have previously backed up the logical logs. For
more information, see “Backing Up Logical Logs on Extended Parallel Server”
on page 4-25.
You can either back up the logical logs manually or start a continuous
logical-log backup. Logical-log backups are always level 0. After you close the
current logical log, you can back it up.
Backing Up Logical Logs on Dynamic Server
If you do not use whole-system backups, you must back up logical logs
because you must restore both the storage spaces and logical logs.
If you perform whole-system backups and restores, you can avoid restoring
logical logs. It is recommended that you also back up the logical logs when
you use whole-system backups. These log backups allow you to recover your
data to a time after the whole-system backup, minimizing data loss. The
following diagram shows the syntax for onbar -b -l commands.
After the continuous logical-log backup starts, it runs indefinitely waiting for
logical logs to fill. To stop the continuous logical-log backup, kill the ON–Bar
process.
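As a sketch, one way to run the continuous backup in the background and stop it later; the PID-file path is illustrative:

```shell
onbar -b -l -C &                 # start the continuous logical-log backup
echo $! > /tmp/logbackup.pid     # remember its process ID
# ...later, to stop the continuous backup:
kill "$(cat /tmp/logbackup.pid)"
```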
Dynamic Server
Logs that have not been backed up remain on disk or in shared memory.
The contents of these logical logs can be viewed only with the onlog
utility.
onlog [ -u username ] [ -t tblspace_num ] [ -x transaction_num ]

onbar -b -l [ -q session ] [ -s ] [ -f filename | logstreamid ]
If you want to assign a session name to the log backup, use the -q option, as
follows:
onbar -b -l -q "my_logbackup"
To find the logical-log backup session, use the onstat -g bus command. For
more information, see “Starting and Stopping ON-Bar Sessions (XPS)” on
page 8-11.
The value U means that the logical-log file is used. The value L means that the
last checkpoint occurred when the indicated logical-log file was current. The
value C indicates the current log. If B appears in the third column, the
logical-log file is already backed up and can be reused, as in this
example flag string:
U-B---L
The following example shows the output of onstat -l when you use it to
monitor logical logs on the database server:
Logical Logging
Buffer bufused bufsize numrecs numpages numwrits recs/pages pages/io
L-2 0 16 1 1 1 1.0 1.0
Subsystem numrecs Log Space used
For information about the onstat -l fields, see “onstat -l” on page B-4.
Warning: If you turn off continuous logical-log backup, you must monitor
your logical logs carefully and start logical-log backups as needed.
Dynamic Server
The flag values U---C-L or U---C-- represent the current logical log. While
you are allowed to back up the current logical log, doing so forces a log
switch that wastes logical-log space. Wait until a logical-log file fills before
you back it up.
End of Dynamic Server
On Extended Parallel Server, use the xctl onstat -l utility to monitor logical
logs on all coservers.
End of Extended Parallel Server
ON–Bar salvages logical logs automatically before a cold restore unless you
specify a physical restore only. ON–Bar salvages the logical logs that are used
before it restores the root dbspace. To make sure that no data is lost before
you start the cold restore, manually salvage the logical logs in the following
situations:
v If you must replace the media that contains the logical logs
If the media that contains the logical logs is no longer available, the log
salvage will fail, resulting in data loss.
v If you plan to specify a physical restore only (onbar -r -p)
For more information, see “Salvaging Logical Logs” on page 6-18 and
“Performing a Cold Restore” on page 6-19.
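In those situations, the logs can be salvaged manually before the restore; a minimal sketch:

```shell
onbar -b -l -s   # salvage logical logs that have not yet been backed up
```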
[Flowchart: ON–Bar first checks whether a physical backup was requested.
If so, it builds the list of backup storage spaces; for a parallel backup
it creates an onbar_d child process for each storage space, and otherwise
it backs up the storage spaces serially. It then checks whether a log
backup was requested and, if so, backs up the logs. The backup is then
complete.]
Once all the storage spaces are backed up, the onbar-driver sends a list of
logstreams (logical-log data) to the Backup Scheduler that assigns the tasks to
onbar-worker processes. Each onbar-worker process is associated with a
coserver and a storage-manager instance. Once an onbar-worker process
starts, it might be active after the backup or restore session is completed. An
onbar-worker can process parts of several backup or restore sessions in its
lifetime.
Each onbar-worker transfers data between the database server and the storage
manager until the backup request is fulfilled. When an onbar-worker
completes its task, it waits for the next task from the Backup Scheduler. If no
new task is assigned in a user-specified amount of time, the onbar-worker
shuts down. You can set the number of minutes that the onbar-worker
processes wait in BAR_IDLE_TIMEOUT in the ONCONFIG file.
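For example, the ONCONFIG entry might look like this sketch; the 5-minute value is illustrative:

```
# ONCONFIG: shut down an onbar-worker that has been idle for 5 minutes
BAR_IDLE_TIMEOUT 5
```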
If the Backup Scheduler has new tasks to assign and not enough
onbar-worker processes are running to complete the task, it calls the
start_worker script to start one or more new onbar-worker processes.
After each object is backed up, ON–Bar updates the emergency backup boot
file on the coserver that is local to the onbar-worker and the sysutils
database. The emergency backup boot file is on the coserver of the
onbar-worker that backed it up.
[Figure: The Backup Scheduler sends events to onbar_w processes, which
back up the storage spaces (if a physical backup was specified) and then
back up the logical logs until the backup is complete.]
In This Chapter
This chapter describes the archecker utility for checking the validity and
completeness of backups. To ensure that you can restore a backup safely, issue
the onbar -v command, which calls the archecker utility.
[Figure: When you run onbar -v, ON–Bar reads backup information from the
emergency boot file or the sysutils database, retrieves the backed-up
data from the storage manager, and passes it to the archecker utility,
which builds a bitmap of the backup.]
During the verification phase, archecker verifies that all the pages for each
table are present and checks the partition pages, the reserved pages, the
chunk-free list, blobspaces, sbspaces, and extents. The archecker utility also
checks the free and used counts, verifies that the page stamps match, and
verifies that no overlap exists in the extents.
onbar -v [ dbspace_list ] [ -w (2) ]

Notes:
1 Extended Parallel Server Only
2 Dynamic Server Only
-f filename: Use this option to avoid entering a long list of storage
spaces every time that you verify them. The file can list multiple
storage spaces per line.
-L level: Specifies the level of backup to verify (XPS): 0 for a complete
backup, 1 for changes since the last level-0 backup, or 2 for changes
since the last level-1 backup. The archecker utility fully verifies
level-0 backups on all database servers and performs limited verification
of level-1 and level-2 backups. If you do not specify a level, archecker
verifies the level-0 backup.
-q session: Assigns a name to the verify session (XPS).
DBSERVERNAMErandom_number is the default session name. The session name
must be unique and can be up to 126 characters.
If the backup is verified successfully, these files are deleted. If the backup fails
verification, these files remain. Copy them to another location so that
Technical Support can review them.
If your database server contains only dbspaces, use the following formula to
estimate the amount of temporary space in kilobytes for the archecker
temporary files:
space = (130 KB * number_of_chunks) + (pagesize * number_of_tables) +
(.05 KB * number_of_logs)
Dynamic Server
number_of_chunks
is the maximum number of chunks that you estimate for the
database server.
pagesize is the system page size in kilobytes.
For example, you would need about 12.9 megabytes of temporary disk space
on a 50-gigabyte system with a page size of 2 kilobytes and no
blobspaces, as the following calculation shows:
13,252 KB = (130 KB * 25 chunks) + (2 KB * 5000 tables) +
(.05 KB * 50 logs) + (2 KB * 0)
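The arithmetic above can be checked with a one-line script; the chunk, table, and log counts are the ones from the example:

```shell
# Estimate archecker temporary space in KB for the example system:
# 25 chunks, 5000 tables, 50 logs, 2-KB page size, no blobspaces.
awk 'BEGIN { print (130 * 25) + (2 * 5000) + (0.05 * 50), "KB" }'
# prints: 13252.5 KB (about 12.9 MB)
```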
Verification Examples
The following examples show how to verify an existing backup and how to
verify immediately after backing up.
To verify the backed-up storage spaces listed in the file bkup1, use the
following command:
onbar -v -f /usr/backups/bkup1
In This Chapter
This chapter describes the different types of restores that ON–Bar performs.
If the database server goes offline but the critical dbspaces are intact, bring
the database server online and perform a warm restore. If the database server
goes offline and a critical dbspace is down, perform a cold restore. For details,
see “What Is a Cold Restore?” on page 6-3.
A warm restore can be performed after a dbspace has been renamed and a
level-0 backup of the rootdbs and the renamed dbspace has been taken.
End of Dynamic Server
Perform a cold restore when the database server fails or you need to perform
one of the following tasks:
v Point-in-time restore
v Point-in-log restore
v Whole-system restore (IDS only)
v Imported restore
v Renaming chunks (IDS only)
A cold restore starts by physically restoring all critical storage spaces, then the
noncritical storage spaces, and finally the logical logs. The database server
goes into recovery mode after the reserved pages of the root dbspace are
restored. When the logical restore is complete, the database server goes into
quiescent mode. Use the onmode command to bring the database server
online. For more information, see “Performing a Cold Restore” on page 6-19.
Tip: If you mirror the critical dbspaces, you are less likely to have to perform
a cold restore after a disk failure because the database server can use the
mirrored storage space. If you mirror the logical-log spaces, you are
more likely to be able to salvage logical-log data if one or more disks
fail.
Dynamic Server
A cold restore can be performed after a dbspace has been renamed and a
level-0 backup of the rootdbs and renamed dbspace is performed.
End of Dynamic Server
The storage spaces that you do not restore during the cold restore are not
available until after you restore them during a warm restore, although they
might not have been damaged by the failure. While a mixed restore makes the
critical data available sooner, the complete restore takes longer because the
logical logs are restored and replayed several times, once for the initial cold
restore and once for each subsequent warm restore.
What Is a Parallel Restore?
If you perform a restore using the onbar -r command, ON–Bar restores the
storage spaces in parallel and replays the logical logs once.
Dynamic Server
When you restore the database server to a specific time, any transactions that
were uncommitted at the specified point in time are lost. Also, all transactions
after the point-in-time restore are lost. For information on how to restore to a
specific time, see “Restoring Data to a Point in Time” on page 6-24.
What Is an Imported Restore?
ON–Bar allows you to restore objects to a different database server instance
than the one from which the data was backed up. This type of restore is
called an imported restore.
You cannot use a backup from one database server version to restore on a
different version.
What Is a Rename Chunks Restore? (IDS)
ON–Bar allows you to rename chunks by specifying new chunk paths and
offsets during a cold restore. This option is useful if you need to restore
storage spaces to a different disk from the one on which the backup was
made. You can rename any type of chunk, including critical chunks and
mirror chunks.
The ON-Bar rename chunk restore only works for cold restores. The critical
dbspaces (the rootdbs and any dbspace containing logical or physical logs)
must be restored during a cold restore. If you do not specify the list of
dbspaces to be restored, then the server will restore the critical dbspaces and
all the other dbspaces. But if you specify the list of dbspaces to be restored,
then the critical dbspaces must be included in the list.
For more details, see “Using Restartable Restore to Recover Data (IDS)” on
page 6-41 and “RESTARTABLE_RESTORE (IDS)” on page 9-22.
On Extended Parallel Server, you can use the xctl onstat -d utility to check the
storage spaces on all coservers.
End of Extended Parallel Server
When you add a chunk after your last backup, you must ensure that the
chunk device is available to the database server when it rolls forward the
logical log.
Restoring Save Sets with ISM
If you are using ISM, you can restore data from save sets on the storage
volume. When the ISM server receives a restore request, the ism_watch
command prompts you to mount the required storage volume on the storage
device. When you mount the volume, the restore will resume.
You can set the retention period for either a save set or a volume. Unless all
the save sets on a volume have expired, you can use ON–Bar to restore from it.
After the retention period for a save set expires, ON–Bar can no longer restore
it. To re-create an expired save set, use the ism_catalog -recreate from
command.
If you set the retention period for a volume, ISM retains the save sets until all
the save sets on that volume have expired. To recover an expired volume, use
the ism_catalog -recover from command. For more information, see the
IBM Informix: Storage Manager Administrator's Guide.
[Syntax diagram: onbar -r restore options, including -p, -t time, -f filename,
dbspace_list, -w, and the Renaming Chunks option (see page 6-31). Some
elements are Dynamic Server only; others are Extended Parallel Server only.]
Performing a Restore
To perform a complete cold or warm restore, use the onbar -r command.
ON–Bar restores the storage spaces in parallel. To speed up restores, you can
add additional CPU virtual processors. To perform a restore, use the following
command:
onbar -r
In a warm restore, the -r option restores all down storage spaces and logical
logs, and skips online storage spaces. A storage space is down if it is offline
or if one of its chunks is inconsistent.
You cannot perform more than one warm restore simultaneously. If you need
to restore multiple storage spaces, specify the set of storage spaces to restore
to ON-Bar (see “Restoring Specific Storage Spaces” on page 6-16) or allow
ON-Bar to restore all down storage spaces by not explicitly specifying any
spaces.
Tip: For faster performance in a restore, assign separate storage devices for
backing up storage spaces and logical logs. If physical and logical
backups are mixed together on the storage media, it takes longer to scan
the media during a restore.
You can also specify the storage spaces to restore by listing them in a text file
and passing the pathname of the file to ON-Bar with the -f option.
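A minimal sketch of this approach follows; the file path and dbspace names here are hypothetical:

```shell
# Build a restore list: one storage-space name per line
cat > /tmp/restore_list <<'EOF'
dbs_1
dbs_2
EOF

# Pass the list to ON-Bar with -f, for example:
#   onbar -r -f /tmp/restore_list
cat /tmp/restore_list
```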
If any of the dbspaces that you request to restore are online, they are skipped
in the restore. ON–Bar writes a message to the activity log about the skipped
dbspaces.
Warning: Because the logical-log files are replayed using temporary space
during a warm restore, make sure that you have enough temporary
space for the logical restore.
The minimum amount of temporary space that the database server needs is
equal to:
Dynamic Server
v The total logical-log space for the database server instance, or the number
of log files to be replayed, whichever is smaller.
End of Dynamic Server
Extended Parallel Server
v The total logical-log space for the coservers on which storage spaces are
being restored, or the number of log files to be replayed, whichever is
smaller. Each coserver must have enough temporary space for all its
temporary log files.
End of Extended Parallel Server
In the following onstat -d example, the S flag shows that dbspace dbnoreplay
is a candidate for skipping logical replay. This S flag disappears the first time
that a logging operation occurs in that dbspace.
Dbspaces
address number flags fchunk nchunks flags owner name
a66c140 1 1 1 1 N informix rootdbs
a68bea8 2 20001 2 1 N S informix dbnoreplay
The database server always replays the logical logs during a cold restore.
If you have a mixture of dbspaces to restore, some that require logical replay
and some that do not, follow these steps:
You can use the dbspaces restored in the first pass sooner but the total restore
time might be longer. This method enables you to quickly restore tables in a
data warehouse. For more information, see “Skipping Logical Replay (XPS)”
on page 6-17.
The onbar -r command automatically salvages the logical logs. Use the onbar
-r -p and onbar -r -l commands if you want to skip log salvage.
Dynamic Server
Warning: Back up all storage spaces before you perform a cold restore. If you
do not and you try to perform a cold restore, data in the storage
spaces that were not backed up will be lost. The storage space is
marked as offline after the cold restore. Drop the storage space so
that you can reuse its disk space.
Dynamic Server
Use the following command:
onmode -ky
Take the database server offline and then bring it to microkernel mode:
xctl onmode -ky
xctl -C oninit -m
If you use onbar -r -p -w, the database server is in fast recovery mode when
the restore completes. Perform either a logical restore (onbar -r -l) or use
onmode -m to bring the database server online. For more information, see
“Performing a Whole-System Backup (IDS)” on page 4-17.
In a mixed restore you perform a cold restore on only the critical dbspaces
and dbspaces containing your urgent data first. Because you do not restore all
dbspaces in the system and you save the time necessary to restore those
dbspaces, you can bring the server online faster. You then restore the
remaining dbspaces in one or more warm restores.
For example, consider a database server with four dbspaces in addition to the
root dbspace: logdbs, dbs_1, dbs_2, and dbs_3. Suppose the logical logs are
stored in logdbs and the physical log is in the root dbspace. The critical
dbspaces that must be restored during the initial cold restore are rootdbs and
logdbs:
onbar -r rootdbs logdbs dbs_1
When the cold restore completes, you can bring the server online and any
data stored in rootdbs, logdbs, and dbs_1 becomes accessible.
Next, you can perform a warm restore of dbs_2:
onbar -r dbs_2
Finally, you can perform a warm restore of all remaining dbspaces (for this
example, only dbs_3):
onbar -r
Instead of performing two warm restores, you could have issued the onbar -r
command, without specifying a list of dbspaces, immediately after the initial
cold restore. This command would have restored all dbspaces remaining to be
restored: dbs_2 and dbs_3. Conversely, in a larger system with dozens of
dbspaces, you could divide the warm restore portion of the mixed restore into
several warm restores, each restoring only a small subset of the dbspaces
remaining to be restored in the system.
Tip: If you do not run onsmsync after the cold part of the mixed restore,
ON-Bar automatically runs onsmsync. You should run onsmsync as a
separate step so that you can address any errors that might occur. If you
allow ON-Bar to run onsmsync and onsmsync fails, the restore proceeds
but might fail.
Tip: You can perform both the initial cold restore and each subsequent warm
restore in stages, as described in the section “Performing a Physical
Restore Followed By a Logical Restore” on page 6-18.
For example, consider a database with the catalogs in the dbspace cat_dbs:
create database mydb in cat_dbs with log;
If you need to restore the server, you cannot access all of the data in the
example database until you have restored the dbspaces containing the
database catalogs, table data, and indexes: in this case, the dbspaces cat_dbs,
tab_dbs_1, tab_dbs_2, and idx_dbsp.
If you need to restore the server, you would first perform a cold restore of all
critical dbspaces and dbspaces containing urgent data, urgent_dbs_1 through
urgent_dbs_n. For example, assume logical logs are distributed among two
dbspaces, logdbsp_1 and logdbsp_2, and the physical log is in rootdbs. The
critical dbspaces are therefore rootdbs, logdbsp_1, and logdbsp_2.
You would perform the initial cold restore by issuing the following ON-Bar
command:
onbar -r rootdbs logdbsp_1 logdbsp_2 urgent_dbs_1 ... urgent_dbs_n
At this point you can bring the server online and all business-urgent data is
available.
Finally, you can perform a warm restore for the rest of the server by issuing
the following command:
onbar -r non_urgent_dbs_1 non_urgent_dbs_2 ... non_urgent_dbs_r
Alternatively, you can use the following command to restore all storage
spaces:
onbar -r
Important: To determine the appropriate date and time for the point-in-time
restore, use the onlog utility that the IBM Informix: Administrator's
Reference describes. The onlog output displays the date and time
of the committed transactions in the logical log. All data
transactions that occur after time or last_log are lost.
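For instance, to examine the records in a particular logical log and note the timestamps of the last committed transactions, you might run onlog with a starting log number (the log number here is hypothetical):

```
onlog -n 11
```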
To restore database server data to its state at a specific date and time, enter a
command using the date and time format for your GLS locale, as this example
shows:
onbar -r -t "2004-05-10 11:35:57"
Quotation marks are recommended around the date and time. The format for
the English locale is yyyy-mm-dd hh:mm:ss. If the GL_DATETIME
environment variable is set, you must specify the date and time according to
that variable. For an example of using a point-in-time restore in a non-English
locale, see “Point-in-Time Restore Example” on page D-2.
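When scripting a point-in-time restore in the English locale, you can build a correctly formatted timestamp with the date utility. This is a sketch; it assumes GL_DATETIME is not set:

```shell
# Produce a timestamp in the yyyy-mm-dd hh:mm:ss format that onbar -t expects
PIT=$(date '+%Y-%m-%d %H:%M:%S')
echo "$PIT"
# Then, for example:  onbar -r -t "$PIT"
```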
Dynamic Server
Important: The point-in-time values for the -t option must be identical for
both the physical and logical restore.
The following example performs a cold restore for a subset of the storage
spaces (including all critical dbspaces) in the initial cold restore, and then
performs a warm restore for dbspace_2 and dbspace_3, followed by a warm
restore of dbspace_4 and dbspace_5, and finally performs a warm restore of
all remaining storage spaces:
onbar -r -t "2004-05-10 11:35:57" rootdbs logspace_1 dbspace_1
onsmsync
onbar -r dbspace_2 dbspace_3
onbar -r dbspace_4 dbspace_5
onbar -r
Tip: You can perform the cold part of the mixed restore in stages, as
described in the section “Performing a Point-in-Time Cold Restore in
Stages.” Supply a list of dbspaces or dbslices to the physical restore.
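A sketch of such a staged point-in-time restore, assuming hypothetical dbspace names and the sample timestamp used earlier (the -t values must be identical in the physical and logical stages):

```
onbar -r -p -t "2004-05-10 11:35:57" rootdbs logspace_1 dbspace_1
onbar -r -l -t "2004-05-10 11:35:57"
```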
You can specify any log ID from any timeline to restore to a specific logical
log. If the specific logical log applies to more than one timeline, then ON-Bar
uses the latest one.
The database server automatically shuts down each storage space before it
starts to restore it. Taking the storage space offline ensures that users do not
try to update its tables during the restore process.
Dynamic Server
For special considerations on using the -O option, see “Using the -O Option in
a Whole-System Restore” on page 6-21.
Important: ON–Bar does not re-create chunk files during a logical restore if
the logical logs contain chunk-creation records.
UNIX Only
2. If you use symbolic links to raw devices, create new links for the down
chunks that point to the newly installed disk.
To restore a dropped storage space when the chunk files were also deleted:
1. Use the onlog utility to find the logical-log file that contains the dropped
transaction for the storage space.
2. Enter the following command, specifying a time before the storage space
was dropped:
onbar -r -t time -O
The point-in-time restore restores the dropped storage space and
automatically re-creates the chunk files.
Warning: You must restore the data to a point in time before the storage
space was dropped in both the physical and logical restores.
For example, the following command restores the entire system, without
rebuilding indexes:
onbar -r -I
When you use the -I option with ON-Bar, any indexes that did not already
exist at the time the backup was performed will be unavailable or might be
unusable after the restore.
You can re-create the down indexes in one of the following ways:
v Drop and re-define them manually using the DROP INDEX and CREATE
INDEX SQL commands.
v Use the onbar -r -i command. The onbar -r -i command only triggers index
rebuild among storage spaces that are already online; it does not restore
any storage spaces.
The -I and -i options are particularly useful if you have created many indexes
since your last backup and have performed a mixed restore on the system. In
this case, you might want to use the -I option in each phase of the mixed
restore. Then, after you have restored the entire system, run the onbar -r -i
command to re-create the down indexes.
To reinitialize the database server after a failure when you do not need the
old data:
1. Do not copy the old emergency boot file into the database server directory
($INFORMIXDIR/etc on UNIX or %INFORMIXDIR%\etc on Windows).
2. To perform a complete backup, use the onbar -b command.
Tip: If you use symbolic links to chunk names, you might not need to rename
chunks; you need only edit the symbolic name definitions. For more
information, see the IBM Informix: Administrator's Guide.
Key Considerations
During a cold restore, ON–Bar performs the following validations to rename
chunks:
1. It validates that the old chunk pathnames and offsets exist in the archive
reserved pages.
2. It validates that the new chunk pathnames and offsets do not overlap each
other or existing chunks.
3. If renaming the primary root or mirror root chunk, it updates the
ONCONFIG file parameters ROOTPATH and ROOTOFFSET, or
MIRRORPATH and MIRROROFFSET. The old version of the ONCONFIG
file is saved as $ONCONFIG.localtime.
4. It restores the data from the old chunks to the new chunks (if the new
chunks exist).
5. It writes the rename information for each chunk to the online log.
If either validation step fails, the renaming process stops and ON–Bar
writes an error message to the ON–Bar activity log.
Warning: Perform a level-0 archive after you rename chunks; otherwise you
will have to restore the renamed chunk to its original pathname
and then rename the chunk again.
Renaming Chunks:
You can use the following options after the -rename command:
v -f filename
v dbspace_list
v -t time
v -n log
v -w
v -p
The following table lists example values for two chunks that are used in the
examples in this section.
Perform a level-0 archive after the rename and restore operation is complete.
You can combine renaming chunks with existing devices and renaming
chunks with nonexistent devices in the same rename operation. This example
shows how to rename a single chunk to a nonexistent device name.
The following table lists example values for the chunks used in this example.
Storage Space Old Chunk Path Old Offset New Chunk Path New Offset
sbspace1 /chunk3 0 /chunk3N 0
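Using the values in this table, the rename could be sketched with the -rename option, passing the old path and offset followed by the new path and offset for the chunk (verify the exact option order against your server version):

```
onbar -r -rename -p /chunk3 -o 0 -n /chunk3N -o 0
```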
Important: You cannot use a backup from one database server version to
restore on a different version.
For information on importing a restore with ISM, see the IBM Informix:
Storage Manager Administrator's Guide. For information on using HDR, see the
IBM Informix: Administrator's Guide. If you are using a third-party storage
manager, use the following procedure for an imported restore.
Dynamic Server
If you are importing a whole-system backup, you can use the onbar -r -w
-p command to restore the data.
End of Dynamic Server
3. Before you expire objects on the target computer and the storage manager
using onsmsync, perform one of the following tasks:
v Manually edit the emergency boot file, ixbar.servernum, in the
$INFORMIXDIR/etc directory on the target computer to replace the
Informix server name that is used on the source computer with the
Informix server name of the target computer.
v Execute the command onsmsync -b as user informix on the target
computer to regenerate the emergency boot file from the sysutils
database only, so that the newly regenerated emergency boot file reflects
the server name of the target computer.
Otherwise, onsmsync expires the incorrect objects.
Important: Even if you use ON–Bar to perform the backup and restore, ontape is
required on both database servers. You cannot remove ontape
from database servers participating in HDR.
The following online.log messages might display while the database servers
are synchronizing:
19:37:10 DR: Server type incompatible
19:37:23 DR: Server type incompatible
19:37:31 DR: new type = secondary, primary server name = bostonserver
19:37:31 DR: Trying to connect to primary server ...
19:37:36 DR: Secondary server connected
19:37:36 DR: Failure recovery from disk in progress ...
19:37:37 Logical Recovery Started.
19:37:37 Start Logical Recovery - Start Log 11, End Log ?
Warning: If you do not use logging for your databases or tables, ON–Bar can
only restore the data up to the time it was most recently backed up.
Changes made to data since the last standard backup are not
restorable. If you do not use logging, you would need to redo lost
transactions manually.
Dynamic Server
You can then duplicate the table again by using the SQL command
CREATE DUPLICATE OF TABLE. For more information, see the
IBM Informix: Guide to SQL Syntax.
Temp: No.
Scratch: No.
Operational: You can restore an operational table if no light appends occurred
since the last level-0 backup.
If light appends occurred to the table since the last backup, the table is
not wholly restorable. This sort of problem can also occur if you
restore from an older backup. To determine whether your table was
restored to its original state, check the message log for the following
error message:
Portions of partition partnum of table tablename
in database dbname were not logged. This partition
cannot be rolled forward.
If you see this message, the table or table fragments were not
restored to their original state. If you want full access to whatever
remains in this table, you need to alter the table to raw and then to
the desired table type. This alter operation removes inconsistencies in
the table that resulted from replaying non-logged operations such as
light appends.
Raw: When you restore a raw table, it contains only data that was on disk
at the time of the last backup. Because raw tables are not logged,
changes that occurred since the last backup cannot be restored.
Static: Yes, you can restore the data present at the last dbslice or dbspace
backup.
Static tables cannot change data. If you alter a static table to another
type and update the data, the recoverability of the table depends on
each type the table has been since each dbspace was backed up. For
example, if you alter a static table to raw, it follows the rules for
restoring a raw table.
If the failure occurred during a physical restore, ON–Bar restarts the restore at
the storage space and level where the failure occurred. It does not matter
whether the restore was warm or cold.
If a failure occurred during a cold logical restore, ON–Bar restarts the logical
restore from the most recent log checkpoint. Restartable logical restore is
supported for cold restores only. However, if the failure during a warm
restore caused the database server to shut down, do not restart the restore.
Instead, use the archecker utility to verify the backup and start the whole
restore from the beginning.
Warning: Restartable restore does not work for the logical part of a warm
restore.
Restartable Restore Example
The following example shows how to use restartable restore for a cold restore:
1. Make sure that RESTARTABLE_RESTORE is ON.
If you just set RESTARTABLE_RESTORE to ON, shut down and restart the
database server for the changes to take effect.
2. Restore several storage spaces:
onbar -r rootdbs dbs1 dbs2 dbs3 dbs4
The database server fails while restoring dbs3.
3. Restart the restore:
onbar -RESTART
ON–Bar automatically starts restoring dbs3, dbs4, and the logical logs.
4. If necessary, bring the database server online:
onmode -m
Figure 6-1 shows how a restartable restore works when the restore failed
during a physical restore of dbspace2. In this example, you set
RESTARTABLE_RESTORE to ON before you begin the restore. The level-0,
level-1, and level-2 backups of rootdbs, the level-0 and level-1 backups of
dbspace1, and the level-0 backup of dbspace2 are successfully restored. The
database server fails while restoring the level-1 backup of dbspace2. When
you restart the restore, ON–Bar restores the level-2 backup of dbspace1, the
level-1 and level-2 backups of dbspace2, and the logical logs.
Table 6-2 shows what to expect with different values for BAR_RETRY in a
different restarted restore example.
Table 6-2. Restartable Restore Results with Different BAR_RETRY Values
ON–Bar Command: onbar -r dbs1 dbs2 dbs3
BAR_RETRY = 2:
restore level-0 dbs1, dbs2, dbs3
restore level-1 dbs1 FAILS
restore level-1 dbs1 RETRY PASSES
restore level-1 dbs2, dbs3
restore level-2 dbs1, dbs2, dbs3
restore logical logs
BAR_RETRY = 0:
restore level-0 dbs1, dbs2, dbs3
restore level-1 dbs1 FAILS
ON–Bar Command: onbar -RESTART
BAR_RETRY = 2:
No restart is needed because everything was successfully restored.
BAR_RETRY = 0:
restore level-1 dbs1, dbs2, dbs3
restore level-2 dbs1, dbs2, dbs3
restore logical logs
ON–Bar Command: onbar -r dbs1 dbs2 dbs3
BAR_RETRY = 2:
restore level-0 dbs1, dbs2, dbs3
restore level-1 dbs1 FAILS
restore level-1 dbs1 RETRY FAILS
restore level-1 dbs2, dbs3
restore level-2 dbs2, dbs3
restore logical logs
BAR_RETRY = 0:
restore level-0 dbs1, dbs2, dbs3
restore level-1 dbs1 FAILS
onbar -RESTART
restore level-1 dbs1, dbs2, dbs3
restore level-2 dbs1, dbs2, dbs3
restore logical logs
ON–Bar Command: onbar -r dbs1 dbs2 (after the failed retry)
BAR_RETRY = 2:
restore level-1 dbs1
restore level-2 dbs1
restore logical logs
If a failure occurs during a cold logical restore, ON–Bar restarts it at the place
that it failed.
Table 6-3 shows what results to expect when physical restore fails. Assume
that BAR_RETRY > 1 in each case.
Table 6-4 shows what results to expect when logical restore fails.
If you have lost critical dbspaces, you must perform a cold restore. ON–Bar
gathers backup data from the emergency boot file and then restores the
storage spaces and logical logs.
Warm-Restore Sequence on Dynamic Server
Figure 6-3 on page 6-48 describes the ON–Bar warm-restore sequence.
For each storage space, ON–Bar restores the last level-0 backup, the level-1
backup (if it exists), and the level-2 backup (if it exists). After the physical
restore is complete, ON–Bar backs up the logical logs to get the latest
checkpoint and then restores the logical logs. This logical backup allows data
to be restored up to the moment of failure.
For each storage space, ON–Bar restores the last level-0 backup, the level-1
backup (if it exists), and the level-2 backup (if it exists). Finally, ON–Bar
restores the logical logs.
[Figure: Flowchart of the cold-restore sequence. ON–Bar optionally salvages
the logical logs, performs the physical restore, performs the logical restore,
and updates the sysutils database.]
[Figure: Flowchart of the warm-restore sequence. The Backup Scheduler sends
events to onbar_w processes, which restore the specified storage spaces, back
up the logical logs, restore the logs if a logical restore is specified, and
update the sysutils database and the boot file.]
The onbar-merger utility collects and processes the backup emergency boot
files from each coserver to determine what restores are required. The
onbar-merger then creates the restore boot file and copies it to each coserver
that contains a backup emergency boot file.
For each storage space, ON–Bar restores the last level-0 backup, the level-1
backup (if it exists), and the level-2 backup (if it exists). Finally, it restores the
logical logs.
[Figure: Flowchart of the XPS restore sequence. The Backup Scheduler sends
events to onbar_w processes, which collect and merge the backup boot files,
create and send the restore boot files, salvage the logs if specified, restore
the storage spaces and logical logs, and update the sysutils database.]
In This Chapter
This chapter discusses recovering data using external backup and restore.
Dynamic Server
To stop continuous logical-log backup, use the CTRL-C command. To resume
continuous logical-log backup, use the onbar -b -l -C command.
End of Dynamic Server
Important: Because the external backup is outside the control of ON–Bar, you
must track these backups manually. For more information, see
“Tracking an External Backup” on page 7-12.
Performing an External Backup
The database server must be online or in quiescent mode during an external
backup.
Dynamic Server
On Dynamic Server, use the following command:
onmode -c block
On Extended Parallel Server, use the following command to block all the
coservers in cogroup_all:
onutil ebr block;
Dynamic Server
Use the following command:
onmode -c unblock
Warning: Because external backup is not done through ON–Bar, you must
ensure that you have a backup of the current logical log from
the time when you execute the onutil EBR BLOCK or onmode -c
block command. Without a backup of this logical-log file, the
external backup is not restorable.
5. After you perform an external backup, back up the current log.
Dynamic Server
Use the following command to back up the current log on Dynamic
Server:
onbar -b -l -c
Use the following commands to switch to and back up the current log on
Extended Parallel Server:
onmode -l # execute on coservers with external backups
onbar -b -l # back up all used logs
The onutil syntax is:
onutil EBR BLOCK [cogroup] [SESSION session_name];
When a coserver is blocked, all the data on disk is synchronized and users
can access that coserver in read-only mode. When you block a coserver, the
entire coserver is blocked. You can block a list of coservers, cogroups, or the
entire database server. You can assign a session name to the blocking
operation.
To block the entire database server (all the coservers in cogroup_all), use
either of the following commands:
onutil ebr block;
or
onutil ebr block cogroup_all;
After you complete the external backup, unblock the database server. For the
syntax diagram, see “Unblocking Coservers or the Database Server” on page
7-10.
For information on how to create cogroups, see the onutil section of the
IBM Informix: Administrator's Reference. For information on how to monitor
this session, see “Monitoring an External Backup” on page 7-12.
Blocking Coservers
To block specific coservers, either specify them by name or use the cogroup
range identifier with the onutil command. For example, xps.%r(1.4) expands
to coservers xps.1, xps.2, xps.3, and xps.4. The following example blocks
coservers xps.1 and xps.2.
onutil ebr block xps.1, xps.2;
Tip: Be sure not to unblock coservers that another user has blocked. (For
information, see “Monitoring an External Backup” on page 7-12.)
or
onutil ebr unblock cogroup_all commit;
If you use the ABORT option, the skip logical replay feature does not work,
even if that dbspace was completely backed up. In this case, use the
COMMIT option to commit those dbspaces that were backed up successfully
and use the ABORT option for the remaining dbspaces.
To monitor the external backup status in the sysmaster database, use the
following SQL queries. For more information, see “Backup Scheduler SMI
Tables (XPS)” on page 10-10.
# to find all blocked coservers
SELECT * FROM sysbuobject
WHERE is_block = 1 AND is_coserver = 1;
Warning: When you perform a cold external restore, ON–Bar does not first
attempt to salvage logical-log files from the database server because
the external backup has already copied over the logical-log data.
In Extended Parallel Server, you must perform a logical restore on the whole
system after an external backup even if all the storage spaces were backed up
To restore from an external backup when you use the mirroring support
provided by the database server:
1. Shut down the database server.
Dynamic Server
To shut down the database server, use the onmode -ky command.
End of Dynamic Server
In This Chapter
This chapter discusses the following topics:
v Customizing ON–Bar and storage-manager commands with the onbar
script
v Starting onbar-worker processes manually
v Expiring and synchronizing the backup history
The default onbar script assumes that the currently installed storage manager
is ISM and backs up the ISM catalogs. If you are using a different storage
manager, edit the onbar script, delete the ISM-specific lines, and optionally,
add storage-manager commands.
For background information on the onbar script or batch file, see “ON-Bar
Utilities” on page 2-9 and “Updating the onbar Script” on page 3-8.
Dynamic Server
The archecker temporary files are also removed.
End of Dynamic Server
v End cleanup processing here
Use this section to return onbar_d error codes.
UNIX Only
onbar_d "$@" # receives onbar arguments from command line
return_code=$? # check the return code
To print the backup boot files on all coservers, replace the line:
lpr \$INFORMIXDIR/etc/ixbar.$servernum
with:
xctl lpr \$INFORMIXDIR/etc/Bixbar_\`hostname\`.$servernum
If the system has more coservers than hosts, this script prints the same
boot files more than once.
End of Extended Parallel Server
Windows Only
@echo off
%INFORMIXDIR%\bin\onbar_d %*
set onbar_d_return=%errorlevel%
:backupcom
if "%1" == "-b" goto printboot
goto skip
:printboot
REM Set the return code from onbar_d (this must be on the last line of the script)
:skip
%INFORMIXDIR%\bin\set_error %onbar_d_return%
:end
UNIX Only
onbar_d "$@" # starts the backup or restore
EXIT_CODE=$? # any errors?
fi
done
if ! PHYS_ONLY; then # if logs were backed up, call another
migrate_logs # program to move them to tape
fi
Windows Only
:backupcom
if "%1" == "-b" goto m_log
if "%1" == "-l" goto m_log
:m_log
migrate_log
REM Set the return code from onbar_d (this must be on the last line of the script)
:skip
%INFORMIXDIR%\bin\set_error %onbar_d_return%
:end
The default start_worker.sh script contains only one line, which calls onbar_w
to start an onbar-worker process.
If the storage manager does not have special requirements for worker
processes that pass data to it, you do not have to change the start_worker.sh
script.
Depending on the command options you supply, the onsmsync utility can
remove the following items from the sysutils database and the emergency
boot file:
v Backups that the storage manager has expired
v Old backups based on the age of backup
v Old backups based on the number of times they have been backed up
The onsmsync utility synchronizes the sysutils database, the storage manager,
and the emergency boot file as follows:
v Adds backup history to sysutils that is in the emergency boot file but is
missing from the sysutils database.
v Removes the records of restores, whole-system restores (IDS), fake backups,
successful and failed backups from the sysutils database.
v Expires old logical logs that are no longer needed.
v Regenerates the emergency boot file from the sysutils database.
Choosing an Expiration Policy
You can choose from the following three expiration policies:
v Retention date (-t) deletes all backups before a particular date and time.
v Retention interval (-i) deletes all backups older than some period of time.
v Retention generation (-g) keeps a certain number of versions of each
backup.
ON–Bar always retains the latest level-0 backup for each storage space. It
expires all level-0 backups older than the specified time unless they are
required to restore from the oldest retained level-1 backup.
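To illustrate the retention-generation idea only (this is not ON–Bar's actual algorithm, and it ignores the level-0/level-1 dependency rule described above), the following sketch keeps the g newest entries from a newest-first list of backup labels:

```shell
# Keep the newest $1 backups from a newest-first list of labels;
# everything older is marked for expiry (illustrative helper only).
keep_generations() {
    g=$1
    shift
    kept=0
    for backup in "$@"; do
        if [ "$kept" -lt "$g" ]; then
            echo "keep   $backup"
            kept=$((kept + 1))
        else
            echo "expire $backup"
        fi
    done
}

# Retain two generations of a dbspace's level-0 backups.
keep_generations 2 level0_jun level0_may level0_apr
```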
Dynamic Server
Tip: To control whether the sysutils database maintains a history for expired
backups and restores, use the BAR_HISTORY configuration parameter.
For information, see “BAR_HISTORY” on page 9-11.
onsmsync [-g generation | -t time | -i interval] [-s] [-O]
         [-f filename | dbspace_list]
onsmsync -b
The onsmsync utility starts onbar-merger processes that delete backups on all
nodes that contain storage managers.
End of Extended Parallel Server
Dynamic Server
On Dynamic Server, this command saves the old emergency boot file as
ixbar.server_number.system_time and regenerates it as ixbar.server_number.
End of Dynamic Server
On Extended Parallel Server, this command saves the old backup boot file on
each coserver as Bixbar_hostname_system_time.server_number and
regenerates it as Bixbar_hostname.server_number.
End of Extended Parallel Server
Then use the onsmsync utility to re-create the backup and restore data in
sysutils.
Important: If both the sysutils database and emergency boot file are missing,
you cannot regenerate them with onsmsync. Be sure to back up
the emergency boot file with your other operating-system files.
The following example expires all backups older than 18 months (written as 1
year + 6 months):
onsmsync -i "1-6"
When you run onsmsync to expire old backups, onsmsync removes the old
backups from the current timeline and makes sure that the current timeline
remains restorable from the backup objects that are retained. Backups that are
not in the current timeline are also expired, but onsmsync does not make sure
that those other timelines are restorable from the retained objects.
Warning: If you use the -O option with the -t, -i, or -g options, you might
accidentally delete critical backups, making restores impossible.
The order of the options does not matter. For example, to destroy the ON–Bar
session, myses10, use one of the following commands:
onbar -d -q myses10
onbar -q myses10 -d
BAR_SM 1
BAR_SM_NAME ism
BAR_WORKER_COSVR 1
BAR_DBS_COSVR 1,2
BAR_LOG_COSVR 1,2
BAR_WORKER_MAX 1
BAR_IDLE_TIMEOUT 5
END
Active workers:
In This Chapter
This chapter describes the ON–Bar configuration parameters that you can set
in the ONCONFIG file and the archecker configuration parameters that you
can set in the AC_CONFIG file.
This table describes the following attributes (if relevant) for each parameter.
Attribute Description
onconfig.std value
The default value that appears in the onconfig.std file in
Dynamic Server or the onconfig.xps file in Extended Parallel
Server
if not present The value that the database server supplies if the parameter is
missing from your ONCONFIG file
If the parameter is present in onconfig.std, the database server
uses the onconfig.std value. If the parameter is not present in
onconfig.std, the database server uses this value.
units The units in which the parameter is expressed
range of values The valid values for this parameter
takes effect The time at which a change to the value of the parameter
affects ON–Bar operation
Except where indicated, you can change the parameter value
between a backup and a restore.
refer to Cross-reference to further discussion
AC_CONFIG File
default value
UNIX: $INFORMIXDIR/etc/ac_config.std
Windows: %INFORMIXDIR%\etc\ac_config.std
takes effect
When ON-Bar starts
Set the AC_CONFIG environment variable to the full pathname of the
archecker configuration file (either ac_config.std or a user-defined file). You
must specify the entire path, including the configuration filename, or the
archecker utility might not work correctly. The
following are examples of valid AC_CONFIG pathnames:
v UNIX: /usr/informix/etc/ac_config.std and /usr/local/my_ac_config.std
v Windows: c:\Informix\etc\ac_config.std and
c:\Informix\etc\my_ac_config.std
If AC_CONFIG is not set, the archecker utility sets the default location for
the archecker configuration file to $INFORMIXDIR/etc/ac_config.std on
UNIX or %INFORMIXDIR%\etc\ac_config.std on Windows.
AC_MSGPATH
ac_config.std value
UNIX: /tmp/ac_msg.log
Windows: c:\temp\ac_msg.log
takes effect
When ON-Bar starts
When you verify backups with onbar -v, the archecker utility writes summary
messages to the bar_act.log that indicate whether the validation succeeded or
failed. It writes detailed messages to the ac_msg.log. If the backup fails
verification, discard it and make another backup, or give the
ac_msg.log to Technical Support. For sample messages, see “Interpreting
Verification Messages” on page 5-7.
AC_STORAGE
ac_config.std value
UNIX: /tmp
Windows: c:\temp
takes effect
When ON-Bar starts
Table 9-1 lists the directories and files that archecker builds. If verification is
successful, these files are deleted.
Table 9-1. archecker Temporary Files
Directory Files
CHUNK_BM Bitmap information for every backed up storage space.
INFO Statistical analysis and debugging information for the backup.
SAVE Partition pages in the PT.######## file.
To calculate the amount of free space that you need, see “Estimating the
Amount of Temporary Space for archecker” on page 5-5. We recommend that
you set AC_STORAGE to a location with plenty of free space.
Use the log_full.sh shell script (or log_full.bat on Windows) to start
automatic log backups. Modify this script as needed and set ALARMPROGRAM
in the ONCONFIG file to the full path of this script.
Warning: Even if you set the path of BAR_ACT_LOG to some other directory,
check to see if the ON–Bar activity log was placed in the default
directory. When onbar-merger first starts, it writes messages to
/tmp/bar_act.log until it has a chance to read the ONCONFIG file.
Warning: After you change the BAR_BOOT_DIR value, you must kill all
onbar-worker processes on all coservers. When the processes end,
either perform a level-0 backup of all storage spaces or move the
files from the etc subdirectory to the new location.
UNIX Only
For example, the shared-library suffix for Solaris is so; therefore, you
specify $INFORMIXDIR/lib/ibsad001.so on a Solaris system.
End of UNIX Only
If you set the value to 0, onsmsync removes the bar_object, bar_action, and
bar_instance rows for the expired backup objects from the sysutils database.
If you set the value to 1, onsmsync sets the act_type value to 7 in the
bar_action row and keeps the bar_action and bar_instance rows for expired
backup objects in the sysutils database. If you do not set BAR_HISTORY to 1,
the restore history is removed.
You can set this option locally for individual storage managers to override the
default or specified global setting.
BAR_LOG_COSVR (XPS)
onconfig.std value all coservers
range of values A list of unique positive integers greater than
or equal to one
takes effect When the database server starts
To calculate the amount of memory that each onbar_d process requires, use
the following formula. For information on the page size for your system, see
the release notes.
required_memory = (BAR_NB_XPORT_COUNT * BAR_XFER_BUF_SIZE
* page_size) + 5 MB
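For example, with hypothetical settings of BAR_NB_XPORT_COUNT 20 and BAR_XFER_BUF_SIZE 31 on a system with a 2-kilobyte page size, the formula can be evaluated with shell arithmetic:

```shell
# Hypothetical configuration values; substitute the ones from your
# ONCONFIG file and the page size from your release notes.
BAR_NB_XPORT_COUNT=20
BAR_XFER_BUF_SIZE=31
PAGE_SIZE=2048                       # 2-kilobyte page, in bytes

# required_memory = (count * buffer size * page size) + 5 MB
REQUIRED=$((BAR_NB_XPORT_COUNT * BAR_XFER_BUF_SIZE * PAGE_SIZE + 5 * 1024 * 1024))
echo "$REQUIRED bytes per onbar_d process"
```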
BAR_PROGRESS_FREQ
onconfig.std value 0
if value not present 0
units Minutes
range of values 0, or 5 to unlimited
takes effect When onbar starts
Progress messages that are too frequent fill the activity log, so if you set
BAR_PROGRESS_FREQ to 1, 2, 3, or 4, ON–Bar automatically resets it to 5 to
prevent overflow in the ON–Bar activity log. Infrequent progress messages
might make it hard to tell whether the operation is progressing satisfactorily.
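The reset rule can be sketched as a small shell function (an illustration of the documented behavior, not ON–Bar code):

```shell
# BAR_PROGRESS_FREQ values 1-4 are raised to 5; 0 (disabled) and
# values of 5 or more pass through unchanged.
effective_progress_freq() {
    if [ "$1" -ge 1 ] && [ "$1" -le 4 ]; then
        echo 5
    else
        echo "$1"
    fi
}

effective_progress_freq 3     # prints 5
effective_progress_freq 0     # prints 0
effective_progress_freq 10    # prints 10
```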
If ON–Bar cannot determine the size of the backup or restore object, it reports
the number of transfer buffers sent to the database server instead of the
percentage of the object backed up or restored.
BAR_RETRY
The BAR_RETRY configuration parameter specifies how many times onbar
should retry a data backup, logical-log backup, or restore operation if the first
attempt fails.
The Backup Scheduler maintains two retry counts: object retries and
storage-manager retries.
Object retries is the number of times that the Backup Scheduler attempts a
backup or restore operation. If the backup or restore of a particular object gets
an error, the Backup Scheduler retries it BAR_RETRY times. If it continues to
fail, the Backup Scheduler removes the object from the backup session.
You can monitor the storage-manager retry count with onstat -g bus_sm. For
more information, see “Monitoring the Backup Scheduler Status (XPS)” on
page 8-13.
BAR_SM (XPS)
onconfig.std value 1
range of values Positive integer greater than or equal to 1
takes effect When the database server starts
The system uses the BAR_SM value to track the location of backups. If you change
the identification number after you use the storage manager to perform a
backup, you invalidate the backups that you have made. Perform a new
level-0 backup of all data.
BAR_SM_NAME (XPS)
onconfig.std value None
range of values Any character string that does not contain
white spaces or the pound sign (#)
takes effect When the database server starts
The list must not overlap with the list of any other storage manager. The
values are coserver numbers, separated by commas. Hyphens indicate ranges.
For example, BAR_WORKER_COSVR 1-3,5,7 specifies a storage manager that
can access onbar-worker processes running on coservers 1, 2, 3, 5, and 7. Do
not put spaces between coserver names.
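The list-and-range notation can be expanded mechanically. The helper below is illustrative only (it is not an Informix utility); it turns a BAR_WORKER_COSVR-style value such as 1-3,5,7 into the individual coserver numbers:

```shell
# Expand a comma-separated list with hyphen ranges, for example
# "1-3,5,7" becomes "1 2 3 5 7" (illustrative helper only).
expand_cosvr_list() {
    result=""
    # Split the list on commas using the positional parameters.
    old_ifs=$IFS; IFS=','
    set -- $1
    IFS=$old_ifs
    for item in "$@"; do
        case $item in
            *-*)
                # A hyphen range: emit every number from lo to hi.
                lo=${item%-*}; hi=${item#*-}
                i=$lo
                while [ "$i" -le "$hi" ]; do
                    result="$result $i"
                    i=$((i + 1))
                done
                ;;
            *)
                result="$result $item"
                ;;
        esac
    done
    echo "${result# }"
}

expand_cosvr_list "1-3,5,7"
```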
BAR_WORKER_MAX (XPS)
onconfig.std value 0
if value not present 0
takes effect When the database server starts
To calculate how much memory the database server needs, use the following
formula, where PAGESIZE is the largest page size used by any of the
dbspaces that are backed up:
memory = (BAR_XFER_BUF_SIZE * PAGESIZE) + 500
Important: You cannot change the buffer size between a backup and restore.
BAR_XFER_BUFSIZE (XPS)
onconfig.std value 8 pages
To calculate how much memory the database server needs, use the following
formula:
kilobytes = BAR_WORKER_MAX * BAR_XFER_BUFSIZE *
BAR_XPORT_COUNT * PAGESIZE
For example, you would need at least 1600 kilobytes of memory for the
transfer buffers for five onbar-worker processes if the page size is 4 kilobytes.
1600 kilobytes = 5 workers * 8 pages * 10 buffers * 4 kilobytes
To calculate the amount of memory that each onbar_w process requires, use
the following formula:
required_memory = (BAR_XPORT_COUNT * BAR_XFER_BUFSIZE
* page_size) + 5 MB
For information on configuring the page size, see the IBM Informix:
Administrator's Guide.
ISM_DATA_POOL
onconfig.std value ISMData
takes effect When onbar starts
The ISM_DATA_POOL parameter, when listed in the ONCONFIG file for the
database server, specifies the volume pool that you use for backing up storage
spaces. The value for this parameter can be any volume pool that ISM
recognizes. If this parameter is not present, ISM uses the ISMData volume
pool. For details, see the IBM Informix: Storage Manager Administrator's Guide.
For more information, see “Files That ON-Bar, ISM, and TSM Use” on page
9-23.
ISM_LOG_POOL
onconfig.std value ISMLogs
takes effect When onbar starts
The ISM_LOG_POOL parameter, when listed in the ONCONFIG file for the
database server, specifies the volume pool that you use for backing up logical
logs. The value for this parameter can be any volume pool that ISM
recognizes. If this parameter is not present, ISM uses the ISMLogs volume pool.
For more information, see “Files That ON-Bar, ISM, and TSM Use” on page
9-23.
LOG_BACKUP_MODE (XPS)
Use the LOG_BACKUP_MODE configuration parameter to determine how
logical-log files are backed up after they fill.
onconfig.std value MANUAL
range of values
CONT Continuous = an onbar-worker
process backs up each logical-log file
as soon as it fills.
MANUAL
Manual = queues the logical-log files
until you can issue an onbar -b -l
command.
NONE Use the NONE option if you do not
want to back up the logs. The
database server marks the logical logs
as backed up as soon as they are full
so that ON–Bar cannot back them up.
When the database server starts up, it
writes a message to the online.log if
LOG_BACKUP_MODE = NONE.
takes effect When the database server restarts
In This Chapter
This chapter describes the ON–Bar tables that are stored in the sysutils
database. ON–Bar uses these tables for tracking backups and performing
restores. You can query these tables for backup and restore data to evaluate
performance or identify object instances for a restore.
CD = critical dbspace
L = logical log
ND = noncritical dbspace or sbspace
R = rootdbs
B = blobspace (Dynamic Server only)
Dynamic Server
Figure 10-1 maps the ON–Bar tables on Dynamic Server. The gray lines show
the relations between tables. The arrows show that the ins_req_aid value
must be a valid ins_aid value.
Figure 10-2 maps the ON–Bar tables on Extended Parallel Server. The gray
lines show the relations between tables. The arrows show that the ins_req_aid
value must be a valid ins_aid value.
sysbuobject
The sysbuobject table provides information about the backup and restore
objects that the Backup Scheduler is processing.
sysbuobjses
The sysbuobjses table lists the backup object name and session ID. You can
join the sysbuobjses table with the sysbusession table on the ses_id column
to get all the backup objects in a specific session.
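For instance, a join along the following lines lists the backup objects for one session; the session ID 42 is a made-up example, and any columns other than the documented ses_id join column vary by version:

```sql
-- List every backup object in session 42 (hypothetical session ID).
SELECT *
FROM sysbuobjses o, sysbusession s
WHERE o.ses_id = s.ses_id
  AND s.ses_id = 42;
```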
sysbusession
The sysbusession table shows the status of the current backup or restore
session. The information in the sysbusession table is similar to the onstat -g
bus output.
sysbusm
The sysbusm table provides information about the storage-manager
configuration, storage-manager retry status, and active onbar-worker
processes.
sysbusmdbspace
The sysbusmdbspace table provides information about which storage
manager is configured to back up dbspaces for a coserver. This table contains
the value of the BAR_DBS_COSVR parameter.
sysbusmlog
The sysbusmlog table provides information about which storage manager is
configured to back up logical logs for a coserver. This table contains the value
of the BAR_LOG_COSVR parameter.
sysbusmworker
The sysbusmworker table lists the coserver IDs of onbar-worker processes
that are configured for a storage manager. To display the list of active
onbar-worker processes for a storage manager, join the sysbuworker table
with the sysbusmworker table (sysbusmworker.id = sysbuworker.sm). This
table contains the same list of storage managers as the
BAR_WORKER_COSVR parameter. For example, use the SELECT * FROM
sysbusmworker WHERE id = 1 command to display information on the
storage manager listed in BAR_SM 1.
sysbuworker
The sysbuworker table provides status information about onbar-worker
processes.
In This Chapter
The first part of this chapter describes the ON–Bar activity log file and the
ON–Bar usage messages. Both Dynamic Server and Extended Parallel Server
share common ON–Bar messages.
The second part of this chapter describes the ON–Bar return codes.
Table 11-1 describes each field in the message. No error message numbers
appear in the ON–Bar activity log.
Table 11-1. ON-Bar Message Format
Message Field Description
timestamp Date and time when ON-Bar writes the message.
process_id The number that the operating system uses to identify this
instance of ON-Bar.
parent_process_id The number that the operating system uses to identify the process
that executed this instance of ON-Bar.
message The ON-Bar message text.
Important: You must specify the -b, -v, -r, or -m option first in the command
so that ON–Bar can determine whether it is performing a backup,
verification, restore, or monitoring operation.
-43000:
Dynamic Server
For Dynamic Server
-b [-L level] [-w | -f filename | spaces] [-O]
-b -l [-c | -C | -s] [-O]
-v [-p] [-t time] [-f filename | spaces]
-b back up
-c back up current logical log
-C continuous logical-log backup
-f pathname of file containing list of storage spaces
-F fake backup
-l back up full logical logs (no spaces)
-L back up level: 0, 1, or 2; defaults to 0
-O override internal error checks - enforce policy strictly
-w whole-system backup
-s salvage logs
-v verify consistency of most recent backups
spaces list of storage spaces
-b backup
-c backup current logical log
-C continuous logical log backup
-f pathname of file containing list of storage spaces
-F fake backup
-l backup full logical logs (no spaces)
-S skip backing up static dbspaces which are backed up already
-L backup level: 0, 1, or 2, defaults to 0
-O override internal error checks - enforce policy strictly
-p backup spaces only (no logs)
-q name to identify the backup session, default <DBSERVERNAME><pid>
-w whole system backup
-s salvage logs
-v verify consistency of most recent backups
<spaces>, <coservers> list of storage spaces, coserver logs to backup
-43001:
Dynamic Server
For Dynamic Server
-r [-e] [-O] [-f filename | spaces]
-r [-e] [-t time | -n log] [-O]
-r -p [-e] [-t time] [-O] [-f filename | spaces]
-r -l [-t time | -n log]
-r -w [-e] [[-p] [ -t time] | -n log] [-O]
-RESTART
-r -rename -f filename [-w] [-p] [-t time] [-n log] [-f filename | spaces]
-r {-rename -p old_path -o old_offset -n new_path -o new_offset ...}
[-w] [-p] [-t time] [-n log] [-f filename | spaces]
-e external restore
-f pathname of file containing list of storage spaces
-i index rebuild only (do not restore spaces or logs)
-I do not rebuild indices following roll-forward
-l logical log only restore (no spaces)
-n last logical log to restore
-O override internal error checks - enforce policy strictly
-p physical only restore (no logs)
-q name to identify the restore session
-r restore
-t point in time to stop restore
-w whole system to restore
-RESTART restart an interrupted restore
<spaces> list of storage spaces
-43002:
The session command was entered incorrectly. Revise the command and try
again. The session name is required.
End of Extended Parallel Server
-43003:
onbar_w usage.
The onbar_w command was entered incorrectly. Revise the command and try
again.
End of Extended Parallel Server
-43006:
onsmsync usage.
onsmsync [-g gen | -t time | -i interval] [-O]
[-f filename | spaces]
onsmsync -b
The onsmsync command was entered incorrectly. Revise the command and
try again.
-43007:
Dynamic Server
For IBM Informix Dynamic Server:
-m [lines] [-r[seconds]]
-43357:
Dynamic Server
For IBM Informix Dynamic Server:
{-P} {-n log_unique_id | starting_log_unique_id-ending_log_unique_id}
[-l] [-q] [-b] [-u username] [-t TBLspace_number]
[-x transaction_number]
Check the ON–Bar activity log for messages that say what could not be found and try
to resolve that problem. If the problem recurs, contact Technical Support.
104 Adstar Distributed Storage Manager (ADSM) is in generate-password mode.
ON–Bar does not support ADSM running in generate-password mode. For information
on changing the ADSM security configuration, refer to your ADSM manual.
115 A critical dbspace is not in the set of dbspaces being cold-restored.
116 The onsmsync utility is already running.
117 The information contained in the sysutils database and the emergency boot file is
inconsistent.
118 An error occurred while trying to commit a backup object to the storage manager.
119 The logical log is full on one or more coservers. Perform a logical-log backup.
120 The transport buffer size has changed since this object was last backed up. This object
cannot be restored. Set the transport-buffer size to its original value and retry the
restore.
121 Error occurred on dbslice name expansion (XPS).
The ON–Bar command is contending with another process. Retry the ON–Bar
command.
You cannot perform a cold restore without restoring the root dbspace. To resolve the
problem, try one of the following procedures:
v Bring the database server to quiescent or online mode and restore just the storage
spaces that need to be restored.
v If the database server is offline, issue the onbar -r command to restore all the storage
spaces.
v Make sure that the root dbspace and other critical dbspaces are listed on the
command line or in the -f filename.
124 The buffer had an incomplete page during the backup.
Check the ON–Bar activity log for descriptions of the problem and the emergency boot
file for corruption such as non-ASCII characters or lines with varying numbers of
columns. If the source of the problem is not obvious, contact Technical Support.
127 Could not write to the emergency boot file.
Run onsmsync to synchronize the sysutils database, emergency boot file, and
storage-manager catalogs. For assistance, contact Technical Support.
130 Database server is not responding.
The database server probably failed during the backup or restore. Run the onstat -
command to check the database server status and then:
v If the operation was a cold restore, restart it.
v If the operation was a backup or warm restore, restart the database server and retry
the backup or warm restore.
Verify that you are using the correct XBSA for the storage manager. For information,
consult your storage-manager manual.
133 Failed to load the XBSA library functions.
Verify that you are using the correct XBSA for the storage manager. Ensure that the
BAR_BSALIB_PATH value in the ONCONFIG file points to the correct location of the
XBSA shared library. For information, consult your storage-manager manual.
134 User wants to restore a logical-log file that is too early.
You probably tried a point-in-log restore (onbar -r -l -n) after performing a separate
physical restore. The specified logical log is too old to match the backups used in the
physical restore. Perform either of the following steps:
v Rerun the physical restore from an older set of physical backups.
v Specify a later logical log in the -n option when you rerun the point-in-log restore. To
find the earliest logical log that you can use, look at the emergency boot file. For
assistance, contact Technical Support.
136 ON–Bar cannot warm restore the critical dbspaces.
Verify that you are using the correct XBSA for the storage manager. Also check the
bar_act.log for XBSA error messages. For information, consult your storage-manager
manual.
139 Either the XBSA version is missing from the sm_versions file or the incorrect XBSA
version is in the sm_versions file.
Insert the correct XBSA version into the sm_versions file. For more information, consult
your storage-manager manual.
140 A fake backup failed.
Retry the fake backup using the onbar -b -F command. Only IDS supports fake
backups. If the fake backup fails again, contact Technical Support.
Fix the cause of the interruption and then retry the ON–Bar command.
142 ON–Bar was unable to open a file.
Verify that the named file exists and that the permissions are correct. Check the
ON–Bar activity log for an operating-system error message.
143 ON–Bar was unable to create a child process.
Check the operating-system logs, the ON–Bar activity log, or the console.
144 The log backup was aborted because one or more blobspaces were down.
Attempt to restore the blobspace. If the restore fails, retry the log backup using the
onbar -l -O command. Executing this command might make the blobspace
unrestorable.
145 ON–Bar was unable to acquire more memory space.
Wait for system resources to free up and retry the ON–Bar command.
146 ON–Bar was unable to connect to the database server.
The network or the database server might be down. For assistance, contact Technical
Support.
147 ON–Bar was unable to discover any storage spaces or logical logs to back up or restore.
For example, if you specify a point-in-time restore but use a datetime value from before
the first standard backup, ON–Bar cannot build a list of storage spaces to restore. This
return code also displays if you specify a whole-system restore without having
performed a whole-system backup.
Verify that the database server and the storage spaces are in the correct state for the
backup or restore request. Contact Technical Support.
148 An internal SQL error occurred.
Provide Technical Support with the information from the ON–Bar activity log.
Check the command that you tried against the usage message in the ON–Bar activity
log. If that does not help, then retry the command with quotes around the datetime
value. If your database locale is not English, use the GL_DATE or GL_DATETIME
environment variables to set the date and time format.
150 Error collecting data from the ONCONFIG file.
Check the permissions, format, and values in the ONCONFIG file. Check that the
ONCONFIG environment variable is set correctly.
151 The database server is in an incorrect state for this backup or restore request, or an
error occurred while determining the database server state.
Either you attempted an operation that is not compatible with the database server
mode or ON–Bar is unable to determine the database server state. For example, you
cannot do a physical backup with the database server in recovery mode.
Check the error message in the ON–Bar activity log. If an ASF error occurred, the
following message displays in the ON–Bar activity log:
Fatal error initializing ASF; asfcode = code
To determine the cause of the ASF error, refer to the ASF error code in this message
and repeat the backup or restore command. If an ASF error did not occur, change the
database server state and repeat the backup or restore command.
152 ON–Bar cannot back up the logical logs.
The logical logs are not backed up for either of the following reasons:
v Another log backup is currently running.
v You performed a logical-log backup with the LTAPEDEV parameter set to /dev/null
(UNIX) or nul (Windows).
You receive this return code when no log backups can be done.
You must be user root or informix or a member of the bargroup group on UNIX or a
member of the Informix-Admin group on Windows to execute ON–Bar commands.
Set the INFORMIXSERVER environment variable to the correct database server name.
157 Error attempting to set the INFORMIXSHMBASE environment variable to -1.
ON–Bar could not set INFORMIXSHMBASE to -1. For assistance, contact either the
system administrator or Technical Support.
158 An internal ON–Bar error occurred.
To determine what went wrong with the external restore, look at the bar_act.log and
the online.log files. Ensure that you already performed the manual part of the external
restore before you retry the onbar -r -e command to complete the external restore. If that
does not work, try the external restore from a different external backup.
161 Restarted restore failed.
Verify that RESTARTABLE_RESTORE is set to ON and try the original restore again.
For more information, check the ON–Bar activity log and database server message logs
(IDS).
162 The ON-Bar log file cannot be a symbolic link.
Remove the symbolic link or change the ONCONFIG file so that the ON-Bar
parameters BAR_DEBUG_LOG or BAR_ACT_LOG point to non-symbolic linked files.
163 The ON-Bar log file must be owned by user informix.
Change the ownership of the log file to be owned by user informix or change the
BAR_ACT_LOG or BAR_DEBUG_LOG values in the ONCONFIG file to point to
different log files.
164 Unable to open file.
The file or its directory permissions prevent it from being created or opened. Verify the
permissions on the file and its directory.
177 An online dbspace was restored. This return code notifies the user that the -O option
overrode the internal checks in ON–Bar.
Examine the data in the blobspace to determine which simple large objects you need to
re-create. These blobspaces might not be restorable. For assistance, contact Technical
Support.
179 ON–Bar created the chunk needed to restore the dbspace. This return code notifies the
user that the -O option overrode the internal checks in ON–Bar.
Create the chunk file manually. Retry the restore without the -O option.
181 ON–Bar expired an object that was needed for a backup or restore.
The onsmsync utility expired an object that might be needed for a restore. You
probably specified onsmsync with the -O option. If you used the -O option by mistake,
contact Technical Support to recover the object from the storage manager.
183 ON-Bar could not obtain the logical-log unique ID from the storage manager.
The backup of the specified logical log is missing. Query your storage manager to
determine if the backup of the specified logical-log file exists and if it is restorable.
247 Merging of the emergency boot files timed out. (XPS)
On UNIX, look in /tmp/bar_act.log and the file that the BAR_ACT_LOG parameter
points to for clues. (The onbar-merger writes to /tmp/bar_act.log until it has enough
information to read the ONCONFIG file.) Resolve the problems that the bar_act.log
describes and retry the cold restore. If the cold restore still fails, contact Technical
Support.
252 Received the wrong message from onbar_m that merges the emergency boot files.
(XPS)
In This Chapter
This chapter explains how to set the configuration parameters that the ontape
utility uses for backups of storage spaces and logical logs. For a description of
how ontape differs from ON–Bar, see “Comparing ON-Bar and ontape” on
page 1-9.
Chapter 13, “Backing Up with ontape,” on page 13-1 describes how to use the
ontape utility to back up storage spaces and logical-log files.
The first set of ONCONFIG parameters specifies the characteristics of the tape
device and tapes for storage-space backups; the second set specifies the
characteristics of the tape device and tapes for logical-log backups.
The following list shows backup tape devices and their associated tape
parameters.
TAPEDEV is the absolute pathname of the tape device used for
storage-space backups. This is the destination where ontape
will write storage space data during an archive and the source
from which ontape will read data during a restore.
To configure ontape to use stdio, set TAPEDEV to STDIO.
TAPEBLK is the block size of the tapes used for storage-space backups,
in kilobytes.
TAPESIZE is the size of the tapes used for storage-space backups, in
kilobytes.
The following list shows the logical-log tape devices and their associated tape
parameters.
LTAPEDEV is the logical-log tape device.
LTAPEBLK is the block size of tapes used for logical-log backups, in
kilobytes.
LTAPESIZE is the size of tapes used for logical-log backups, in kilobytes.
The following sections contain information about how to set the tape-device,
tape-block-size, and tape-size parameters for both storage-space and
logical-log backups.
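For example, a typical set of these parameters in an ONCONFIG file might look
like the following excerpt. The device pathnames and sizes shown here are
illustrative assumptions, not defaults; substitute the values appropriate for
your system.

TAPEDEV    /dev/rmt0     # storage-space backup device (illustrative path)
TAPEBLK    32            # block size, in kilobytes
TAPESIZE   10240000      # tape capacity, in kilobytes
LTAPEDEV   /dev/rmt1     # logical-log backup device (illustrative path)
LTAPEBLK   32
LTAPESIZE  10240000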
Setting the Tape-Device Parameters
You must consider the following points when you assign values to TAPEDEV
and LTAPEDEV:
v Use separate devices, when possible.
v Use symbolic links.
v Specify remote devices.
When the LTAPEDEV and TAPEDEV parameters specify the same device, the
logical log can fill and cause the database server to stop processing during a
backup. When this happens, you face limited options. You can either abort the
backup to free the tape device and back up the logical-log files or leave
normal processing suspended until the backup completes.
Precautions to Take When You Use One Tape Device: When only one tape
device exists and you want to create backups while the database server is
online, take the following precautions:
v Configure the database server with a large amount of logical-log space
through a combination of many or large logical-log files. (See your
IBM Informix: Administrator's Guide.)
v Store all explicitly created temporary tables in a dedicated dbspace and
then drop the dbspace before backing up.
v Create the backup when low database activity occurs.
v Free as many logical-log files as possible before you begin the backup.
The logical log can fill up before the backup completes. The backup
synchronizes with a checkpoint. A backup might wait for a checkpoint to
synchronize activity, but the checkpoint cannot occur until all virtual
processors exit critical sections. When database server processing suspends
because of a full logical-log file, the virtual processors cannot exit their critical
sections and a deadlock results.
When the tape-device parameters are set to symbolic links, you redirect a
backup by changing only the symbolic link, as the following example shows:
ln -s /usr/backups /dbfiles/logtape
A user with one tape device could redirect a logical-log backup to a disk file
while using the tape device for a backup.
The following example specifies a tape device on the host computer kyoto:
kyoto:/dev/rmt01
For information on the tape size for remote devices, see “Tape Size for Remote
Devices” on page 12-5.
You can specify /dev/null as a tape device for logical-log backups when you
decide that you do not need to recover transactions from the logical log.
When you specify the tape device as /dev/null, block size and tape size are
ignored. If you set LTAPEDEV either to or from /dev/null, you must use
ON-Monitor or restart the database server for the new setting to take effect.
The ontape utility does not check the tape device when you specify the block
size. Verify that the tape device can read the block size that you specified. If
not, you cannot restore the tape.
Specifying the Tape-Size Parameters
The TAPESIZE and LTAPESIZE parameters specify the maximum amount of data
that you can write to a tape, in kilobytes.
To write or read the tape to the end of the device, set TAPESIZE and
LTAPESIZE to 0. You cannot use this option for remote devices.
When you specify the tape device as /dev/null, the corresponding tape size is
ignored.
The I/O to the remote device completes and the database server frees the
logical-log files before a log-full condition occurs.
Note: If you want to set either the TAPEDEV parameter or the LTAPEDEV
parameter to /dev/null, you must use the ON-Monitor utility to make
this change while the database server is online. If you use any other
method to alter the value of the configuration parameters to or from
/dev/null, you must restart the database server to make the change
effective.
When you set LTAPEDEV to /dev/null, the database server frees the logical
logs without requiring that you back up those logs. The logical logs are not
actually backed up, but the database server can reuse them.
Verifying That the Tape Device Can Read the Specified Block Size
The ontape utility does not check the tape device when you specify the block
size. Verify that the tape device specified in TAPEDEV and LTAPEDEV can
read the block size you specify for their block-size parameters. If not, you
cannot restore the tape.
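One way to check this, on systems that provide the standard dd utility, is to
write and then reread a single test block at the configured size before you
rely on the device for backups. This is a sketch only; it assumes a TAPEBLK of
32 (kilobytes) and a device of /dev/rmt0, and it assumes the device rewinds
when it is closed:

dd if=/dev/zero of=/dev/rmt0 bs=32k count=1
dd if=/dev/rmt0 of=/dev/null bs=32k count=1

If either command fails, the device cannot handle the configured block size.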
In This Chapter
This chapter describes how to use ontape to back up storage spaces and
logical-log files, and how to change the database logging status. The ontape
utility can back up and restore the largest chunk files that your database
server supports. The ontape utility cannot back up temporary dbspaces and
temporary sbspaces.
Syntax of ontape
The ontape utility enables you to change the logging status of a database,
back up storage spaces and logical-log files, perform continuous logical-log
backups, and restore data from a backup. The ontape utility supports the
following operations:
1 Changing database logging status (see page 13-3)
2 Creating a backup (see page 13-4)
3 Requesting a logical-log backup (see page 13-4)
4 Starting continuous backups (see page 13-14)
5 Performing a restore (see page 14-4)
6 Performing an external physical restore (see page 15-5)
Starting ontape
When you need more than one tape during a backup, ontape prompts for
each additional tape.
Warning: Do not start ontape in background mode (that is, using the UNIX &
operator on the command line). You might need to provide
input from the terminal or window. When you execute ontape in
background mode, you can miss prompts and delay an operation.
The ontape utility does not include default values for user
interaction, nor does it support retries. When ontape expects a
yes/no response, it assumes that any response not recognized as a
“yes” is “no”.
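For example, run ontape in the foreground from a dedicated terminal, as in the
following sketch:

ontape -s -L 0

Avoid the background form (ontape -s -L 0 &); the prompts for tape mounts and
yes/no confirmations are then not visible, and an unrecognized response is
treated as “no”.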
Using ontape Exit Codes
The ontape utility has the following two exit codes:
0 indicates a normal exit from ontape.
1 indicates an exception condition.
ontape -s [ -A database | -B database | -N database | -U database ]
When you add logging to a database, you must create a level-0 backup before
the change takes effect.
-A directs ontape to change the status of the specified database to
ANSI-compliant logging.
-B directs ontape to change the status of the specified database to
buffered logging.
database is the name of the database. The database name cannot
include a database server name.
-N directs ontape to end logging for the specified database.
-s initiates a backup.
-U directs ontape to change the status of the specified database to
unbuffered logging.
Creating a Backup
This section explains how to plan for and create backups of your database
server data.
Choosing a Backup Level
The ontape utility supports level-0, level-1, and level-2 backups. For
information on scheduling backups, see “Planning a Recovery Strategy” on
page 1-10.
Tip: It is good practice to create a backup schedule that keeps level-1 and
level-2 backups small and to schedule frequent level-0 backups. With
such a backup schedule, you avoid having to restore large level-1 and
level-2 backups or many logical-log backups.
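A site might express such a schedule with cron entries similar to the
following sketch. The times shown are illustrative assumptions, and because
ontape prompts for input, in practice an operator runs these commands
interactively at the scheduled times rather than directly from cron:

0 2 * * 0   ontape -s -L 0    (weekly level-0 backup, Sunday 2 A.M.)
0 2 * * 1-6 ontape -s -L 1    (daily level-1 backup, Monday through Saturday)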
Level-0 Backups
When a fire or flood, for example, completely destroys a computer, you need
a level-0 backup to completely restore database server data on the
replacement computer. For online backups, the data on the backup tape
reflects the contents of the storage spaces at the time the level-0 backup
began. (That time can be as early as the last checkpoint that occurred before
the backup started.)
A level-0 backup can consume lots of time because ontape must write all the
pages to tape.
Level-1 Backups
A level-1 backup usually takes less time than a level-0 backup because you
copy only part of the database server data to the backup tape.
Level-2 Backups
A level-2 backup after a level-1 backup usually takes less time than another
level-1 backup because only the changes made after the last level-1 backup
(instead of the last level-0) get copied to the backup tape.
Backing Up After Changing the Physical Schema
You must perform a level-0 backup to ensure that you can restore the data
after you make the following administrative changes. Consider waiting to
make these changes until your next regularly scheduled level-0 backup.
v Changing TAPEDEV or LTAPEDEV from /dev/null
v Adding logging to a database
v Adding a dbspace, blobspace, or sbspace (required before you can restore
the new space with anything less than a full-system restore)
What Is an Online Backup?: You can use an online backup when you want
your database server accessible while you create the backup.
During an online backup, allocation of some disk pages in storage spaces can
temporarily freeze. Disk-page allocation is blocked for one chunk at a time
until you back up the used pages in the chunk.
Do not use quiescent backups when users need continuous access to the
databases.
A backup can take several reels of tape. If no operator is available
to mount a new tape when one becomes full, the backup waits. During this
wait, if the backup is an online backup, the physical-log space can fill
up, which causes the database server to abort the backup. Thus, make sure
an operator is available.
Labelling Tapes Created with ontape: When you label tapes created with
ontape, the label must include the following information:
v Backup level
v Date and time
v Tape number that ontape provides
Each backup begins with its first tape reel numbered 1. You number each
additional tape reel consecutively thereafter. You number a five-tape backup 1
through 5. (Of course, you might not know that it is a five-tape backup
until it is finished.)
Do not store more than one backup on the same tape; begin every backup
with a different tape. (Often, a backup spans more than one tape.)
Creating a Backup:
ontape -s [ -L 0|1|2 ] [ -v ] [ -t device | STDIO ] [ -F ]
The ontape utility backs up the storage spaces in the following order: the root
dbspace, blobspaces, sbspaces, and dbspaces.
Backing Up to Tapes
A backup can require multiple tapes. After a tape fills, ontape rewinds the
tape, displays the tape number for labelling, and prompts the operator to
mount the next tape when another one is needed. Follow the prompts for
labelling and mounting new tapes. A message informs you when the backup
is complete.
Backup Examples
Execute the following command to start a backup to tape without specifying a
level:
ontape -s
You can use the -L option to specify the level of the backup as part of the
command, as the following example shows:
ontape -s -L 0
When you do not specify the backup level on the command line, ontape
prompts you to enter it. Figure 13-1 illustrates a simple ontape backup
session.
[Figure 13-1: transcript of an ontape backup session, ending with the message “Program over.”]
The following example shows how to create a level-0 archive of all storage
spaces to standard output, which is diverted to a file named level_0_archive
in the directory /home:
ontape -s -L 0 >/home/level_0_archive
The compress system utility reads from the pipe as input, compresses the
data, and writes the data to the file level_1_archive in the /home/compressed
directory. The ontape information messages are sent to stderr.
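The pipeline that this paragraph describes might look like the following
sketch; the backup level and pathnames are illustrative:

ontape -s -L 1 | compress > /home/compressed/level_1_archive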
When LTAPEDEV and TAPEDEV are identical, assign a different value to the
logical-log tape device (LTAPEDEV) and initiate a logical-log-file backup.
Otherwise, your options are to either leave normal database server processing
suspended until the backup completes or cancel the backup.
You can take steps to prevent this situation. The section “Starting an
Automatic Logical-Log Backup” on page 13-13 describes these steps.
When a Backup Terminates Prematurely
When you cancel or interrupt a backup, sometimes the backup progresses to
the point where you can consider it complete. When listed in the monitoring
information, as described in “Monitoring Backup History Using oncheck” on
page 13-11, you know the backup completed.
Monitoring Backup History Using oncheck
You can monitor the history of your last full-system backup using oncheck.
For example, when no one accessed the database server after Tuesday at 7
P.M., and you create a backup Wednesday morning, the effective date and
time for that backup is Tuesday night, the time of the last checkpoint. In other
words, when there has been no activity after the last checkpoint, the database
server does not perform another checkpoint at the start of the backup.
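For example, the following command displays the reserved pages, which include
the archive information (backup level and effective date and time) for the
most recent backups:

oncheck -pr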
In addition to backing up logical-log files, you can use ontape to switch to the
next log file, move logical-log files to other dbspaces, or change the size of the
logical log. Instructions for those tasks appear in your IBM Informix:
Administrator's Guide.
Before You Back Up the Logical-Log Files
Before you back up the logical-log files, you need to understand the following
issues:
v Whether you need to back up the logical-log files
v When you need to back up the logical-log files
v Whether you want to perform an automatic or continuous backup
Using Blobspace TEXT and BYTE Data Types and Logical-Log Files
You must keep the following two points in mind when you use TEXT and
BYTE data types in a database that uses transaction logging:
v To ensure timely reuse of blobpages, back up logical-log files. When users
delete TEXT or BYTE values in blobspaces, the blobpages do not become
freed for reuse until you free the log file that contains the delete records. To
free the log file, you must back it up.
v When a blobspace is unavailable at backup time, ontape skips it, and the
TEXT or BYTE values in it cannot be recovered when a restore becomes
necessary. (However, blobpages from deleted TEXT or BYTE values do
become free when the blobspace becomes available even though the TEXT
or BYTE values were not backed up.)
When you set LTAPEDEV to /dev/null, the database server marks a logical-log
file as backed up (status B) as soon as it becomes full. The database server can
then reuse that logical-log file without waiting for you to back it up. As a
result, the database server does not preserve any logical-log records.
Fast recovery and rolling back transactions are not impaired when you use
/dev/null as your log-file backup device. For a description of fast recovery, see
your IBM Informix: Administrator's Guide. For information about rolling back
transactions, see the ROLLBACK WORK statement in the IBM Informix: Guide
to SQL Syntax.
When Must You Back Up Logical-Log Files?
Attempt to back up each logical-log file as soon as it fills. A logical-log
file is ready for backup when it has a used status. For
more information on monitoring the status of logical-log files, see your
IBM Informix: Administrator's Guide.
Starting an Automatic Logical-Log Backup
The database server can operate online when you back up logical-log files. To
back up all full logical-log files, use the -a option of the ontape command.
The -a option backs up all full logical-log files and prompts you with an
option to switch the logical-log files and back up the formerly current log.
When you press the Interrupt key while a backup occurs, the database server
finishes the backup and then returns control to you. Any other full logical-log
files receive a used status.
When you start a continuous backup, the database server automatically backs
up each logical-log file as it becomes full. When you perform continuous
logical-log file backups, the database server protects you against ever losing
more than a partial logical-log file, even in the worst case media failure when
a chunk that contains logical-log files fails.
With continuous backups you also do not need to remember to back up the
logical-log files, but someone must always make media available for the
backup process. Also, you must dedicate the backup device and a terminal to
the backup process.
To start a continuous backup of the logical-log files, use the -c option of the
ontape command.
The database server can operate in online mode when you start continuous
backups. To start continuous logging, execute the following command:
ontape -c
When the tape mounted on LTAPEDEV becomes full before the end of the
logical-log file, the database server prompts the operator for a new tape.
Ending a Continuous Logical-Log Backup
To end continuous logical-log backup, press the Interrupt key (CTRL-C).
When you press the Interrupt key while the database server waits for a
logical-log file to fill (and thus is not backing up any logical-log files), all logs
that were backed up before the interrupt reside on the tape and are marked as
backed up by the database server.
When you press the Interrupt key while the database server performs a
continuous backup to a remote device, any logical-log files that were backed
up during this operation might or might not reside on the tape, and are not
marked as backed up by the database server (a good reason not to perform
continuous backups to remote devices).
After you stop continuous logging, you must start a new tape for subsequent
log backup operations.
You must explicitly request logical-log backups (using ontape -a) until you
restart continuous logging.
What Device Must Logical-Log Backups Use?
The ontape utility uses parameters defined in the ONCONFIG file to define
the tape device for logical-log backups. However, consider the following
issues when you choose a logical-log backup device:
v When the logical-log device differs from the backup device, you can plan
your backups without considering the competing needs of the backup
schedule.
v When you specify /dev/null as the logical-log backup device in the
ONCONFIG parameter LTAPEDEV, you avoid having to mount and
maintain backup tapes. However, you can only recover data up to the point
of your most recent backup tape. You cannot restore work done after the
backup. See the warning about setting LTAPEDEV to /dev/null in “Using
/dev/null When You Do Not Need to Recover” on page 13-13.
v When your tape device is slow, the logical log can fill faster than
you can copy it to tape. In this case, consider performing the
backup to disk and then copying the disk backup to tape.
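As a sketch of that last approach, assume that LTAPEDEV points (directly or
through a symbolic link) to a disk file; the backup can later be copied to
tape with standard utilities. The pathnames and block size below are
illustrative:

ontape -a
dd if=/dbfiles/logtape of=/dev/rmt1 bs=32k

The first command writes the logical-log backup to the disk file; the second
copies that disk backup to tape.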
In This Chapter
This section provides instructions for restoring data using ontape for the
following procedures:
v A full-system restore
v A restore of selected dbspaces, blobspaces, and sbspaces
Before you start restoring data, you must understand the concepts in “What Is
a Restore?” on page 1-5. As explained in that section, a complete recovery of
database server data generally consists of a physical restore and a logical
restore.
When you need to restore any critical dbspace, you must perform a full
system restore to restore all the data that your database server manages. You
must start a full-system restore with a cold restore. See “Choosing a Cold,
Warm, or Mixed Restore” on page 14-3.
Restoring Selected Dbspaces, Blobspaces, and Sbspaces
When your database server does not go offline because of a disk failure or
corrupted data, the damage is confined to a noncritical dbspace, blobspace, or
sbspace.
When you do not need to restore a critical dbspace, you can restore only those
storage spaces that contain a damaged chunk or chunks. When a media
failure occurs in one chunk of a storage space that spans multiple chunks, all
active transactions for that storage space must terminate before the database
server can restore it. You can start a restore operation before the database
The database server is offline when you begin a cold restore, but it goes into
recovery mode after it restores the reserved pages. From that point on, it stays
in recovery mode until either a logical restore finishes (after which it works in
quiescent mode) or you use the onmode utility to shift it to another mode.
Dynamic Server
You can rename chunks by specifying new chunk paths and offsets during a
cold restore. This option is useful if you need to restore storage spaces to a
different disk from the one on which the backup was made. You can rename
any type of chunk, including critical chunks and mirror chunks. For more
information, see “Renaming Chunks During a Restore” on page 14-13.
A cold restore can be performed after a dbspace has been renamed, provided
that a level-0 backup of the whole system, or of the rootdbs and the renamed
dbspace, has been performed.
End of Dynamic Server
A Warm Restore
A warm restore restores noncritical storage spaces while the database server is
in online or quiescent mode. It consists of one or more physical restore
operations (when you restore multiple storage spaces concurrently), a
logical-log backup, and a logical restore.
During a warm restore, the database server replays backed-up logical-log files
for the storage spaces that you restore. To avoid overwriting the current
logical log, the database server writes the logical-log files that you designate
for replay to temporary space. Therefore, a warm restore requires enough
temporary space to hold the logical log or the number of log files being
replayed, whichever is smaller.
Warning: Make sure enough temporary space exists for the logical-log portion
of the warm restore; the maximum amount of temporary space that
the database server needs equals the size of all the logical-log files.
Dynamic Server
A warm restore can be performed after a dbspace has been renamed and a
level-0 archive of the rootdbs and renamed dbspace is taken.
End of Dynamic Server
A Mixed Restore
A mixed restore is a cold restore followed by a warm restore. A mixed restore
restores some storage spaces during a cold restore (the database server is
offline) and some storage spaces during a warm restore (the database server is
online). You could do a mixed restore when you perform a full-system restore,
but you need to provide access to a particular table or set of tables as soon as
possible. In this case, perform a cold restore to restore the critical dbspaces
and the dbspaces that contain the important tables.
A cold restore takes less total time to restore all your data than a mixed
restore (even though the database server is online during part of a mixed
restore), because a mixed restore requires two logical restores: one for the
cold restore and one for the warm restore. A mixed restore, however, requires
the database server to be offline for less time than a cold restore.
The dbspaces that are not restored during the cold restore do not become
available until after the database server restores them during a warm restore,
even though they might not be damaged.
Performing a Restore
Use the -r option to perform a full physical and logical restore of the database
server data with ontape. Use the -D option to restore selected storage spaces.
Use the -rename option to rename chunks during the restore.
Backup Tapes
Before you start your restore, gather together all the tapes from your latest
level-0 backup that contain the storage spaces you are restoring and any
subsequent level-1 or level-2 backups.
Identify the tape that has the latest level-0 backup of the root dbspace on it;
you must use this tape first.
Logical-Log Tapes
Gather together all the logical-log tapes from the backup after the latest
level-0 backup of the storage spaces you are restoring.
Deciding on a Complete Cold or a Mixed Restore
As mentioned in “Choosing a Cold, Warm, or Mixed Restore” on page 14-3,
when you restore your entire database server, you can restore the critical
dbspaces (and any other storage spaces you want to come online quickly)
during a cold restore, and then restore the remaining storage spaces during a
warm restore. Decide before you start the restore if you want a completely
cold restore or a mixed restore.
Verifying Your Database Server Configuration
During a cold restore, you cannot reinitialize shared memory, add chunks, or
change tape devices. Thus, when you begin the restore, the current database
server configuration must remain compatible with, and accommodate, all
parameter values assigned after the time of the most recent backup.
For guidance, use the copies of the configuration file that you create at the
time of each backup. However, you do not need to set all current parameters
to the same values as were recorded at the last backup. Pay attention to the
following three groups of parameters:
v Shared-memory parameters
v Mirroring parameters
v Device parameters
For example, if you drop a dbspace or mirroring for a dbspace after your
level-0 backup, you must make the dbspace or mirror chunk device available
to the database server when you begin the restore. When the database server
attempts to write to the chunk and cannot find it, the restore does not
complete. Similarly, if you add a chunk after your last backup, you must
make the chunk device available to the database server when it begins to roll
forward the logical logs.
Performing a Cold Restore
To perform a cold restore, the database server must be offline.
You must log in as user informix or root to use ontape. Execute the following
ontape command to restore all the storage spaces:
ontape -r
When you perform a mixed restore, you restore only some of the storage
spaces during the cold restore. You must restore at least all the critical
dbspaces, as the following example shows:
ontape -r -D rootdbs llogdbs plogdbs
When you perform a full restore, you can choose not to restore logical-log
files. When you do not back up your logical-log files or choose not to restore
them, you can restore your data only up to the state it was in at the time of
your last backup. For more information, see “Backing Up Logical-Log Files
with ontape” on page 13-12.
Archive Information
...
Initialization Time       05/15/2003 15:41:47
System Page Size          2048
Version                   12
Archive CheckPoint Time   06/03/2000 08:32:25

Dbspaces
number  flags  fchunk  nchunk  flags  owner     name
1       1      1       1       N      informix  rootdbs

Chunks
chk/dbs  offset  size   free   bpages  flags  pathname
1   1    50      25000  13512          PO-    /dev/raws/rootdbs

Program over.
When you restore only some of your storage spaces during the cold restore,
you can start a warm restore of the remaining storage spaces after you bring
the database server online.
Backup Tapes
Before you start your restore, gather together all the tapes from your latest
level-0 backup that contain the storage spaces you are restoring and any
subsequent level-1 or level-2 backups.
Logical-Log Tapes
Gather together all the logical-log tapes from the logical-log backup after the
latest level-0 backup of the storage spaces you are restoring.
Ensuring That Needed Devices Are Available
Verify that storage devices and files are available before you begin a restore.
For example, when you drop a dbspace or mirroring for a dbspace after your
level-0 backup, you must ensure that the dbspace or mirror chunk device is
available to the database server when you begin the restore.
When you add a chunk after your last backup, you must ensure that the
chunk device is available to the database server when it rolls forward the
logical logs.
Backing Up Logical-Log Files
Before you start a warm restore (even when you perform the warm restore as
part of a mixed restore), you must back up your logical-log files. See “Backing
Up Logical-Log Files with ontape” on page 13-12.
After the warm restore, you must roll forward your logical-log files to bring
the dbspaces that you are restoring to a state of consistency with the other
dbspaces in the system. Failure to roll forward the logical log after restoring a
selected dbspace results in the following message from ontape:
Partial system restore is incomplete.
You must log in as user informix or root to use ontape. To restore selected
storage spaces, execute the ontape command, with the options that the
following example shows:
ontape -r -D dbspace1 dbspace2
You cannot restore critical dbspaces during a warm restore; you must restore
them as part of a cold restore, described in “Restoring the Whole System” on
page 14-6.
During the restore, ontape prompts you to mount tapes with the appropriate
backup files.
At the end of the warm restore, the storage spaces that were down go online.
The ontape rename chunk restore only works for cold restores. The critical
dbspaces (for example, the rootdbs) must be restored during a cold restore. If
you do not specify the list of dbspaces to be restored, then the server will
restore the critical dbspaces and all the other dbspaces. But if you specify the
list of dbspaces to be restored, then the critical dbspaces must be included in
the list.
For the syntax of renaming chunks with ontape, see “Performing a Restore”
on page 14-4.
Tip: If you use symbolic links to chunk names, you might not need to rename
chunks; you need only edit the symbolic name definitions. For more
information, see the IBM Informix: Administrator's Guide.
Key Considerations
During a cold restore, ontape performs the following validations to rename
chunks:
1. It validates that the old chunk pathnames and offsets exist in the archive
reserved pages.
2. It validates that the new chunk pathnames and offsets do not overlap each
other or existing chunks.
3. If renaming the primary root or mirror root chunk, it updates the
ONCONFIG file parameters ROOTPATH and ROOTOFFSET, or
MIRRORPATH and MIRROROFFSET. The old version of the ONCONFIG
file is saved as $ONCONFIG.localtime.
4. It restores the data from the old chunks to the new chunks (if the new
chunks exist).
5. It writes the rename information for each chunk to the online log.
If either of the validation steps fails, the renaming process stops and ontape
writes an error message to the ontape activity log.
Warning: Perform a level-0 archive after you rename chunks; otherwise your
next restore will fail.
Important: If you add a chunk after performing a level-0 archive, that chunk
cannot be renamed during a restore. Also, you cannot safely
specify that chunk as a new path in the mapping list.
The following table lists example values for two chunks that are used in the
examples in this section.
Perform a level-0 archive after the rename and restore operation is complete.
You can combine renaming chunks with existing devices and renaming
chunks with nonexistent devices in the same rename operation. This example
shows how to rename a single chunk to a nonexistent device name.
The following table lists example values for the chunks used in this example.
Storage Space  Old Chunk Path  Old Offset  New Chunk Path  New Offset
sbspace1       /chunk3         0           /chunk3N        0
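Assuming that the rename options take the form old path (-p), old offset
(-o), new path (-n), and new offset (-t), as the rename syntax elsewhere in
this guide describes, the rename in the preceding table might be requested as
follows (offsets are in kilobytes):

ontape -r -rename -p /chunk3 -o 0 -n /chunk3N -t 0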
When you perform a restore from standard input, ontape does not prompt
you for options or information. If ontape cannot perform the operation with
the information you provided in the restore command, ontape exits with an
appropriate error. Restoring from standard input differs from restoring from
tapes in the following ways:
v No logical restore or logical log salvage occurs.
To perform a logical restore, use the ontape -l command after the physical
restore.
To salvage logical logs, use the ontape -S command prior to the physical
restore.
v You are not prompted to confirm the restore. Informational messages about
the archive are sent to stderr.
If you detect a problem, you can interrupt the restore during the 10-second
delay between the display of the archive information and the starting of the
database server.
In the following example, ontape performs a physical restore from the file
level_0_archive, which contains the archive previously performed to standard
output:
cat /home/level_0_archive | ontape -p
When these restores are completed, the database server is left in single-user
mode.
For example, the following command loads data into the secondary server of
an HDR pair (named secondary_host):
ontape -s -L 0 -F | rsh secondary_host "ontape -p"
This command performs a fake level-0 archive of the database server on the
local computer, pipes the data to the remote computer using the rsh system
utility, and performs a physical restore on the remote computer by reading the
data directly from the pipe.
In This Chapter
This chapter discusses recovering data using external backup and restore
using the ontape utility.
The following are typical scenarios for external backup and restore:
v Availability with disk mirroring. If you use hardware disk mirroring, you can
get your system online faster with external backup and restore than with
conventional ontape commands.
v Cloning. You can use external backup and restore to clone an existing
production system for testing or migration without disturbing the
production system.
Important: Because the external backup is outside the control of ontape, you
must track these backups manually. For more information, see
“Tracking an External Backup” on page 7-12.
Warning: Because external backup is not done through ontape, you must
ensure that you have a backup of the current logical log from
the time when you execute the onutil EBR BLOCK or onmode -c
block command. Without a backup of this logical-log file, the
external backup is not restorable.
5. After you perform an external backup, back up the current log.
Use the following command to back up the current log:
ontape -a
If you lose a disk or the whole system, you are now ready to perform an
external restore.
Chapter 15. Performing an External Backup and Restore Using ontape 15-3
Element   Purpose                                      Key Considerations
-c        Performs a checkpoint and blocks or          None.
          unblocks the database server.
block     Blocks the database server from any          Sets up the database server for an
          transactions.                                external backup. While the database
                                                       server is blocked, users can access
                                                       it in read-only mode. Sample
                                                       command: onmode -c block
unblock   Unblocks the database server, allowing       Do not unblock until the external
          data transactions and normal database        backup is finished. Sample
          server operations to resume.                 command: onmode -c unblock
Warning: When you perform a cold external restore, ontape does not first
attempt to salvage logical-log files from the database server because
the external backup has already copied over the logical-log data.
v If you are restoring critical dbspaces, the database server must be offline.
v If you are restoring the rootdbs, disable mirroring during the restore.
v The external backups of all critical dbspaces of the database server instance
must have been simultaneous. All critical dbspaces must have been backed
up within the same onmode -c block ... onmode -c unblock command
bracket.
Performing an External Restore
This section describes procedures for performing cold external restores.
Part 4. Other Data Restore Utilities
In This Chapter
This chapter describes how to use the archecker utility to perform
point-in-time table-level restores that extract tables or portions of tables from
archives and logical logs. For information on using the archecker utility to
verify backups, see Chapter 5, “Verifying Backups,” on page 5-1.
Overview of archecker
IBM Informix servers provide several utilities for recovering data from an
archive. Which utility you should use depends on the situation.
For information on the configuration parameters used in this file, see “The
archecker Configuration Parameter Reference” on page 16-19 later in this
chapter.
The archecker Schema Command File
The archecker utility uses a schema command file to specify the following:
v Source tables
v Destination tables
v Table schemas
v Databases
v External tables
v Point in time to which the table is restored
v Other options
If both methods are specified, the -f command line option takes precedence.
Restoring Data
There are two types of restores that archecker performs:
v A physical restore, which is based on a level-0 archive.
v A physical restore followed by a logical restore, which uses both a level-0
archive and logical logs to restore data to a specific point in time.
When performing a logical restore, archecker uses two processes that run
simultaneously:
v Stager: assembles the logical logs and saves them in tables.
v Applier: converts the log records to SQL statements and executes the
statements.
The Stager
To collect the pertinent logical log records, the stager performs the following
steps:
1. Scans only the backed-up logical logs.
The stager reads the backed-up logical log files and assembles complete
log records.
2. Tests the logical log records.
Any log record that is not applicable to the tables being restored is
rejected.
3. Inserts the logical log information into a table.
If the logical log record is not rejected, it is inserted into a stage table.
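The filter-and-stage loop above can be sketched in a few lines. This is only an illustration, with plain dictionaries standing in for logical-log records and a list standing in for the stage table; the real stager operates on Informix logical-log pages and tables in the sysutils database:

```python
# Sketch of the stager's steps 2 and 3: reject records that do not apply
# to the tables being restored, stage the rest.
def stage_records(log_records, restored_tables):
    staged = []
    for rec in log_records:
        # Step 2: reject records not applicable to the restored tables
        if rec["table"] not in restored_tables:
            continue
        # Step 3: insert the surviving record into the stage table
        staged.append(rec)
    return staged

records = [
    {"table": "customer", "op": "insert"},
    {"table": "orders", "op": "update"},
    {"table": "customer", "op": "delete"},
]
print(stage_records(records, {"customer"}))
```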
The Applier
The applier reads data from the control table created by the stager. It begins
processing the required transaction and updates the control table to show that
this transaction is in process. Next, it operates on each successive log record,
row by row, until the transaction commits.
All updates to the control table occur in the same transaction as the log record
modification. This allows all work to be completed or undone as a single unit,
maintaining integrity at all times. If an error occurs, the transaction is rolled
back and the error is recorded in the control table entry for this transaction.
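The single-transaction discipline described here can be illustrated with a small sketch, using Python's sqlite3 in place of Informix; the control-table schema and the status values are invented for the illustration and are not the actual archecker catalog:

```python
import sqlite3

# The row changes and the control-table status update commit or roll
# back as one unit, so the control table always reflects reality.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE target (id INTEGER PRIMARY KEY, val TEXT);
    CREATE TABLE control (txid INTEGER PRIMARY KEY, status TEXT);
    INSERT INTO control VALUES (1, 'pending'), (2, 'pending');
""")

def apply_transaction(conn, txid, rows):
    try:
        with conn:  # one transaction for the log changes + control update
            conn.execute("UPDATE control SET status = 'committed' WHERE txid = ?",
                         (txid,))
            for row in rows:
                conn.execute("INSERT INTO target VALUES (?, ?)", row)
    except sqlite3.Error as exc:
        # the with-block already rolled back every change in this unit;
        # record the failure in the control-table entry
        conn.execute("UPDATE control SET status = ? WHERE txid = ?",
                     ("error: %s" % exc, txid))
        conn.commit()

apply_transaction(conn, 1, [(1, "a"), (2, "b")])   # succeeds as a unit
apply_transaction(conn, 2, [(1, "duplicate id")])  # fails, rolled back as a unit
print(conn.execute("SELECT txid, status FROM control ORDER BY txid").fetchall())
```

The failed transaction leaves no partial rows in the target table, which is the integrity property the applier maintains.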
Note: When data is being restored and the DBA has elected to include a
logical restore, two additional work columns and an index are added to
the destination table. These columns contain the original rowid and
original part number. These columns provide a unique key that
identifies the location of the row on the original source archive.
After the applier has finished and the restore is complete, these
columns, and any indexes created on them, are dropped from the
destination table.
Considerations
Consider the following issues when performing a logical restore:
v The table or table fragments being recovered must exist in the level-0
archive. The table or fragment cannot be created or added during the
logical recovery. Tables created or fragments added during the logical
recovery are ignored.
v Because a detached fragment is no longer part of the original table, the
applier does not process the detached fragment log record or any other log
records for this fragment from this point forward. A message in the
archecker message log file indicates a detach occurred.
v If a table is altered, dropped, or truncated during a logical restore, the
restore terminates for that table. Termination occurs at the point that the
alter was performed. A message in the archecker message log file records
that an alter operation occurred.
v During a level-0 archive, there cannot be any open transactions that would
change the schema of the table.
v You cannot perform a logical restore on an external table.
-b Table-Level Restore

archecker -b [-d] [-v] [-s] [-t] [-Z page_size] [-D] [-i] [-V]
    -f cmd_file [, cmd_file ...] [-l phys | stage | apply] [-D]

Notes:
1 -d is available on Dynamic Server only.
2 -Z page_size applies to Extended Parallel Server only.
Element Description
-b Provides direct XBSA access.
-d Deletes previous archecker restore files, except the archecker
message log.
-D Deletes previous archecker restore files, except the archecker
message log, and then exits.
Examples
The following sections illustrate how to use archecker commands.
After the physical restore is complete, the archecker utility starts the stager.
After the stager has started, the applier is automatically started.
In the following example, the -lstage option starts the archecker stager. The
stager extracts the logical log records from the storage manager and saves the
applicable records to a table.
archecker -bvs -f cmdfile -lstage
The stager should only be started after physical recovery has completed.
For more information on specifying which command file archecker uses, see
“The archecker Schema Command File” on page 16-3.
Syntax
The syntax of the CREATE TABLE used in the archecker schema command
file is identical to the corresponding IBM Informix SQL statement. For a
description of this syntax, see the IBM Informix Guide to SQL: Syntax.
Usage
You must include the schema for the source table in the archecker schema
command file. This schema must be identical to the schema of the source table
at the time the archive was created.
The source table cannot be a synonym or view. The schema of the source table
needs only the column list and storage options. Other attributes, such as
extent sizes and lock modes, are ignored. For an ON-Bar archive, archecker
uses the list of storage spaces for the source table to create its list of objects to
retrieve from the storage manager. If the source table is fragmented, you must
list all dbspaces that contain data for the source table. The archecker utility
only extracts data from the dbspaces listed in the schema command file.
You must also include the schema of the target table in the command file. If
the target table does not exist at the time the restore is performed, it is created
using the schema provided.
If the target table already exists, its schema must match the schema specified
in the command file. Data is then appended to the existing table.
Examples
The schemas of the source and target tables do not have to be identical. The
following example shows how you can repartition the source data after
performing the data extraction:
CREATE TABLE source (col1 integer, ...) IN dbspace1;
CREATE TABLE target (col1 integer, ...)
FRAGMENT BY EXPRESSION
MOD(col1, 3) = 0 IN dbspace3,
MOD(col1, 3) = 1 IN dbspace4,
MOD(col1, 3) = 2 IN dbspace5;
INSERT INTO target SELECT * FROM source;
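The MOD expression above routes each row deterministically by col1 modulo 3. This sketch reproduces the routing rule with the same dbspace names, purely as an illustration of where each row lands:

```python
# Mirror of FRAGMENT BY EXPRESSION MOD(col1, 3) routing from the example.
def route(col1):
    return ("dbspace3", "dbspace4", "dbspace5")[col1 % 3]

print([route(c) for c in (0, 1, 2, 3, 4)])
```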
Syntax
The syntax of the CREATE EXTERNAL TABLE statement for the archecker
schema file is not identical to the SQL CREATE EXTERNAL TABLE statement.
CREATE EXTERNAL TABLE table ( column [, column ...] )
    USING ( "filename" [, ...] [, DELIMITED | INFORMIX] );
Element Description
column The name of the column. Must conform to SQL identifier syntax
rules. For more information, see the IBM Informix Guide to SQL: Syntax.
Usage
When you use the CREATE EXTERNAL TABLE statement to send data to an
external table, the data is only extracted from a level-0 archive. Logical logs
are not rolled forward on an external table.
You can specify either of the following formats for external files:
v DELIMITED: ASCII delimited file. This is the default format.
v INFORMIX: internal binary representation. To optimize performance, filters
are not applied to external tables. If filters exist, a warning indicates that
they will be ignored.
Syntax
DATABASE dbname [MODE ANSI];
Element Description
dbname The name of the current database.
Usage
Multiple DATABASE statements can be used. All table names referenced
following this statement are associated with the current database.
If the logging mode of the source database is ANSI and default decimal
columns are used in the table schemas, then the logging mode of the database
must be declared.
Examples
In the following example, both the source and target tables reside in the same
database dbs.
DATABASE dbs;
CREATE TABLE source (...);
CREATE TABLE target (...);
INSERT INTO target SELECT * from source;
You can use multiple DATABASE statements to extract a table from one database
into another database.
DATABASE dbs1;
CREATE TABLE source (...) IN dbspace1;
DATABASE dbs2;
CREATE TABLE target (...) IN dbspace2;
INSERT INTO dbs2:target SELECT * FROM dbs1:source;
Syntax
INSERT INTO target_table [ ( target_column [, target_column ...] ) ]
    SELECT select_list FROM source_table;
Examples
The following example demonstrates the simplest form of the INSERT
statement. This statement extracts all rows and columns from the source to the
target table.
INSERT INTO target SELECT * FROM source;
You can also extract a subset of columns. In the following example, only two
columns from the source table are inserted into the destination table.
CREATE TABLE source (col1 integer, col2 integer, col3 integer, col4 integer);
CREATE TABLE target (col1 integer, col2 integer);
INSERT INTO target (col1, col2) SELECT col3, col4 FROM source;
Element Description
"time" The date and time the table is to be restored to.
Usage
The TO clause is used to restore the table to a specific point in time, which is
specified by a date and time or the reserved word CURRENT.
Example
RESTORE TO CURRENT WITH NO LOG;
Syntax
SET COMMIT TO number;
SET WORKSPACE TO dbspace [, dbspace ...];

Note: On Extended Parallel Server only, a dbslice can be specified in
place of a dbspace.
Element Description
number Sets the number of records to insert before committing during a
physical restore. The default is 1000.
The archecker utility creates several tables for the staging of logical log
records during a logical restore. These tables are created in the sysutils
database and stored in the working storage space.
Examples
SET COMMIT TO 20000;
SET WORKSPACE TO dbspace1;
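The effect of SET COMMIT TO is a periodic commit during the physical restore rather than a single commit at the end. The batching pattern can be sketched as follows; this is a pure illustration using Python's sqlite3 with an invented target table, not archecker itself:

```python
import sqlite3

# Commit after every `commit_to` inserted records (archecker's default
# interval is 1000), then flush any final partial batch.
def batched_insert(conn, rows, commit_to=1000):
    pending = 0
    commits = 0
    for row in rows:
        conn.execute("INSERT INTO target (col1) VALUES (?)", (row,))
        pending += 1
        if pending == commit_to:
            conn.commit()
            commits += 1
            pending = 0
    if pending:
        conn.commit()  # flush the final partial batch
        commits += 1
    return commits

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (col1 INTEGER)")
print(batched_insert(conn, range(2500), commit_to=1000))  # 3 commits
```

A smaller interval limits how much work is lost or redone on failure; a larger one reduces commit overhead.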
SQL Comments
Standard SQL comments are allowed in the archecker utility file and are
ignored during processing.
Schema Command File Examples
The following examples show different command file syntax for different data
recovery scenarios.
Configuration
Parameter Purpose
AC_DEBUG Prints debugging messages in the archecker message log
AC_IXBAR Specifies the pathname to the IXBAR file
AC_LTAPEBLOCK Specifies the ontape block size for reading logical logs.
AC_LTAPEDEV (IDS) Specifies the local device name used by ontape for reading
logical logs.
AC_MSGPATH Specifies the location of the archecker message log
AC_PAGESIZE (XPS) Specifies the online page size used by the server (can be
overridden by the -Z command line parameter)
AC_SCHEMA Specifies the pathname to the archecker schema command
file
AC_STORAGE Specifies the location of the temporary files that archecker
builds
AC_TAPEBLOCK Specifies the tape block size in kilobytes.
AC_TAPEDEV (IDS) Specifies the local device name used by the ontape utility
AC_VERBOSE Specifies either verbose or terse mode for archecker
messages
BAR_BSALIB_PATH Identical to the BAR_BSALIB_PATH server configuration
parameter.
Dynamic Server
AC_DEBUG
Default value Off
Range 1-16
Dynamic Server
v ontape -t
AC_LTAPEDEV (IDS)
Default value None
Range Any valid pathname or STDIO
Dynamic Server
v ontape -t
When you use ontape, the value of AC_TAPEBLOCK should be the value
that the TAPEBLK ONCONFIG configuration parameter was set to at the
time of the archive. For more information, see “Specifying the
Tape-Block-Size Parameters” on page 12-5.
End of Dynamic Server
AC_TAPEDEV (IDS)
Default value None
Range Any valid pathname or STDIO
During an archive, the database server validates every page before writing it
to the archive device. This validation checks that the elements on the page are
consistent with the expected values. When a page fails this validation, a
message similar to the following is written to the online.log file:
15:06:37 Assert Failed: Archive detects that page 0xc00021 is corrupt.
15:06:37 IBM Informix Dynamic Server Version 9.40.UC1
15:06:37 Who: Session(25, informix@cronus, 67612, 1085259772)
Thread(50, arcbackup1, 40acf758, 4)
File: rsarcbu.c Line: 2549
15:06:37 stack trace for pid 67367 written to /tmp/af.41ad7b9
15:06:37 See Also: /tmp/af.41ad7b9
The page number is printed in hexadecimal. The format for page number is
0xCCCPPPPP where CCC represents the chunk number, and PPPPP represents
the page number. For this example, the corrupted page is in chunk 0xc (12
decimal) and page 0x21 (33 decimal). The archive aborts after detecting 10
corrupt pages. The online.log file displays the full error message, including
the page address, for the first 10 errors. Subsequently, only the count of
corrupt pages is written to the online.log.
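The chunk and page numbers can be extracted from such an address arithmetically. A small sketch of the 0xCCCPPPPP decoding described above (the low five hex digits, 20 bits, are the page; the rest is the chunk):

```python
# Decode a corrupt-page address of the form 0xCCCPPPPP.
def decode_page_address(addr):
    page = addr & 0xFFFFF   # low 5 hex digits = page number
    chunk = addr >> 20      # remaining digits = chunk number
    return chunk, page

chunk, page = decode_page_address(0xC00021)
print(chunk, page)  # 12 33, matching the example in the text
```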
When you receive this message, identify which table the corrupt page belongs
to by examining the output of the oncheck -pe command. To determine the
extent of the corruption, execute the oncheck -cID command for that table.
A corrupt page is saved onto the backup media. During a restore, the corrupt
page is returned in its corrupt form. No error messages are written to the
online.log when corrupt pages are restored, only when they are archived.
To solve this problem, check if the database server is still running. If it is, shut
down the database server and run the command again.
onstat -d
Use the -d option to display information for chunks in each storage space.
You can interpret output from this option as follows. The first section of the
display describes the storage spaces:
address Is the address of the storage space in the shared-memory
space table
number Is the unique ID number of the storage space assigned at
creation
flags Uses the following hexadecimal values to describe each
storage space:
0x00000000 Mirror not allowed and dbspace is unmirrored
0x00000001 Mirror is allowed and dbspace is unmirrored
0x00000002 Mirror is allowed and dbspace is mirrored
0x00000004 Down
0x00000008 Newly mirrored
0x00000010 Blobspace
0x00000020 Blobspace on removable media
0x00000040 Blobspace is on optical media
0x00000080 Blobspace is dropped
0x00000100 Blobspace is the optical STAGEBLOB
0x00000200 Space is being recovered
0x00000400 Space is fully recovered
0x00000800 Logical log is being recovered
0x00001000 Table in dbspace is dropped
0x00002000 Temporary dbspace
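Because the flags value is a bitmask, the properties of a storage space can be decoded programmatically. A sketch using the bit values listed above; the zero value is the special "mirror not allowed and unmirrored" case:

```python
# Bit values and descriptions taken from the onstat -d flags list above.
FLAG_BITS = {
    0x00000001: "Mirror is allowed and dbspace is unmirrored",
    0x00000002: "Mirror is allowed and dbspace is mirrored",
    0x00000004: "Down",
    0x00000008: "Newly mirrored",
    0x00000010: "Blobspace",
    0x00000020: "Blobspace on removable media",
    0x00000040: "Blobspace is on optical media",
    0x00000080: "Blobspace is dropped",
    0x00000100: "Blobspace is the optical STAGEBLOB",
    0x00000200: "Space is being recovered",
    0x00000400: "Space is fully recovered",
    0x00000800: "Logical log is being recovered",
    0x00001000: "Table in dbspace is dropped",
    0x00002000: "Temporary dbspace",
}

def decode_space_flags(flags):
    if flags == 0:
        return ["Mirror not allowed and dbspace is unmirrored"]
    return [desc for bit, desc in sorted(FLAG_BITS.items()) if flags & bit]

print(decode_space_flags(0x2001))
```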
The line immediately following the storage-space list includes the number of
active spaces (the current number of dbspaces in the database server instance
including the rootdbs) and the number of total spaces.
Active spaces refers to the current number of storage spaces in the database
server instance including the rootdbs. Total refers to total allowable spaces for
this database server instance.
The line immediately following the chunk list displays the number of active
chunks (including the root chunk) and the total number of chunks.
onstat -l
Use the -l option to display information about physical and logical logs. You
can interpret output from this option as follows. The first section of the
display describes the physical-log configuration:
buffer Is the number of the physical-log buffer
bufused Is the number of pages of the physical-log buffer that are used
bufsize Is the size of each physical-log buffer in pages
numpages Is the number of pages written to the physical log
numwrits Is the number of writes to disk
pages/io Is calculated as numpages/numwrits
This value indicates how effectively physical-log writes are
being buffered.
phybegin Is the physical page number of the beginning of the log
physize Is the size of the physical log in pages
phypos Is the current position in the log where the next log-record
write is to occur
phyused Is the number of pages used in the log
%used Is the percent of pages used
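The pages/io field above is a plain quotient of numpages over numwrits; a higher ratio means physical-log writes are being buffered more effectively. The sample figures below are invented for illustration:

```python
# pages/io = numpages / numwrits, guarding against a zero write count.
def pages_per_io(numpages, numwrits):
    return numpages / numwrits if numwrits else 0.0

print(round(pages_per_io(1894, 121), 2))
```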
The database server uses temporary logical logs during a warm restore because
the permanent logs are not available then. The following fields are repeated
for each temporary logical-log file:
address Is the address of the log-file descriptor
number Is logid number for the logical-log file
flags Provides the status of each log as follows:
B Backed up
C Current logical-log file
F Free, available for use
U Used
uniqid Is the unique ID number of the log
begin Is the beginning page of the log file
size Is the size of the log in pages
used Is the number of pages used
%used Is the percent of pages used
active Is the number of active temporary logical logs
In Extended Parallel Server only, ensure that the BAR_SM parameters in the
ONCONFIG file match the new storage-manager definition.
End of Extended Parallel Server
You can switch between certain storage managers more easily than others. For
details, contact Technical Support or your vendor.
To migrate to ON-Bar:
1. Use ontape to perform a full backup.
For details, see Chapter 13, “Backing Up with ontape,” on page 13-1.
2. Take the backup media offline to prevent possible reuse or erasure.
3. Configure the storage manager to be used with ON-Bar.
For details, see Chapter 3, “Configuring the Storage Manager and
ON-Bar,” on page 3-1.
If your script searches for ON-Bar process IDs, be aware that the relationship
between the onbar-driver and the onbar_d child processes or onbar-worker
processes is quite different for Dynamic Server and Extended Parallel Server.
Dynamic Server
If you reuse a private script for Extended Parallel Server on Dynamic Server,
remove the following ON-Bar options that Dynamic Server does not support:
v -q (session name)
v -b -p (physical-only backup)
If you reuse a private script for Dynamic Server on Extended Parallel Server,
remove the following ON-Bar options that Extended Parallel Server does not
support:
v -w (whole-system backup)
v -c (current log backup)
v -C (continuous-log backup)
v -RESTART (restartable restore)
ON-Bar must run on the same computer as the database server. However,
you can run ON-Bar in any locale for which you have the supporting
message and localization files. For example, if the server locale is English and
the client locale is French, you can issue ON-Bar commands in French.
Windows Only
The sysutils database, the emergency boot files, and the storage-manager boot
file are created with the en_us.8859-1 (default English) locale. The ON-Bar
catalog tables in the sysutils database are in English. Change the client and
database locales to en_us.8859-1 before you attempt to connect to the sysutils
database with DB-Access or third-party utilities.
Identifiers That Support Non-ASCII Characters
The IBM Informix: GLS User's Guide describes the SQL identifiers that support
non-ASCII characters. Non-ASCII characters include both 8-bit and multibyte
characters. You can use non-ASCII characters in the database names and
filenames with the ON-Bar, ondblog, and onutil commands, and for
filenames in the ONCONFIG file.
For example, you can specify a non-ASCII filename for the ON-Bar activity
log in BAR_ACT_LOG and a non-ASCII pathname for the storage-manager
library in BAR_BSALIB_PATH.
If you perform a point-in-time restore, enter the date and time in the format
specified in the GL_DATETIME environment variable if it is set.
Point-in-Time Restore Example
For example, the default date and time format for the French locale,
fr_fr.8859-1, uses the format "%A %.1d %B %iY %H:%M:%S". The ON-Bar
command for a point-in-time restore is as follows:
onbar -r -t "Lundi 9 Juin 1997 11:20:14"
You can set GL_DATETIME to a different date and time format that uses the
date, month, two-digit year, hours, minutes, and seconds.
%.1d %B %iy %H:%M:%S
Tip: For more information on how to use GLS and the GL_DATE and
GL_DATETIME environment variables, refer to the IBM Informix: GLS
User's Guide.
Each line starts with a dotted decimal number; for example, 3 or 3.1 or 3.1.1.
To hear these numbers correctly, make sure that your screen reader is set to
read punctuation. All syntax elements that have the same dotted decimal
number (for example, all syntax elements that have the number 3.1) are
mutually exclusive alternatives. If you hear the lines 3.1 USERID and 3.1
SYSTEMID, your syntax can include either USERID or SYSTEMID, but not both.
The dotted decimal numbering level denotes the level of nesting. For example,
if a syntax element with dotted decimal number 3 is followed by a series of
syntax elements with dotted decimal number 3.1, all the syntax elements
numbered 3.1 are subordinate to the syntax element numbered 3.
Certain words and symbols are used next to the dotted decimal numbers to
add information about the syntax elements. Occasionally, these words and
symbols might occur at the beginning of the element itself. For ease of
identification, if the word or symbol is a part of the syntax element, the word
or symbol is preceded by the backslash (\) character. The * symbol can be
used next to a dotted decimal number to indicate that the syntax element
repeats. For example, syntax element *FILE with dotted decimal number 3 is
read as 3 \* FILE. Format 3* FILE indicates that syntax element FILE repeats.
Format 3* \* FILE indicates that syntax element * FILE repeats.
The following words and symbols are used next to the dotted decimal
numbers:
? Specifies an optional syntax element. A dotted decimal number
followed by the ? symbol indicates that all the syntax elements with a
corresponding dotted decimal number, and any subordinate syntax
elements, are optional. If there is only one syntax element with a
dotted decimal number, the ? symbol is displayed on the same line as
the syntax element (for example, 5? NOTIFY). If there is more than one
syntax element with a dotted decimal number, the ? symbol is
displayed on a line by itself, followed by the syntax elements that are
optional. For example, if you hear the lines 5 ?, 5 NOTIFY, and 5
UPDATE, you know that syntax elements NOTIFY and UPDATE are
optional; that is, you can choose one or none of them. The ? symbol is
equivalent to a bypass line in a railroad diagram.
! Specifies a default syntax element. A dotted decimal number followed
by the ! symbol and a syntax element indicates that the syntax
element is the default option for all syntax elements that share the
same dotted decimal number. Only one of the syntax elements that
share the same dotted decimal number can specify a ! symbol. For
example, if you hear the lines 2? FILE, 2.1! (KEEP), and 2.1
(DELETE), you know that (KEEP) is the default option for the FILE
keyword. In this example, if you include the FILE keyword but do not
specify an option, default option KEEP is applied. A default option also
applies to the next higher dotted decimal number. In this example, if
the FILE keyword is omitted, default FILE(KEEP) is used. However, if
you hear the lines 2? FILE, 2.1, 2.1.1! (KEEP), and 2.1.1 (DELETE),
the default option KEEP only applies to the next higher dotted
decimal number, 2.1 (which does not have an associated keyword),
and does not apply to 2? FILE. Nothing is used if the keyword FILE is
omitted.
* Specifies a syntax element that can be repeated zero or more times. A
dotted decimal number followed by the * symbol indicates that this
syntax element can be used zero or more times; that is, it is optional
and can be repeated. For example, if you hear the line 5.1*
data-area, you know that you can include more than one data area or
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give
you any license to these patents. You can send license inquiries, in writing, to:
IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.
The following paragraph does not apply to the United Kingdom or any
other country where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY
OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow
disclaimer of express or implied warranties in certain transactions, therefore,
this statement may not apply to you.
Any references in this information to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those
Web sites. The materials at those Web sites are not part of the materials for
this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it
believes appropriate without incurring any obligation to you.
Licensees of this program who wish to have information about it for the
purpose of enabling: (i) the exchange of information between independently
created programs and other programs (including this one) and (ii) the mutual
use of the information which has been exchanged, should contact:
IBM Corporation
J46A/G4
555 Bailey Avenue
San Jose, CA 95141-1003
U.S.A.
The licensed program described in this information and all licensed material
available for it are provided by IBM under terms of the IBM Customer
Agreement, IBM International Program License Agreement, or any equivalent
agreement between us.
All IBM prices shown are IBM’s suggested retail prices, are current and are
subject to change without notice. Dealer prices may vary.
This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include
the names of individuals, companies, brands, and products. All of these
names are fictitious and any similarity to the names and addresses used by an
actual business enterprise is entirely coincidental.
COPYRIGHT LICENSE:
Each copy or any portion of these sample programs or any derivative work,
must include a copyright notice as follows:
© (your company name) (year). Portions of this code are derived from IBM
Corp. Sample Programs. © Copyright IBM Corp. (enter the year or years).
All rights reserved.
If you are viewing this information softcopy, the photographs and color
illustrations may not appear.
Notices F-3
Trademarks
AIX; DB2; DB2 Universal Database; Distributed Relational Database
Architecture; NUMA-Q; OS/2, OS/390, and OS/400; IBM Informix®;
C-ISAM®; Foundation.2000™; IBM Informix® 4GL; IBM Informix®
DataBlade® Module; Client SDK™; Cloudscape™; Cloudsync™; IBM
Informix® Connect; IBM Informix® Driver for JDBC; Dynamic Connect™;
IBM Informix® Dynamic Scalable Architecture™ (DSA); IBM Informix®
Dynamic Server™; IBM Informix® Enterprise Gateway Manager (Enterprise
Gateway Manager); IBM Informix® Extended Parallel Server™; i.Financial
Services™; J/Foundation™; MaxConnect™; Object Translator™; Red Brick™;
IBM Informix® SE; IBM Informix® SQL; InformiXML™; RedBack®;
SystemBuilder™; U2™; UniData®; UniVerse®; wintegrate® are trademarks or
registered trademarks of International Business Machines Corporation.
Java and all Java-based trademarks and logos are trademarks or registered
trademarks of Sun Microsystems, Inc. in the United States and other countries.
Other company, product, and service names used in this publication may be
trademarks or service marks of others.
Index X-3
Configuration parameter (continued)
  BAR_BSALIB_PATH 3-9, 3-11, 3-13, 9-8
  BAR_DBS_COSVR 3-13, 9-10
  BAR_DEBUG 2-14, 3-11
  BAR_HISTORY 3-11, 3-13, 9-11
  BAR_IDLE_TIMEOUT 3-13, 4-31, 9-11
  BAR_LOG_COSVR 3-14, 9-12
  BAR_MAX_BACKUP 3-12, 4-29, 9-13
  BAR_NB_XPORT_COUNT 3-12, 9-13
  BAR_PROGRESS_FREQ 3-12, 3-14, 9-14
  BAR_RETRY 3-12, 3-14, 9-14
  BAR_SM 3-14, 9-16
  BAR_SM_NAME 3-14, 9-16
  BAR_WORKER_COSVR 3-14, 9-16
  BAR_WORKER_MAX 3-16, 9-17
  BAR_XFER_BUF_SIZE 3-12, 9-18
  BAR_XFER_BUFSIZE 3-14, 9-19, 9-20
  BAR_XPORT_COUNT 3-14, 9-20
  BAR-WORKER_MAX 3-14
  DBSERVERNAME 6-36, 10-8
  Extended Parallel Server 3-12, 3-15
    multiple storage managers 3-12
  global 3-13
  ISM_DATA_POOL 3-9, 3-12, 3-15, 9-20
  ISM_LOG_POOL 3-9, 3-12, 3-15, 9-20
  LOG_BACKUP_MODE 3-15
  LTAPEDEV 3-12, 9-21
  OFF_RECVRY_THREADS 6-16
  ON_RECVRY_THREADS 6-16
  RESTARTABLE_RESTORE 3-12, 6-41, 9-22
  SERVERNUM 6-36
  setting, ontape 12-2
  storage-manager section 3-13
Configuring
  ISM 3-2
  third-party storage manager 3-2
  TSM 3-3
Contact information xxx
Continuous log backup
  example 2-3
  pausing 8-11
  specifying 1-3, 4-20, 4-27
  using ALARMPROGRAM 9-5
Controlling ON-Bar sessions 8-11
Conventions
  command-line xx
  documentation xvi
  sample-code xxii
  syntax diagrams xviii
  syntax notation xviii
  typographical xvii
Cooked chunks
  backing up 4-8
  restoring 6-27
Copying data 16-2
CREATE EXTERNAL TABLE statement 16-11
  syntax 16-11
CREATE TABLE statement
  syntax 16-10
Critical dbspaces
  defined 4-3
cron command 2-9
Current log, backup 2-3

D
Data
  filtering
    example, schema command file 16-17
  migration tools C-1
  recovery
    See also Restoring.
    defined 1-2
    usage 1-11
  verifying consistency 4-8
Database logging
  backups 4-12
  log backups 4-20
Database logging status
  ontape, changing 13-3
Database server
  blocking 7-3, 15-2, 15-3
  evaluating 1-12, 3-21
  imported restore 6-34
  migration C-1
  storage-manager communication 2-7
  unblocking 7-8, 15-4
  upgrading 6-34, C-1
  versions with ON-Bar 2-1
DATABASE statement
  declaring logging mode 16-12
  syntax 16-12
DBSERVERNAME configuration parameter 5-4, 6-36, 10-8
Dbslice
  backing up 1-2
  restoring 6-17
Dbspace
  backing up 1-2
  critical 4-3
  rename
    cold restore 14-3
    warm restore 14-3
  restore selected, ontape 14-2
Debug log, location of 2-14
Debugging
  ON-Bar, levels of 2-14
Default locale xiii
Deferring index rebuild
  logical restore 6-28
Dependencies, software xii
Index X-5
G
GL_DATE environment variable D-2
GL_DATETIME environment variable D-2
Global Language Support xiii, D-1
GLS.
  See Global Language Support.

H
Hardware resources, evaluating 1-12
HDR.
  See also High-Availability Data Replication.
  external backup and restore 7-19
Help xxvi
High-Availability Data Replication
  imported restore 6-34
  initializing 6-34, 6-38
  initializing with external restore 7-19, 15-7

I
I/O
  simultaneous backup and restore 14-17
    environment variables 14-17
    example 14-17
    secondary host 14-17
IBM Informix Server Administrator
  backing up data 4-13
  restoring data 6-15
IBM Informix Storage Manager
  backup requests 4-13
  configuring 3-2
  devices supported 3-20
  ISM catalog 4-4, 4-13
  overview 2-9
  requirements 3-19
  sm_versions file 3-5
  upgrading C-1
  Version 2.2 support xii
  volume pool names 3-9
Imported restore
  ixbar.51 6-36
  ixbar.52 6-36
  performing 6-36
  set up 6-35
Importing
  restore
    defined 6-5
    initializing HDR 6-38
Importing a restore 6-35
Incremental backup
  defined 1-12
  example 4-14
  level 1 4-6
  level 2 4-7
Industry standards, compliance with xxix
Informix Dynamic Server documentation set xxvi
Informix-Admin group 4-2
INFORMIXDIR/bin directory xiv
INFORMIXSQLHOSTS environment variable 10-8
Initializing
  High-Availability Data Replication with ON-Bar 6-37
INSERT statement
  physical-only restores 16-14
  syntax, examples 16-13
Installation Guides xxiii
ISA.
  See IBM Informix Server Administrator.
ISM catalog
  backing up 4-13
  directory path 4-4
ism_catalog command 4-13, 6-8
ism_chk.pl command 6-6
ISM_DATA_POOL configuration parameter 3-9, 3-12, 3-15, 9-20
ISM_LOG_POOL configuration parameter 3-9, 3-12, 3-15, 9-20
ism_startup command 3-6
ism_watch command 4-13, 6-8
ISM.
  See IBM Informix Storage Manager.
ISMData volume pool 3-8, 3-9
ISMLogs volume pool 3-8, 3-9
ISO 8859-1 code set xiii
ixbar.servernum file.
  See Emergency boot file.

J
Jukebox 3-19

K
Keywords
  in syntax diagrams xxi

L
Large files, backup
  ON-Bar 4-3
Level-0 archive, open transactions during 16-6
Level-0 backup 1-2
Level-1 backup 1-2, 4-6
Level-2 backup 1-2, 4-7
Light appends 6-40
Locale
  defined xiii
  using GLS with ON-Bar D-1
LOG_BACKUP_MODE configuration parameter 3-15
log_full.sh script 4-21
Logging activity, evaluating 1-13, 3-21
Logging mode, declaring in DATABASE statement 16-12
Index X-7
Mirroring configuration
  backup state 14-8
  setting 14-8
Mixbar.hostname.servernum file 2-13
Mixed restore
  defined 1-5, 1-6
  ON-Bar 6-4
  ontape 14-4
  point-in-time 6-25
  restoring data 6-21
  strategies for using 6-22
Moving data
  See Migrating data.
MPP system 3-21
Multiple tables, restoring 16-18

O
OFF_RECVRY_THREADS configuration parameter xiv
Offline storage spaces, restoring 6-16
ON_RECVRY_THREADS configuration parameter 6-19
ON-Bar
  -O option
    backup 4-10, C-2
    restore 4-18, 6-14, 6-26
    whole-system restore 6-11
  See also Configuration parameter.
  activity log
    See Activity log.
  archecker
    setting configuration parameters 9-2
  backup and restore time 1-12
  backup sequence
    Dynamic Server 4-29
    Extended Parallel Server 4-29
  cold restore sequence 6-15, 6-50
  compared to ontape 1-9, 6-48
  components on
    Dynamic Server 2-4
    Extended Parallel Server 2-4
  configuration parameters 3-11, 3-15, 9-5, 9-22
  database servers supported 2-1, 2-2
  debugging
    specifying the level of 2-14
  migrating from ontape C-2
  mixed restore 6-4
  monitoring restores
    onstat -d 6-7
  operations supported 1-9
  parallel restore 6-4
  planning recovery strategy
    creating backup plan 1-9
    data loss 1-11
    data usage 1-10
    failure severity 1-11
  pre-recovery checklist 6-6
  progress feedback 2-14
  renaming chunks 6-5
  restore
    -e option 6-9
    -f option 6-9
    -i option 6-9
    -I option 6-10
    -n option 6-10
    -O option 6-11
    -q option 6-11
    -r option 6-9
    -t option 6-11
    -w option 6-12
    examples 6-15
    processes 6-46
  starting and stopping sessions 8-11
  types of restores 6-2
  usage messages 11-2, 11-6
  warm restore sequence 6-46, 6-49
  where to find information 2-2
  whole-system restore 6-4
  XBSA interface 2-11
ON-Bar activity log.
  See Activity log.
ON-Bar return codes 11-7
ON-Bar tables xi
  bar_action 10-2
  bar_instance 10-4
  bar_ixbar 10-7
  bar_object 10-8
  bar_server 2-12
  defined 10-10
  map xi, 10-9
onbar script
  defined 8-2
  usage and examples 10-2
onbar_d utility
  See also onbar-driver.
  purpose xi
onbar_m utility
  defined 2-9
  ON-Bar diagram 2-8
  purpose 4-30
onbar_w utility
  default setting 8-12
  defined 2-4, 2-9, 8-12
  ON-Bar diagram 2-8
  purpose 4-30
  starting onbar-workers 8-5
Index X-9
Physical backup 4-6
  defined 4-10
  example 2-3, 4-18
Physical restore
  See also Restoring.
  defined 2-3
  ON-Bar, example 6-18
  ON-Bar, procedure 14-2
  ontape, cold restore 1-7
  types xi
Physical schema, backing up 4-11
Planning a backup system 1-12
pload utility 4-19
Point-in-log restore 2-3, 6-26
Point-in-time
  cold restore 6-24
    stages 6-25
  mixed restore 6-25
Point-in-time restore
  -r option 6-26
  -t option 6-26
  creating timelines 6-26
  defined 6-4
  example 2-3, 6-24, D-2
  expiring bad backup 6-26
  restore from older backup 6-26
  specifying timelines 6-26
  warm restore 6-26
Printed manuals xxvi
Processes, ON-Bar 4-29, 4-30
Progress, backup or restore 2-14

R
Raw chunks
  backing up 4-8
  restoring 6-27
Raw table
  backing up 4-19
    ontape 13-10
  restoring 6-40, 14-12
Recovering data
  See Restoring data.
Recovery
  tables or fragments added or created during 16-6
Recovery strategy planning
  creating backup plan 1-11
  data loss 1-10
  data usage 1-11
  failure severity 1-10
Recovery system
  comparing ON-Bar and ontape 1-1, 1-9
  defined 1-2
  invalid tools 6-18
Recreating
  chunk files 6-27
Release Notes xxiv
Remote device, ontape
  interrupt key 13-15
  syntax to specify 12-4
  tape size 12-5
Rename chunks restore 2-3
  defined 6-5, 14-3
  ontape syntax 14-13
  overview with ON-Bar 6-30
  overview with ontape 14-13
Replaying log records 1-4, 6-16
Restartable restore
  example 2-3, 6-41
  overview 6-5
  using 6-41
RESTARTABLE_RESTORE configuration parameter 3-12, 6-41, 9-22
Restore
  bringing the database back online 14-11
  considerations 14-13
  dropped storage space 6-28
  failed restore 6-44
    resolving 6-44
    scenarios 6-45
  from an older backup 6-26
  logical 16-5
    manually controlling 16-8
  mounting tapes 14-9
  multiple storage managers 16-9
  new-chunk requirements 6-31
  nonlogging databases and tables 6-39
  physical 16-4
  point-in-time
    cold 6-24
  rename chunk restore with cold restore 6-5
  restarting 6-42
  standard input 14-16
    example 14-16
RESTORE statement
  syntax 16-14
Restore, logical
  See Logical restore.
Restoring
  ON-Bar
    -O option 2-3, 6-14, 6-21, 6-26
    cold, example 6-24
    cold, setting mode 6-19
    cooked chunks 6-27
    dbslices 6-9
    external 2-4, 7-14, 7-19
    imported 6-5
    logical, example 6-16
    mixed 6-21
    monitoring progress 2-14
    offline storage spaces 6-9
Index X-11
Stager, manually controlling logical restore using 10-8, 16-5
Standard backup
  See Backup.
Standard input
  restoring 14-16
    example 14-16
    single-user mode 14-16
Standard table
  backing up xi
  restoring 4-19
start_worker.sh script 8-5
Static table
  backing up 6-40
  restoring 4-19
Storage devices
  backups 5-2, 6-16, 6-28
  continuous log backups xvi, 4-21
  ISM support 4-13
  ISM volume pools 3-20, 4-13
  requirements 3-19
Storage manager xi
  communication with ON-Bar 2-7, 3-20
  configurations 2-7, 3-17
  migration 3-15, C-1
  onbar -P
    viewing backed-up logical logs 4-22
  requirements 4-5, C-1
  role in ON-Bar system 3-19
  sm_versions file 3-2
Storage space
  backing up a list 4-8
  backing up all 4-8, 4-14
  defined 2-4
  offline, restoring 4-14
  physical-only backup 4-18
  restartable restore 6-41
  restoring 1-7, 6-16, 6-41
  skipping during backup 6-17
  when to back up 4-16, 13-13
stores_demo database xiii
superstores_demo database xiii
Symbolic links
  ontape
    specify tape devices 12-3
Syntax diagram
  backup verification 5-3
  external backup
    IDS 7-4
    XPS 7-3
  external restore 7-8, 7-14, 15-3
  logical-log backup 4-2, 4-8
  onbar_w 8-12
  onsmsync 8-7
  restore 6-12, 6-31
  starting and stopping sessions 8-11
  storage-space backup 4-2
  summary 4-2
Syntax diagrams
  conventions 4-10
  conventions for xviii
  keywords 4-2
  keywords in xxi
  reading in a screen reader E-1
  variables in xxi
Syntax segment xx
sysbuobject table 10-10
sysbuobjses table 10-10
sysbusession table 10-10, 10-11
sysbusm table 10-10, 10-12
sysbusmdbspace table 10-11, 10-12
sysbusmlog table 10-11, 10-12
sysbusmworker table 10-11, 10-13
sysbuworker table 10-11, 10-13
System requirements
  database 10-13
  software 10-14
sysutils database
  error messages 9-23
  regenerating 8-9

T
Table
  altering
    raw 6-40
    standard 9-23
  altering or adding during logical restore 16-6
Table type
  backing up 4-12, 4-19
  operational 4-12, 6-40
  raw 4-19, 6-40
  restoring 6-39, 6-40
  scratch 4-19, 6-40
  standard 4-19, 6-40
  static 4-19, 6-40
  temp 4-19, 6-40
Tables and fragments
  not processed by applier during logical restore 16-6
Tables or fragments
  added or created during logical recovery 16-6
Tables, restoring to user-specified point in time
  See Logical restore.
Tape autochangers 3-19
Tape block size
  ON-Bar 16-5
  ontape 16-20
Index X-13
Warm restore (continued)
  ontape (continued)
    part of a mixed restore 14-3
    performing 14-7
    steps to perform 14-4
  overview 14-12
Whole-system backup
  defined 4-5
  specifying 4-11, 4-29
Whole-system restore
  -O option 6-14, 6-21
  defined 6-4
  example 4-11, 6-20
  ON-Bar 6-4
  restartable restore 6-20
  syntax 6-6, 6-21
Windows
  copying files 7-6, 7-7
  Informix-Admin group 4-2

X
XBSA interface
  described 2-11
XBSA shared library
  default location 3-9, 4-1
  specifying location 2-11, 3-9
xcfg file 3-9
Printed in USA
G251-2269-00
Spine information:
IBM DB2 IBM Informix Version 10.0/8.5 IBM Informix Backup and Restore Guide