CDP Administration Guide
9/10/10
Contents
Introduction
   Concepts and components
   Hardware/software requirements
   Web Setup
Getting Started
   Data Protection in Windows
      Install DiskSafe for Windows
      Uninstall DiskSafe for Windows
      Silent Installation
      License management
      Install and configure FalconStor Snapshot Agents
   Prepare host connections
      Enable iSCSI target mode
      Enable Fibre Channel target mode
      Set QLogic ports to target mode
   Prepare physical storage for use with CDP
      Present storage to the CDP appliance
      Rescan adapters (initiators) for the assigned storage
      Prepare physical disks for virtualization
   Create storage pools
      Set Storage Pool properties
   Virtualize storage
      Create a virtual device SAN Resource
   Prepare your client machines
      Pre-installation
      Windows client installation
      Linux client installation
   Prepare the AIX host machine
      Install AIX FalconStor Disk ODM Fileset
      Install the AIX SAN Client and Filesystem Agent
   Prepare the CDP and HP-UX environments
      Install the SAN Client
      Install the HP-UX file system Snapshot Agent
Data Protection
   Data Protection in a Windows environment
      Use DiskSafe
      Protect a disk or partition with DiskSafe
      Protect a group of disks
      Suspend or resume protection
   Data protection in a Linux environment
      Installing DiskSafe for Linux
Data Management
   Verify snapshot creation and status in DiskSafe
      Browse the snapshot list
      Check DiskSafe Events and the Windows event log
      Check Microsoft Exchange snapshot status
      Check Microsoft SQL Server snapshot status
      Check Oracle snapshot status
      Check Lotus Notes/Domino snapshot status
   Reports
      CCM Reports
      CDP Reports
      CDP Event Log
   CDP Reports
      Global replication reports
Data Recovery
   Restore data using DiskSafe for Windows
      Restore a file
Concepts and components
CDP Appliance
This is a dedicated storage server. The storage appliance is attached to the physical SCSI and/or Fibre Channel storage device. The job of the appliance is to communicate data requests between the clients and the SAN Resources via Fibre Channel or iSCSI.
Central Client Manager (CCM)
FalconStor CCM allows you to monitor and manage application server activity by displaying status and resource statistics on a centralized console for FalconStor CDP clients.
DiskSafe™
This host-side backup software is installed on each Windows machine to capture every write and journal it on the CDP Appliance using the iSCSI protocol.
DynaPath®
A load balancing/path redundancy application that ensures constant data availability and peak performance across the SAN by performing Fibre Channel and iSCSI HBA load-balancing, transparent failover, and fail-back services. DynaPath creates parallel active storage paths that transparently reroute server traffic without interruption in the event of a storage network problem.
FalconStor Management Console
The administration tool for the CDP storage network. It is a Java application that can be used on a variety of platforms and allows administrators to create, configure, manage, and monitor the storage resources and services on the storage network.
FileSafe™
This software application protects your files by backing up files and folders to another location.
Logical Resource
Logical resources consist of sets of storage blocks from one or more physical hard disk drives. This allows the creation of virtual devices that contain a portion of a larger physical disk device or an aggregation of multiple physical disk devices.
CDP has the ability to aggregate multiple physical storage devices (such as JBODs and RAIDs) of various interface protocols (such as SCSI or Fibre Channel) into logical storage pools. From these storage pools, virtual devices can be created and provisioned to application servers and end users. This is called storage virtualization, which offers the added capability of disk expansion: additional storage blocks can be appended to the end of existing virtual devices without erasing the data on the disk.
Logical resources are all of the logical/virtual resources defined on the storage appliance, including SAN Resources (virtual drives and service-enabled devices) and Snapshot Groups.
Near-line mirror
Allows production data to be synchronously mirrored to a protected disk that resides on a second CDP server. With near-line mirroring, the primary disk is the disk used to read/write data for a SAN Client, and the mirror is a copy of the primary. Each time data is written to the primary disk, the same data is simultaneously written to the mirror disk. TimeMark or CDP can be configured on the near-line server to create recovery points. The near-line mirror can also be replicated for disaster recovery protection.
NIC Port Bonding
Allows you to use multiple network ports in parallel to increase the link speed beyond the limit of a single port and improve redundancy for higher availability. The appliance must have at least two NIC ports to create one bond group and at least four NIC ports to create two bond groups.
If you choose "1 Bond Group, all ports" as the bond type, all discovered NIC ports will be combined into a single group. If you choose "2 Bond Groups, half of the ports in each" as the bond type, each group will contain half of the discovered NIC ports.
Round-Robin mode (mode 0) transmits data in a sequential, round-robin order and is the default mode. For a more dedicated mode where the NIC ports work in concert with switches using the 802.1AX standard for traffic optimization, select Link Aggregation mode.
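These two modes correspond to standard Linux bonding driver modes. As a purely illustrative sketch (this is not the appliance's own setup mechanism, which is Web Setup or the console; the file and interface names are assumptions for a generic RHEL 5-era Linux host):
# Hypothetical /etc/modprobe.conf entries on a generic Linux host
alias bond0 bonding
options bond0 mode=0 miimon=100
# For Link Aggregation (802.1AX/802.3ad), use mode=802.3ad instead:
# options bond0 mode=802.3ad miimon=100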
Physical Resource
These are the actual physical LUNs as seen by the RAID controller/storage HBA within the CDP appliance and used to create Logical Resources. Clients do not gain access to physical resources; they only have access to Logical Resources. This means that an administrator must reserve Physical Resources for use as either virtual devices or service-enabled devices before creating Logical Resources. Storage pools can be used to simplify Physical Resource allocation/management before creating Logical SAN Resources.
SAN Clients
These are the actual file and application servers used to communicate with the CDP appliance. FalconStor calls them SAN Clients because they utilize the storage resources via the CDP appliance. The storage resources appear as locally attached devices to the SAN Clients' operating systems (Windows, Linux, Solaris, etc.) even though the SCSI devices are actually located at the CDP appliance.
SAN Resources
SAN Resources provide storage for file and application servers (SAN Clients). When a SAN Resource is assigned to a SAN Client, a virtual adapter is defined for that client. The SAN Resource is assigned a virtual SCSI ID on the virtual adapter. This mimics the configuration of actual SCSI storage devices and adapters, allowing the operating system and applications to treat them like any other SCSI device. A SAN Resource can be a virtual device or a service-enabled device.
Service-enabled devices
Hard drives with existing data that are protected by CDP.
Snapshot
The concept of performing a snapshot is similar to taking a picture. When we take a photograph, we are capturing a moment in time and transferring it to a photographic medium. Similarly, a snapshot of an entire device allows us to capture data at any given moment in time and move it to either tape or another storage medium, while allowing data to be written to the device. The basic function of the snapshot engine is to allow point-in-time, "frozen" images of data volumes (virtual drives) to be created using minimal storage space. By combining the snapshot storage with the source volume, the data can be recreated exactly as it appeared at the time the snapshot was taken. For added protection, a snapshot resource can also be mirrored through CDP. You can create a snapshot resource for a single SAN Resource or you can use the batch feature to create snapshot resources for multiple SAN Resources. Refer to the CDP Reference Guide for additional information.
Snapshot Agents
Snapshot agents collaborate with NTFS volumes and applications in order to guarantee that snapshots are taken with full application-level integrity for the fastest possible recovery. A full suite of Snapshot Agents is available so that each snapshot can later be used without lengthy chkdsk and database/email consistency repairs. Snapshot Agents are available for Oracle®, Microsoft® Exchange, Lotus Notes®/Domino®, Microsoft® SQL Server, IBM® DB2® Universal Database, Sybase®, and many other applications.
Storage pools
Groups of one or more physical devices. Creating a storage pool enables you to provide all of the space needed by your clients in a very efficient manner. You can create and manage storage pools in a variety of ways, including tiers, device categories, and types.
For example, you can classify your storage by tier (low-cost, high-performance, high-redundancy, etc.) and assign it based on these classifications. Using this example, you may want to have your business-critical applications use storage from the high-redundancy or high-performance pools while having your less critical applications use storage from other pools.
Thin provisioning
This feature allows you to use your storage space more efficiently by allocating a minimum amount of space for each virtual resource. Then, when usage thresholds are met, additional storage is allocated as necessary.
Hardware/software requirements
Linux Server
   Red Hat Enterprise Linux 5 Update 3, kernel 2.6.18-128.el5 (64-bit)
   CentOS Linux version 5.3, kernel 2.6.18-128.el5 (64-bit)
   Oracle Enterprise Linux 5.3, kernel 2.6.18-128.el5 (64-bit)
Supported HBAs
   • CDP Appliance: QLogic FC HBAs with a minimum of two available ports
   • Linux Server: FC HBA supported by Linux
   The following target mode HBAs are supported:
   • QLogic 23xx HBA
   • QLogic 24xx HBA
   • QLogic 256x HBA
   Consult the FalconStor Certification Matrix for a complete list of supported HBAs.
CPU
   Dual-core AMD Opteron and Intel Xeon EM64T are supported.
Network Interface Card
   Gigabit Ethernet network cards that are supported by Linux
FalconStor Management Console
   A virtual or physical machine that supports the Java 2 Runtime Environment (JRE).
Logical Volume Manager (LVM) or DiskSafe
   The appropriate LVM for your operating system. For example:
   • Solaris Volume Manager (SVM)
   • AIX Logical Volume Manager
Web Setup
Once you have physically connected the appliance, powered it on, and completed the following Web Setup installation and server setup steps, you are ready to begin using CDP.
• Add Storage Capacity - for extra storage capacity, you can connect
additional storage via Fibre Channel or iSCSI
• Disable Web Services - for businesses with policies requiring web services
to be disabled.
If you encounter any problems while configuring your appliance, contact FalconStor EZStart technical support via the web at: www.falconstor.com/supportrequest. (Additional contact methods are available in each step by clicking the EZStart Technical Support link.)
To determine the appropriate snapshot agents, refer to the Snapshot Agent User
Guide. For most desktops and laptops, you would install the Snapshot Agent for File
Systems. For application servers, you would install the appropriate agent for that
application, such as the Snapshot Agent for Microsoft Exchange or the Snapshot
Agent for Oracle.
Note: You can only take snapshots if you use a remote mirror, and only if TimeMark
or the Snapshot Service is licensed on the storage server.
The DiskSafe for Windows installation process intelligently detects the client host
operating system and installs the appropriate installation package. You will need to
install DiskSafe on each host that you want to protect.
DiskSafe can be installed from the CDP Server Web Setup feature or through an administrative share that contains the management and client software.
To install DiskSafe:
1. Log on as an administrator and use the Web Setup utility to install DiskSafe. Use
a web browser to connect to the CDP server via http using its primary IP
address.
• The default user name is fsadmin
• The default password is IPStor101
If you are not using the Web Setup utility, launch the installation media and click
Install Products --> DiskSafe. From CDP, click Install Products --> Install Host-
Based Applications --> Install DiskSafe for Windows.
Note: To be able to remotely boot, you must install DiskSafe on the first
system partition (that is, where Windows is installed).
2. When you have finished installing DiskSafe, you will be prompted to restart your
computer. You must restart your computer before running DiskSafe.
Once you have restarted the machine and launched DiskSafe, you will be
prompted to enter your license key code. For all operating systems, a DiskSafe
license keycode must be provided within 5 days of installation. If a license
keycode is entered but the license is not activated (registered with FalconStor)
immediately, the product can be used for 30 days (the grace period).
Note: If you do not enter a keycode, you will only have five days to use
DiskSafe.
If you must uninstall DiskSafe for any reason, you can do so by navigating to
Programs --> FalconStor --> DiskSafe Uninstall. This will remove DiskSafe along
with all associated applications. You can also remove DiskSafe from the Control
Panel --> Add/Remove programs, but this only removes DiskSafe. SDM will remain
installed. You will need 20 MB of free disk space to uninstall DiskSafe.
Silent Installation
To install the DiskSafe installation package in silent mode, follow the steps below:
The system automatically restarts after DiskSafe installation. If you do not want to
restart after DiskSafe installation, use the following command:
setup.exe /s /v"/qn REBOOT=suppress /log c:\dsInstall.log"
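If the default automatic restart is acceptable, the REBOOT property can simply be omitted; a minimal sketch, assuming the package accepts the same InstallShield-style switches shown above (the log path is illustrative):
setup.exe /s /v"/qn /log c:\dsInstall.log"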
In a Cluster environment the Add Storage Server for Cluster Protection message
displays during DiskSafe installation prompting you to add the CDP server
information for cluster protection policies.
License management
When you install DiskSafe, you are prompted to license the product. If you do not
enter a license, you are only given five days to use DiskSafe. If you subsequently
need to add a license or change the license—for example, to upgrade from a trial
license to a standard license—you can do so through the License Manager.
Changing a license
Changing the trial license to a standard license does not remove protection, but it does temporarily stop protection until a new license is added.
To change the license:
2. Click Enter a new key code, enter the new key code, and then click OK.
Activating a license
Your DiskSafe license must be activated (registered with FalconStor). Once activated, you can select License Manager and see the message "This product is licensed."
If your computer has an Internet connection, the license is activated as soon as you
add it. However, if your Internet connection is temporarily down or if your computer
has no Internet connection, your license will not be activated. You must activate your
license within 30 days.
If your Internet connection is temporarily down, your license will be activated
automatically the next time DiskSafe is started, assuming you have an Internet
connection then. Or, you can add your license through the SAN Disk Manager.
If your computer has no Internet connection, you must perform offline activation. To
do this:
5. When you receive an e-mail response, save the returned signature file.
If you have SAN Clients (application hosts) that need to access the CDP appliance
via iSCSI (IP based SAN), you will need to enable the iSCSI target mode.
This step can be done in the console’s configuration wizard. To do this afterward,
right-click on your storage server and select Options --> Enable iSCSI.
As soon as iSCSI is enabled, a new SAN client called Everyone_iSCSI is
automatically created on your storage server. This is a special SAN client that does
not correspond to any specific client machine. Using this client, you can create
iSCSI targets that are accessible by any iSCSI client that connects to the storage
server.
Before an iSCSI client can be served by a CDP appliance, the two entities need to
mutually recognize each other. You need to register your iSCSI client as an initiator
to your storage server to enable the storage server to see the initiator. To do this,
you will need to launch the iSCSI initiator on the client machine and identify your
storage server as the target server.
Refer to the documentation provided by your iSCSI initiator for detailed instructions
about how to do this.
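For example, if your client uses the Linux open-iscsi initiator, discovery and login typically look like the following; the server IP and target IQN are placeholders, and your initiator's own documentation remains the authoritative reference:
iscsiadm -m discovery -t sendtargets -p <CDP-server-IP>
iscsiadm -m node -T <target-iqn> -p <CDP-server-IP> --login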
If you are using external storage and Fibre Channel protocol, you will need to enable
Fibre Channel target mode. This step can be done in the console’s configuration
wizard. To do this afterward, right-click the CDP server that has the FC HBAs and
select Options --> Enable FC Target Mode.
An Everyone_FC client will be created under SAN Clients. This is a generic client
that you can assign to all (or some) of your SAN Resources. It allows any WWPN
not already associated with a Fibre Channel client to have read/write non-exclusive
access to any SAN Resources assigned to Everyone.
By default, all QLogic point-to-point ports are set to initiator mode, which means they
will initiate requests rather than receive them. Determine which ports you want to
use in target mode and set them to become target ports so that they can receive
requests from your Fibre Channel Clients.
It is recommended that you have at least four Fibre Channel ports per server in
initiator mode, one of which is attached to your storage device.
You need to switch one of those initiators into target mode so your clients will be
able to see the CDP Server. You will then need to select the equivalent adapter on
the secondary server and switch it to target mode.
Note: If a port is in initiator mode and has devices attached to it, that port cannot
be set for target mode.
To set a port:
Follow your vendor-specific instructions to install storage and present the disk to
your CDP appliance. Typically, you have to create an entity to represent the CDP
appliance within your storage unit and you need to associate the CDP appliance’s
initiator name with the storage unit.
You will also need to zone the CDP appliance with the target of the storage unit.
Once this is done, storage can be presented to the CDP appliance for use.
If you only want to scan a specific adapter, right-click on that adapter and select
Rescan. Make sure to select Scan for New Devices.
3. If you want to set up load balancing, you can use the NIC Port Bonding feature
via Web Setup or via the FalconStor Management Console as described in the
CDP Reference Guide.
You can specify the purpose of each storage pool as well as assign it to specific
users or groups. The assigned users can create virtual devices and allocate space
from the storage pools assigned to them.
For proper CDP Appliance operation, you need to have at least one storage pool
with the following roles selected: Storage, Virtual Headers, Snapshot, Configuration
Repository (if you intend to set up an HA pair), Journal (if you intend to use CDP
Journal), and CDR (if you intend to use continuous mode replication).
You also need to ensure that the Security tab is used to enable at least one IPStor
User to have access rights to this pool. This is the IPStor User account with which
you will authenticate during DiskSafe setup operations.
If you intend to use HP-UX or AIX Auto-LVM scripts or Windows clients, you must
enable the Protection user for the storage pool. As a best practice, you should use a
separate Storage Pool for the various roles. For example, one for “Storage”, one for
“TimeMark” (Snapshot), one for “CDP”, etc.
Virtualize storage
Once you have prepared your storage, you are ready to create Logical Resources to
be used by your CDP clients. This configuration can be done entirely from the
console.
3. Select the storage pool or physical device(s) from which to create this SAN
Resource.
You can create a SAN Resource from any single storage pool. Once the
resource is created from a storage pool, additional space (automatic or manual
expansion) can only be allocated from the same storage pool.
You can select List All to see all storage pools, if needed.
4. Depending upon the resource type, select Use Thin Provisioning for more
efficient space allocation.
You will have to allocate a minimum amount of space for the virtual resource.
When usage thresholds are met, additional storage is allocated as necessary.
You will also have to specify the fully allocated size of the resource to be
created. The default initial allocation is 1GB.
From the client side, it appears that the full disk size is available.
6. (Express and Custom only) Enter a name for the new SAN Resource.
The name is not case sensitive.
7. Confirm that all information is correct and then click Finish to create the virtual
device SAN Resource.
Pre-installation
CDP provides client software for many platforms and protocols. Please check the
certification matrix on the FalconStor website for the versions and the patch levels (if
applicable) that are currently supported.
Notes:
1. Make sure that the CDP appliances that the client will use are all up and running.
2. To install the client software, log into your system as the root user.
3. Mount the installation CD to an available or newly created directory and copy the
files from the CD to a temporary directory on the machine.
The software packages are located in the /client/linux/ directory off the CD.
4. Type the following command to install the client software:
rpm -i <full path>/ipstorclient-<version>-<build>.i386.rpm
For example:
rpm -i /mnt/cdrom/Client/Linux/ipstorclient-4.50-0.954.i386.rpm
5. Log into the client machine as the root user again so that the changes in the user
profile will take effect.
6. Add the CDP Servers that this client will connect to for storage resources by
typing the following command from /usr/local/ipstorclient/bin:
./ipstorclient monitor
7. Select Add from the menu and enter the server name, login ID and password.
After this server is added, you can continue adding additional servers.
8. To start the Linux client, type the following command from the /usr/local/
ipstorclient/bin directory:
./ipstorclient start
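Putting these steps together, an end-to-end installation session might look like the following sketch; the CD mount point, package version, and build number are examples only:
mount /dev/cdrom /mnt/cdrom
rpm -i /mnt/cdrom/Client/Linux/ipstorclient-4.50-0.954.i386.rpm
rpm -qa | grep ipstorclient        # verify the package is installed
cd /usr/local/ipstorclient/bin
./ipstorclient monitor             # add the CDP server(s), then exit the menu
./ipstorclient start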
1. Ensure that a FalconStor storage appliance configured for CDP is available. This
appliance may be factory-shipped from FalconStor or can be built using the
EZStart USB key on a supported hardware appliance, such as HP DL 38x/58x,
IBM x365x, or Dell 29xx family servers with supported QLogic FC HBA ports and
SATA/SAS/FC storage.
3. Use the console to create a user account with the username "Protection" and the
type "IPStor User". Set the password to "Protection".
These credentials will be used by the AIX scripts for CDP operations.
4. Create a storage pool with the name "Protection" and add a disk to the pool with
sufficient size. For instructions on creating a storage pool, refer to the Create
storage pools section.
This storage pool will be used by the script as a mirror for the Volume Group on
AIX.
5. Select the Security tab and check the user named "Protection".
3. Confirm that the package installation was successful by listing system installed
packages.
lslpp -l | grep ipstordisk
4. If system configuration involves HACMP Cluster, repeat the process above for
other cluster nodes.
3. Confirm that the package installation was successful by listing system installed
packages.
lslpp -l | grep IPStorclient
5. Use the "cd" command to change the directory to the temporary directory where
the package was downloaded.
6. Install the AIX filesystem Snapshot Agent with the following command:
installp -aXd filesystemagent-1.00-1136.rte all
7. Confirm the package installation was successful by listing the system installed
packages:
lslpp -l | grep jfsagt
a. If this is the first time running "ipstorclient monitor", you will be prompted to
select Fibre Channel (FC) or iSCSI protocol.
b. Enter y to enable Fibre Channel protocol support if you are using FC or enter
y to enable iSCSI protocol support if you are using iSCSI.
g. Enter "n" when it asks if you would like to add more servers.
9. If system configuration involves HACMP Cluster, repeat the process above for
other cluster nodes.
1. Ensure that a FalconStor IPStor appliance configured for CDP is available. This
appliance may be factory-shipped from FalconStor or can be built using the
EZStart USB key on a supported hardware appliance, such as HP DL 38x/58x,
IBM x365x, or Dell 29xx family servers with supported QLogic FC HBA ports and
SATA/SAS/FC storage.
3. Use the console to create a user account with the username "Protection" and the
type "IPStor User". Set the password to "Protection".
These credentials will be used by the HP-UX scripts for CDP operations.
4. Create a storage pool with the name "Protection" and add a disk to the pool with
sufficient size. For instructions on creating a storage pool, refer to the Create
storage pools section.
Notes:
• Physical devices must be prepared (virtualized, service enabled, or
reserved for a direct device) before they can be added into a storage
pool.
• For best results, create multiple storage pools with different roles (i.e.
mirror storage, snapshot storage, and CDP journaling storage).
• Each storage pool can only contain the same type of physical devices; a storage pool cannot contain mixed types. Therefore, a storage pool can contain only virtualized drives, only service-enabled drives, or only direct devices.
• Physical devices that have been allocated for a logical resource can still be added to a storage pool.
5. Select the Security tab and check the user named "Protection".
6. Edit the IPStor fshba.conf parameters for improved support with HP-UX 11.23.
• vi $ISHOME/etc/fshba.conf
• Add "reply_invalid_device=63" to the bottom of the fshba.conf file.
• Save the file and restart IPStor services by typing: ipstor restart all
Warning: The ipstor restart all command includes restarting fshba which will
break the FC connection and take the storage server offline.
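The same edit can also be made non-interactively; a minimal sketch, assuming $ISHOME points at the IPStor installation root as in the steps above:
echo "reply_invalid_device=63" >> $ISHOME/etc/fshba.conf
ipstor restart all    # takes the storage server offline; see the warning above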
2. Navigate (use the "cd" command) to the temporary directory where the package
was downloaded.
4. Confirm that the package installation was successful by listing system installed
packages:
swlist | grep VxFSagent
FalconStor® CDP enables you to protect business-critical data and provide rapid data recovery in the event of a system crash or disk failure. DiskSafe™, a host-based replication software agent that delivers block-level data protection for a broad base of software and hardware platforms, is available for both Windows and Linux environments. FileSafe™ is available for file-level data protection in Windows and Linux environments.
This chapter contains instructions for protecting your data using DiskSafe. For
additional information regarding file-based protection, refer to the FileSafe™ User
Guide.
In addition, there are Logical Volume Manager (LVM) scripts available for Unix platforms. Refer to the following sections for more information regarding data protection in your environment:
• Data Protection in a Windows environment
• Data protection in a Linux environment
• Data Protection in Red Hat Linux
• Data Protection in Solaris
• Data Protection in SuSE Linux
• Data Protection in an AIX environment
Use DiskSafe
Once installed, you can access the DiskSafe application in three ways:
• Via the desktop, double-click the DiskSafe icon.
• Via the Start menu (Start --> Programs --> FalconStor --> DiskSafe)
• Via Computer Management (Start --> Settings --> Control Panel -->
Administrative Tools --> Computer Management --> Storage --> DiskSafe)
The DiskSafe application window is divided into two panes. The left pane contains a
navigation tree with nodes that you can click, expand, and collapse. When you click
a node in the navigation tree, the right pane displays associated information. For
example, when you click the Disks node in the navigation tree, the right pane
displays a list of all protected disks and partitions, including their name, size, mirror
mode, current activity, and status information.
Accessing the menus
The menus at the top of the application window provide access to several functions that are common to all Microsoft® Management Console-based applications, such as exiting the application. In Windows 2008, Vista, 2003, and XP, the common functions are available via the File, Action, View, Window, and Help menus.
Note: In Windows 2000, the common functions are available via the Console, Action, View, Window, and Help menus.
Functions that are specific to DiskSafe typically appear in the Action menu. The
Action menu is dynamic; the items that appear here change, depending on which
element of the user interface (UI) has focus. For example, when you click the Disks
node, the Action menu displays Protect. When you click the Events node, the Action
menu displays Set Filter.
You can also access DiskSafe functions by right-clicking the elements on the screen. For example, to protect a disk or partition, you can either click the Disks node and click Protect from the Action menu, or you can right-click the Disks node and click Protect from the pop-up menu. (All procedures in this guide describe how to perform the functions using the pop-up menus.)
Showing, hiding, or re-ordering columns
You can determine which columns display in the right pane. For example, when you click the Disks node, the right pane displays the Primary, Capacity, Mode, Current Activity, Status, and Mirror columns by default. You can add and remove columns by selecting View from the main menu. For example, in Windows 2008, Vista, 2003, and XP, if you don't want the Capacity column to display, you can remove it from the screen by right-clicking the Disks node. Then click View --> Add/Remove Columns, click Capacity in the Displayed columns list, and then click Remove and OK. In Windows 2000, you can click View --> Choose Columns.
To restore the Capacity column, in Windows 2008, Vista, 2003, and XP, you would click View --> Add/Remove Columns, click Capacity in the Available columns list, and then click Add. You can also restore the right pane to its default state by clicking Restore Defaults. In Windows 2000, you can click View --> Choose Columns, click Capacity in the Hidden columns list, and then click Add. You can also reset to your previously set state by clicking Reset.
In addition, you can change the order of the columns. For example, to move the Status column to the left of the Current Activity column, you would click Status in the Displayed columns list and then click Move Up. To move it back to the right of the Current Activity column, you would click Status and then click Move Down.
Sorting data
To quickly find the information that you want, you can click the column headings in the right pane to sort the information in that column alphanumerically. For example, when you click the Disks node, you can click the Capacity column heading to sort the listed disks by size, or you can click the Mode column heading to sort them by mirror mode (Continuous or Periodic).
Selecting items
In the right pane, most functions (such as viewing the properties of an item) can be performed on only one item at a time. You can select an item by clicking anywhere in the row. However, some functions (such as removing protection) can be performed on multiple items simultaneously.
To select multiple contiguous items, click the first item, press and hold down the Shift
key, and then click the last item. All items between the first and last item are
selected. To select multiple non-contiguous items, hold down the Ctrl key as you
click each item.
FalconStor DiskSafe provides an easy and efficient way to protect entire disks or selected partitions by capturing all system and data changes and journaling them on the CDP appliance without impacting application performance.
To protect your server:
1. From the Start menu, select Programs --> FalconStor --> DiskSafe.
2. Expand DiskSafe --> Protected Storage, right-click Disks, and then select
Protect.
The Disk Protection Wizard launches.
5. From the Mirror Storage Selection page, select the disk or partition where the
primary storage disk or partition is to be mirrored and click Next. To select your
new CDP appliance, click New Disk.
6. On the Allocate Disk page, select the registered CDP appliance and then click
the Options button next to Disk Size to enable Continuous Data Protection.
If this is the first time you are protecting a disk, you will need to add the new CDP
server first by clicking the Add Server button.
• Enter the name or IP address of the storage server in the Server name text
box. (If the storage server is in a Windows domain, select the Windows
Domain Authentication check box and type the domain name in the
Domain Name text box. If the storage server is not in a Windows domain,
clear the Windows Domain Authentication check box.)
• Enter a user name (ipstoruser) and password (IPStor101) for accessing
the server or domain.
• Select the communication protocol(s) to use (iSCSI and/or Fibre Channel).
• Click OK on the Add Server dialog box.
The Snapshot Advanced Settings dialog displays, allowing you to enable CDP, specify the percentage of the Snapshot resource size, and specify the size of the Journal resource.
10. Click OK on both screens to create the new mirror disk on the CDP appliance.
You should now see the new mirror disk in the Eligible mirror disks list. If you do
not, click Refresh.
11. Select the mirror mode and initial synchronization options and click Next.
Select Continuous mode to have the mirror updated simultaneously with the
local disk. There are four options to set for Continuous mode. You can leave the
default setting to balance performance and mirror synchronization or choose to
change the control parameters to stay in sync at the expense of performance or
vice versa. The options are as follows:
• Minimize performance impact to primary I/O - Select this option if you want to maintain performance even at the expense of breaking the mirror synchronization. The maximum number of mirror buffers will be set at 64 and the wait time when the maximum buffer is reached will be set at zero seconds.
• Optimize data mirror coverage - Select this option if you want to stay in sync even if there is an impact on performance. The maximum number of mirror buffers will be set at 8 and the wait time when the maximum buffer is reached will be set at ten seconds.
• Balance performance and coverage - The default setting. A balance is
maintained between performance and mirror synchronization. The
maximum number of mirror buffers will be set at one and the wait time
when the maximum buffer is reached will be set at ten seconds.
• Advanced custom settings - Select this option to change the default values. You can change the maximum number of mirror buffers as well as the wait time before the mirror is broken (break mirroring state) once the configured buffer maximum is exceeded.
Select Periodic mode to update the mirror at regularly scheduled intervals if you
have low network bandwidth on the CDP appliance.
Specify what data to copy during the initial synchronization by selecting or
clearing the Copy only sectors used by file system check box.
If your disk is formatted with a file system, select the Copy only sectors used by
file system option. Only the sectors used by the file system are copied to the
mirror. If you are using a database or other application that uses raw space on
the disk (without a file system), clear this option. If you clear this option, all
sectors on the entire disk are copied to the mirror.
Select the Optimize data copy check box to have DiskSafe scan both the local
disk and its mirror for changes in 4-KB blocks, and then copy the blocks to the
mirror. This uses minimal network bandwidth and speeds up synchronization.
Clear this check box to skip the local and mirror disk scan for changes and
simply copy all the data from the local disk to the mirror. This would be
appropriate if you have never used the selected mirror before, or if you used it
for another disk.
Note: This option is selected by default if you have selected a target disk that was mirrored before.
For each schedule, specify the date and time to start. You can also specify an
end date. Click the start date or end by field to display a calendar.
Scheduling options are described below.
• Click the Hourly radio button to synchronize the local disk and mirror every
specified number of hours and minutes. Enter the number of hours in the
first text box, and then specify the number of minutes in the second text
box.
• Click the Daily radio button to synchronize the local disk and mirror every
specified number of days.
• Click the Weekly radio button to synchronize the local disk and mirror every
specified number of weeks and then specify the day of the week the
synchronization is to occur.
• Click the Monthly radio button to synchronize the local disk and mirror every
specified number of months and specify the day of the month.
• Click the Advanced button from the Task Creation screen to further
customize your synchronization schedule.
For example, you can define and exclude holidays from the synchronization
schedule.
Click Test to determine the optimum throughput setting for the disk where the mirror resides. It is recommended that you do not set this value higher than the value displayed by the test, to ensure that DiskSafe triggers a synchronization pause when needed.
For example, you might set the acceptable throughput to 10240 KB/s,
the deterioration threshold to 75%, and the interval to resume
synchronization to 10 minutes. In this case, if the throughput to the
mirror falls to 7680 KB/s (10240 x .75), DiskSafe will temporarily pause
synchronization and then resume again after 10 minutes.
• Deterioration threshold to suspend I/O - This option allows you to
select the percentage of the acceptable throughput at which
synchronization will pause.
• Interval to try resuming I/O - This option allows you to select the
interval to try resuming synchronization when using periodic mode.
Choose from 10 seconds to one hour.
• Encrypt mirror disk - Allows you to specify an encryption key to protect
data against unauthorized access of the mirror disk. Encryption must be
enabled and added while you are protecting the mirror disk; you cannot
add encryption after the disk has been protected. In addition, you cannot
remove encryption unless you remove the protection.
16. Specify the snapshot options in the Advanced Snapshot Options screen or click
Next to accept the defaults. This screen only displays if you are mirroring to a
remote disk and TimeMark or Snapshot is licensed on the storage server.
Note: If you have snapshot agents but do not select this option, your agents
will not be invoked, and there might be problems with the integrity of your
snapshots, particularly for hosts running very active databases.
• For hourly snapshots, define the minute of the hour to keep (0-59).
• For daily snapshots, define the hour of the day to keep (0-23).
• For weekly snapshots, define the day of the week to keep (Mon - Sun).
• For monthly snapshots, define the day of the month to keep (1-31).
The snapshot consolidation feature allows you to save a pre-determined number of snapshots and delete the rest, independently of whether they were scheduled or manually taken. The snapshots that are preserved are the result of the pruning process. This method allows you to keep only meaningful snapshots.
Every time a snapshot is created, DiskSafe checks to determine which
snapshots to purge. Outdated snapshots are deleted unless they are
needed for a larger granularity. The smallest unit snapshot is used. Then
subsequent snapshots are selectively left to satisfy the Daily, Weekly, or
Monthly specification.
When defining the snapshot preserving pattern, you need to specify the offset of the moment to keep. For example, for daily snapshots, you are asked which hour of the day to use for the snapshot. For weekly snapshots, you are asked which day of the week to keep. If you set an offset for which there is no snapshot, the closest one to that time is taken.
The default offset values correspond to typical usage, based on the fact that the older the information, the less valuable it is. For instance, you can take snapshots every 20 minutes but keep only those snapshots taken at minute 00 of each hour for the last 24 hours, and also keep 7 snapshots representing the last 7 days (taken at midnight), 4 snapshots representing the last 4 weeks (those taken on Mondays), and 12 snapshots representing the last 12 months (taken on the first day of the month).
17. On the Completing the Disk Protection Wizard page, review your selections and
then click Finish.
Your data is now protected. You can check the status of each disk immediately.
You can view information about the synchronization mode, current activity, and
synchronization status.
After protecting a disk, the mirror will appear in Disk Management. For details
about the information displayed in this screen, see the DiskSafe User Guide.
Once you have protected two or more disks or partitions, you can put them into
groups. Groups offer synchronization advantages and ensure data integrity amongst
multiple disks.
For example, if your database uses one disk for its data and a separate disk for its
transaction logs and control files, protecting both disks and putting them into a group
causes snapshots of both disks to be taken at the same time, ensuring the overall
integrity of the database in case you need to restore it.
Likewise, if you are using a dynamic volume that spans multiple physical disks,
protecting all the related disks and putting them in a group ensures that they can be
reliably protected and restored.
Follow the steps below to protect a group of disks:
4. On the Group Mirror Mode page, enter a Group name (up to 64 letters or
numbers). Then select Continuous mode or Periodic mode and click Next.
7. On the Advanced Snapshot Options page, keep all of the default settings.
8. On the Completing the Disk Protection Wizard page, review your selections and
then click Finish.
The group is created. You will be prompted to add members into your newly
created group. To add members into the new group, click Yes.
9. On the Add Member page, select all disks that should be added into the group.
You cannot add disks while the following activities are taking place: initial
synchronization, analysis of data, taking of a snapshot, or restoration.
You can add a disk or partition to a group at any time, as long as the group is in one of the following states:
• Empty
• Waiting for synchronization
• Synchronizing
• Suspended
• To add a disk or partition to a group, expand DiskSafe --> Protected
Storage --> Groups.
• In the right pane, right-click on the group to which you want to add a disk
or partition and click Join.
• From the Protected disks list, select the disks you want to add to the group
and click OK.
Policies configured for a group will apply to all members of that group. For
example, Disk 0 Partition 1 was configured for periodic mode. After it has been
added into a group configured for continuous mode, the mirror mode for the disk
is immediately changed to continuous mode.
To test the group snapshot function, right-click on the group and select Advanced
-->Take Snapshot.
You can monitor the activity of the disks in the group. The Current Activity of all the
disks should be Taking Snapshot and the time created of this snapshot should be
the same for all the disks.
Once you have enabled protection for a disk or partition, you can suspend it at any
time. For example, if several hosts are mirroring continuously to a remote disk, and
the network is experiencing temporary bandwidth problems, you might want to
suspend protection for one or more hosts until full network capacity is restored.
When you suspend protection, data is written only to the local disk, not to the mirror.
As a result, the local disk and its mirror become out of sync. When you later resume
protection, the disks are synchronized automatically.
Note: If the disk or partition is part of a group, you cannot suspend protection for
that individual member. You can only suspend or resume protection for the entire
group.
To suspend protection:
1. Expand DiskSafe --> Protected Storage and then click Disks or Groups.
2. In the right pane, right-click the disk, partition or group for which you want to
suspend protection, and then click Suspend.
The Current Activity column displays Suspended.
To resume protection:
1. Expand DiskSafe --> Protected Storage and then click Disks or Groups.
2. In the right pane, right-click the disk, partition, or group for which you want to
resume protection, and then click Resume.
If the disk, partition, or group uses continuous mode, synchronization occurs
immediately. If it uses periodic mode, the local disk and its mirror are
synchronized at the regularly scheduled time.
DiskSafe for Linux is distributed as an rpm for each distribution and version. DiskSafe is installed in the /usr/local/falconstor/disksafe directory and can only be installed and used by the root user.
Install DiskSafe by using the dsinstall.sh script. This script performs the following functions:
2. Upgrades the iSCSI Initiator with FalconStor iSCSI Initiator. Support files are
provided with the release.
5. Adds a new storage server, if necessary. You will be prompted to enter the IP address and account credentials, and to enable the iSCSI or Fibre Channel protocol if supported. If a protocol is not enabled during installation, a message displays informing you to enable it manually after the DiskSafe installation.
During the protection process, you will specify which local or remote disk to use as a
mirror. When specifying the mirror, keep in mind that a mirror must be an entire disk;
it cannot be a partition. However, when you protect a partition, a corresponding
partition is created on the mirror disk.
When creating a mirrored disk on the storage server, TimeMark is enabled on the
device and a snapshot resource is created that is 20% of the size of the original disk.
The snapshot resource is configured with an automatic expansion policy. If needed,
you can manually expand the snapshot resource from the storage server console.
Other configuration options include specifying whether to write data to the mirror
continuously or periodically, and other options discussed in this section.
Some rules to remember when protecting your data:
• If you protect an entire disk, you cannot subsequently protect an individual partition of that disk. Likewise, if you protect only an individual partition of a disk, you cannot later protect the entire disk. However, you can protect all the other partitions of the disk. (To switch from protecting an entire disk to protecting just a partition or vice versa, you must remove and then re-protect the affected disks or partitions.)
• It is recommended that you do not change the size of the primary disk/
partition once protection is created. If you protect a disk, the size of the
mirror disk is the same size as the primary disk. If you protect a partition, the
mirror size is at least one MB larger than the primary partition. Therefore, if
the size of the primary disk or partition is changed, the mirror disk image
may be corrupted. If this occurs, you will need to remove the protection then
recreate it.
• If the host already exists on the storage server, it must use CHAP
authentication. Otherwise, authentication errors will occur.
A disk can be protected by DiskSafe on a mirror disk that is the same size as the primary disk. Protecting a partition requires a mirror disk that is at least one MB larger than the partition. Both local disks and IPStor/CDP disks can be used as primary and/or mirror disks. However, features such as snapshots are available only when the mirror disk is a CDP disk.
The mirror disk ID is an optional parameter. If not specified, a disk the same size as
the primary disk is automatically assigned using SDM/IMA from an already
configured storage server and will be used as a mirror disk during disk protection.
During partition protection, the automatically assigned mirror disk size is one MB
more than the primary partition size.
If the size of the disk or partition to be protected is more than 10 GB, you will be
prompted to use thin provisioning for mirror disk allocation.
Notes: Thin Provisioning allows you to use your storage space more efficiently by
allocating a minimum amount of space for the virtual resource. Then when usage
thresholds are met, additional storage is allocated as necessary. The maximum
size of a disk with thin provisioning is limited to 16,777,146 MB. The minimum
permissible size of a thin disk is 10 GB. Thin Provisioning is available for CDP, or
IPStor version 6.0 or later.
Syntax:
dscli disk protect primary=<DiskID> [mirror=<DiskID>]
   [-mode:continuous | periodic <daily [-days:<#>] [-time:<H:M>] | hourly [-hours:<H:M>]>]
   [-starttime:<Y-M-D*H:M>]
   [-microscan]
   [-force]
   [-fsscan]
   [-umappath:path]
Options:
-mode
   The default protection mode is continuous. In continuous mode, all write operations on a protected disk are performed on both the primary and mirror disk at the same time. If you select Periodic mode, you can specify the synchronization schedule to update the mirror at regularly scheduled intervals.
-starttime:<Y-M-D*H:M>
   The default start time is right now. The start time parameter can be used to specify the protection starting time. Initial synchronization is done when the start time is reached. A start time earlier than the current time is treated as the current time.
-microscan
   Use this option to analyze each synchronization block on-the-fly during synchronization and transmit only the changed sectors in the block.
-force
   Use this option when the specified mirror disk has partitions.
-fsscan
   Use this option to copy only the sectors used by the file system during initial synchronization. Any unallocated space, or any partition with a file system that is not supported by DiskSafe, will be treated as different data. The currently supported file systems are ext2 and ext3.
-umappath:path
   Use this option to appoint the location for storing the Umap. Note: For a system disk to be protected when the Linux logical volume is root, the umappath should be on a separate path.
Notes:
• If mirror=<DiskID> is not specified, DiskSafe will allocate a new disk. The following are the defaults:
   -- the default protection mode is continuous
   -- the default path for the UMAP is /usr/local/falconstor/disksafe
   -- the default days value is 1 for daily
   -- the default hours value is 1 for hourly
   -- the default start time is right now
• Protecting and unprotecting system directories other than root (/), i.e. disks or partitions mounted at /usr, /var, /etc, /tmp, etc., requires a reboot to enable protection. In addition, these disks can join a group, but group stop and group rollback are not allowed. They can only be restored using the DiskSafe Recovery CD. If you would like to manually mount a protected disk/partition, the device name must include the DiskSafe layer. For example: /dev/disksafe/sdb or /dev/disksafe/sdc1.
Scheduled disk protection is used when you do not want your disk write operations to slow down due to continuous mirroring. Select Periodic mode if you want daily or hourly synchronization points at predefined times. Once periodic protection is enabled, I/O operations are only performed on the primary disk, and all blocks updated between one synchronization and the next are flagged. When a synchronization point is reached, all flagged blocks are synchronized (copied from the primary to the mirror disk).
A schedule can be specified for periodic protection while protecting a disk or by
changing the mode of a protected disk. Refer to the DiskSafe User Guide for
additional information.
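For instance, a daily periodic protection with an explicit start time might be entered as follows; the disk IDs, time, and date are illustrative values plugged into the syntax shown earlier:
dscli disk protect primary=sdb mirror=sdc -mode:periodic daily -days:1 -time:2:00 -starttime:2010-9-15*2:00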
In Linux, root can either be an LVM logical volume or a native disk partition. Root (/)
or any busy system partition protection such as /etc, /usr, /var, etc. requires the
system to reboot to enable or disable protection. Hence after a successful
protection, synchronization does not start until the system is rebooted. The same is
applicable for unprotection, where DiskSafe devices are not removed until the
system is rebooted following an unprotect operation.
Protection must be set at the LVM PV (physical volume) level. All PVs in a logical
volume or a volume group can be joined together in a group to enable consistent
data on all PVs for snapshots.
Note: Protection of root logical volume is not supported at this time and cannot be
restored using the Recovery CD.
Protecting a system disk or partition when root is a native disk or partition requires a
reboot after enabling or disabling protection. System disk or partition protection
when root is an LVM logical volume requires an update of the LVM configuration so
that the volume group and logical volume uses the DiskSafe disk instead of native
disk. The steps required to protect or unprotect a system disk or partitions when root
is a logical volume are described in this section.
The following limitations are applicable for root or any system partition protection
that is in continuous use:
• Restore operations are not allowed when disks are online.
• Stop protection is not allowed. Use suspend and unprotect operations only.
• Protection cannot join a group until the system reboots and protection is
fully enabled.
• If protection is part of a group, then group stop and group rollback are not
allowed.
The DSRecPE Recovery CD must be used to restore from a mirror or a snapshot.
Recovery using the Recovery CD requires the following conditions. For additional
information on using the Recovery CD, refer to Chapter 6.
• iSCSI protocol must be enabled on the Linux host.
• The recovery password must be set using the following command.
#dscli server recoverypwd server=<#> passwd=<#>
Notes:
• The DiskSafe uninstall operation is not allowed while a root or a busy
system disk is protected. Any such disks must be unprotected and
the machine rebooted before proceeding with uninstalling DiskSafe.
• The LVM configuration filter can be specified in different ways as
required. Use the information below as a guideline.
Protect LVM Root PV(s)
To protect a root logical volume, follow the steps below:
1. Prepare the umap disk.
Root PV protection requires the umap to be on a disk other than the root logical
volume. The umap path should be specified when protecting the PV.
Alternatively, a separate disk can be used for storing the umap; sample steps
are given below.
Example:
• #mkfs /dev/sdb
• #mount /dev/sdb /mnt/umappath
• Add the mount point entry in /etc/fstab to make sure it is mounted
automatically:
/dev/sdb /mnt/umappath ext2 defaults 0 0
2. Protect the root PV with a specified path for storing the umap.
Example: dscli disk protect primary=sda mirror=sdd -umappath:/mnt/umappath
3. Reboot.
4. Check to make sure the DiskSafe device is in use as a PV instead of the native
device.
Example:
#pvdisplay
Unprotect LVM Root PV(s)
To unprotect the physical volume of a root logical volume, follow the steps below:
1. Unprotect the physical volume of the root logical volume LVM (system disk).
2. Reboot.
Pre-configuration
To use this out-of-band CDP solution, you will have to use internal storage, allocated
from a non-IPStor resource, as your primary disk. For your mirror resource, you will
have to allocate a suitably-sized resource from an IPStor CDP appliance using the
iSCSI or FC protocol.
The example used in this section assumes the following:
• The source disk is /dev/sdb and is 2000 MB in size
• The IPStor disk is /dev/sdc and is 2000 MB in size
Note: The source disk should not contain any volume that shares a physical disk
with another volume; otherwise, a restore affects all volumes on the physical disk.
You will need to log in as root or equivalent to perform the following steps.
2. Select Action --> Create --> Segment to open the DOS Segment Manager.
3. From the list, select the DOS Segment Manager, and then click Next.
2. Select Action --> Create --> Region to open the Create Storage Region dialog
box.
3. Specify "MD RAID 1 Region Manager" as the type of software RAID you want to
create.
4. From the list of storage objects, select the ones to use for the RAID device.
IMPORTANT: The order of the objects in the RAID is implied by their order in the
list.
5. Click Create to create the RAID device under the /dev/evms/md directory.
The device has a name such as md0 and EVMS mount location
/dev/evms/md/md0.
3. Select the RAID-1 mirror device that you created above, such as /dev/evms/md/
md0.
5. Click Done.
2. Select Action --> File System --> Make to view a list of file system modules.
3. Select the type of file system you want to create, such as ReiserFS or Ext2/3FS.
4. Select the RAID-1 mirror device that you created above, such as /dev/evms/
md/md0.
5. Specify a name to use as the Volume Label and then click Make.
The name must not contain any spaces or it will fail to mount later.
3. Specify the location where you want to mount the device, such as /home.
4. Click Mount.
Recover from an out-of-sync mirror
Sometimes a disk can have a temporary problem that causes the disk to be marked
faulty and the RAID region to become degraded. For instance, a loose drive cable
can cause the MD kernel driver to think the disk has disappeared. When the cable is
plugged back in, the disk should be available for normal use. However, the MD
kernel driver and the EVMS MD plug-in might continue to indicate that the disk is a
faulty object because the disk might have missed some writes to the RAID region
and will therefore be out of sync with the rest of the disks in the region.
In order to correct this situation, the faulty object needs to be removed from the
RAID region and added back to the RAID region as a spare. When the changes are
saved, the MD kernel driver will activate the spare and sync the data and parity.
When the sync is complete, the RAID region will be operating in its original, normal
configuration.
This procedure can be accomplished while the RAID region is active and in use.
1. Remove the out-of-sync mirror from the RAID, which should have been marked
as a faulty disk.
• In EVMS, select Actions --> Remove --> Faulty Object from a Region.
• Select the RAID device you want to manage from the list of regions and
click Next.
• Select the failed disk.
• Click Remove.
Roll a disk back to a previous TimeMark
1. Manually mark the local disk of the mirror as faulty.
In EVMS, use the markfaulty plug-in function for RAID-1. This command can be
used while the RAID region is active and in use.
3. Unmount all file systems and volumes from the mirror region.
4. Roll back the CDP appliance from the FalconStor Management Console using
the CDP journal or snapshots.
You may want to create a TimeView first to identify the appropriate time. A
TimeView allows you to verify the data before converting the primary disk.
Information about how to mount a TimeView and roll back from a snapshot or
from the CDP journal can be found in the CDP Reference Guide.
4. Use the devfsadm command to perform a device scan on Solaris and then use
the format command to verify that the client claimed the device.
6. Create two stripes for the two sub-mirrors as d21 and d22:
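The metainit commands for this step are not reproduced here; a minimal sketch, assuming the primary disk slice is c1t0d0s0 and the IPStor-provisioned slice is c2t0d0s0 (both device names are assumptions):
#metainit d21 1 1 c1t0d0s0
#metainit d22 1 1 c2t0d0s0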
7. Specify the primary disk that is to be mirrored by creating a mirror device (d20)
using one of the sub-mirrors (d21):
#metainit d20 -m d21
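In standard SVM usage, the second sub-mirror (the IPStor device) is then attached to the mirror device; for example, assuming the device names above:
#metattach d20 d22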
When you want to perform a roll back, the primary disk and the mirror disk (the
IPStor virtual device), will be out of sync and the mirror will need to be broken. In
Solaris SVM this can be achieved by placing the primary and mirror device into a
logging mode.
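As a sketch of the break step in SVM, assuming the device names used above, one way is to detach the sub-mirror that maps to the IPStor device (whether this matches your exact recovery workflow is an assumption):
#metadetach d20 d22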
1. Disable the remote mirror software and discard the remote mirror:
rmshost1# sndradm -dn -f /etc/opt/SUNWrdc/rdc.cf
2. Edit the rdc.cf file to swap the primary disk information and the secondary
disk information. Unmount the remote mirror volumes:
rmshost1# umount mount-point
3. When the data is de-staged, mount the secondary volume in read-write mode so
your application can write to it.
5. Fix the "failure" at the primary volume by disabling logging mode, using the
resynchronization command.
7. Roll back the secondary volume to its original pre-disaster state to match the
primary volume by using the sndradm -m (copy) or sndradm -u (update)
commands.
Alternatively, keep the changes from the updated secondary volume and
resynchronize so that both volumes match by using the sndradm -m r (reverse
copy) or sndradm -u r (reverse update) commands.
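Hypothetical invocations, reusing the rdc.cf configuration file from step 1 (whether your volume set is defined in that file is an assumption):
rmshost1# sndradm -m r -f /etc/opt/SUNWrdc/rdc.cf
rmshost1# sndradm -u r -f /etc/opt/SUNWrdc/rdc.cf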
Supported kernels
The patches subdirectory (from the above link) also includes up-to-date device-
mapper kernel patches for 2.4.26-rc1 and old patches for 2.4.20, 2.4.21 and 2.4.22
onwards. The 2.6 kernels already contain the device-mapper core, but you need to
apply development patches if you want additional functionality.
Initialize a disk
If your disks have not already been created and formatted, you will need to initialize
them. Before you can start using LVM2, you will need to allocate the primary storage
device from your internal hard drive or storage provisioned from another storage
system (not from an IPStor appliance).
3. After initializing the primary disk, you have to initialize the mirror disk that is
being provisioned by IPStor.
Use iSCSI or FC to assign the resource to the client you will be protecting.
Follow the fdisk instructions provided above to format the mirror disk. Make a
note of the path. As you will see later, the order of usage for these drive paths is
very important.
2. Create a new physical volume for the mirror disk from IPStor.
3. Create a logical Volume Group with both the primary physical volume and mirror
physical volume in it.
4. Create the relationship between the primary disk and the mirror disk.
If you need to recover from the mirror disk, you will need to remove the mirror
relationship and then recreate the relationship with the resources reversed.
The table below gives a description of each LVM2 tool:
Logical Volume Manager 2 Tool Descriptions
Use the pvcreate command to create a new physical volume from the partition
created from the disk:
pvcreate <disk>
Replace <disk> with the device name of the hard drive partition.
You will need to do this for both the primary and the mirror disk.
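For example, using the partition names from the vgcreate example below (both names are assumptions for illustration):
pvcreate /dev/sda1
pvcreate /dev/sdb1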
Create a logical Volume Group
A Volume Group can be created from one or more physical volumes. To scan the
system for all physical volumes, use the pvscan command as root.
To create a logical Volume Group, execute the vgcreate command as root.
vgcreate <vgname> <pvlist>
Replace <vgname> with a unique name for the Volume Group.
Use <pvlist> for the paths of the physical devices that you want in the Volume
Group. For example:
vgcreate mirGroup /dev/sda1 /dev/sdb1
This example creates a mirror group mirGroup with primary disk partition
/dev/sda1 and mirror disk partition /dev/sdb1.
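The "Create the mirror" step is not shown with a command; a minimal sketch using lvcreate, assuming the Volume Group mirGroup from the example above, a 1900 MB mirrored volume, and the logical volume name mirrorlv used in the lvconvert example below (size and names are assumptions):
lvcreate -L 1900M -m1 --corelog -n mirrorlv mirGroup /dev/sda1 /dev/sdb1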
Recovery
This section explains how to roll back or recover data to a previous point in time
using CDP with Snapshot and journal. If you want to revert the primary disk to a
previous point in time you will have to do the following:
4. Roll back the disk from the FalconStor Management Console using the CDP
journal or snapshots. Information about how to roll back from a snapshot or from
the CDP journal can be found in the CDP Reference Guide.
Convert the mirror group to linear
Use the following command to convert the existing mirror volume to a linear volume:
lvconvert -m1 --corelog <mirror logical volume>
For example:
lvconvert -m1 --corelog /dev/vg/mirrorlv
This example breaks the mirror group /dev/vg/mirrorlv, converting it from a
mirrored logical volume to a linear logical volume.
Remove the mirror group
1. Run the command to remove the logical volume and then remove the Volume
Group.
The command to remove the logical volume is:
lvremove <name of mirror logical volume>
For example:
lvremove mirrorLV
This example removes the mirror logical volume called mirrorLV.
Switch resources to create a new mirror relationship
Repeat the procedures to 'Create a logical Volume Group' and 'Create the mirror' as
mentioned above, but be sure to switch the resources.
You will need to confirm that what was originally your mirror resource (the resource
from the FalconStor Management Console) is now your primary disk, and that your
original primary disk is now your mirror resource.
Use the lvs command to confirm that 100% of the copy is complete and then return
to the original state of mirroring.
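A hypothetical lvs invocation that shows the synchronization percentage (the column names assume a reasonably recent LVM2 and the Volume Group name from the example above):
lvs -a -o name,copy_percent,devices mirGroup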
Download and configure the protection and recovery scripts for HP-UX LVM
FalconStor provides HP-UX scripts to simplify and automate the protection and
recovery process of logical volumes on HP-UX platforms.
ssh_setup script
The ssh_setup script is used to create an ssh public/private key between the HP-UX
host and the CDP server.
Protection script
The protect_vg script is used to establish the LVM mirror relationship with the
FalconStor CDP Appliance. The protect_vg script will:
• Create an HP-UX SAN client to represent this HP-UX host machine on the
CDP appliance, if one does not already exist.
• Create mirror LUNs on the CDP appliance, using disks in the "Protection"
storage group.
• Assign those mirror LUNs to the HP-UX SAN client.
• Establish a logical volume mirror between the local primary logical volume
and the CDP-provisioned disk acting as the mirror.
As a result, each and every volume found within the specified Volume Group will be
mirrored to a CDP-provisioned LUN. If necessary, you will need to use the
FalconStor Management Console to associate sets of the mirror LUNs to form a
snapshot group. The console is also used to enable TimeMark or CDP journaling for
protection against corruption and remote Replication for disaster recovery.
Recovery scripts
The recover_vg, mount_vg, and umount_vg scripts are used to recover data
given different scenarios of primary disk failure (physical failure or logical
corruption).
Follow the instructions below to download and configure the scripts:
Download and configure the protection and recovery scripts for HP-UX VxVM
FalconStor provides HP-UX scripts to simplify and automate the protection and
recovery process of Veritas Volume Manager on HP-UX platforms.
ssh_setup script
The ssh_setup script is used to create an ssh public/private key between the HP-UX
host and the storage server.
3. Configure ssh public/private key authentication between HP-UX and the storage
server.
ssh_setup <storage server IP address>
• Enter the file in which to save the key. Use the default and press Enter.
• Enter the passphrase, or press Enter for an empty passphrase.
• Enter the same passphrase again.
• Are you sure you want to continue? Type "yes" and press Enter to continue.
• Enter the password for the storage server.
• Enter the password for the storage server again to append the authorized key.
Protection script
The protect_dg script is used to establish the VxVM mirror relationship with the
FalconStor CDP Appliance. The protect_dg script will:
• Create an HP-UX SAN client to represent this HP-UX host machine on the
CDP appliance, if one does not already exist.
• Create mirror LUNs on the CDP appliance, using disks in the "Protection"
storage group.
• Assign those mirror LUNs to the HP-UX SAN client.
• Establish VxVM mirrors between the local primary Veritas volume and the
CDP-provisioned disk acting as the mirror.
As a result, each and every volume found within the specified Disk Group will be
mirrored to a CDP-provisioned LUN. If necessary, you will need to use the
FalconStor Management Console to associate sets of the mirror LUNs to form a
snapshot group. The console is also used to enable TimeMark or CDP journaling for
protection against corruption and remote Replication for disaster recovery.
Recovery scripts
The recover_dg, mount_dg, and umount_dg scripts are used to recover data in
the event of primary disk failure (physical failure or logical corruption). Follow the
instructions below to download and configure the scripts:
Establish the HP-UX Logical Volume Manager (LVM) mirror
The following procedure describes how to use the protect_vg script to establish
mirror relationships between the HP-UX Volume Group's Logical Volumes and the
mirror LUNs from the CDP appliance.
As an example, let’s assume that we have a Volume Group named vg01 that we
want to protect.
1. Display Volume Group information for vg01 to confirm that no mirrors exist.
• Display Volume Group information and list available logical volume by
running vgdisplay -v vg01.
• Display logical volume information by running lvdisplay -v
/dev/vg01/<logical volume name> and confirm the "Mirror copies" section
is "0".
3. Confirm mirrors are created and in sync for all logical volume on vg01.
lvdisplay -v /dev/vg01/<logical volume name> | more
• The section "Mirror copies" should now be "1", which means there is one
mirror for that logical volume.
• The section "LV Status" should also be "available/syncd", which means
mirrors are synchronized; otherwise it will display "available/stale".
You are now ready to enable TimeMark, CDP Journaling, and/or Replication for
virtual disk etlhp2-vg01-Protection on Volume Group vg01 mirror.
Establish the HP-UX VxVM mirror
Let’s assume that we have a Disk Group named dg01 that we want to protect.
1. Display Disk Group information for dg01 to confirm that no CDP VxVM mirrors
already exist.
Execute the command vxprint -g dg01 -p | grep ipstor_pl to make
sure that no IPStor plexes exist on disk group dg01.
3. Confirm mirrors are created and in sync for all volumes in disk group dg01.
vxprint -g dg01 -p|grep ipstor_pl
Each IPStor plex should be "ENABLED" and "ACTIVE".
You are now ready to create TimeMark, CDP Journaling, and/or Replication for
virtual disk etlhp4-dg01-Protection on Disk Group dg01 mirror.
Refer to your CDP Reference Guide for information about configuring TimeMarks,
CDP Journaling, and Replication using the FalconStor Management Console. The
administration guide also provides details about how to create a snapshot group and
how to enable the above protection services at the group level. Generally, for a
database or E-mail system, all of the data files and transaction logs for the same
application should be grouped into a snapshot group in order to achieve transaction-
level integrity for the application.
Download and configure the protection and recovery scripts for AIX LVM
FalconStor provides AIX scripts to simplify and automate the protection and
recovery process of logical volumes on AIX platforms.
ssh_setup script
The ssh_setup script is used to create an ssh public/private key between the AIX
host and the CDP server.
3. Configure ssh public/private key authentication between AIX and the CDP
server.
ssh_setup <IP address>
• Enter the file in which to save the key. Use the default and press Enter.
• Enter the passphrase, or press Enter to leave the passphrase field empty.
• Enter the same passphrase again.
• Are you sure you want to continue? Type "yes" and press Enter to continue.
• Enter the password for the CDP Server.
• Enter the password for the CDP Server again to append the authorized key.
ssh_setup_ha script
The ssh_setup_ha script is used to create an ssh public/private key for each
HACMP cluster node. This process is only required if the volume groups to be
protected are configured as part of an HACMP cluster.
Make sure to perform this process from each node to each other node (node1 ->
node2 and node2 -> node1). As of version 1.4, two-node clusters are supported for
protection and recovery.
Protection script
The protect_vg script is used to establish the mirror relationship with the FalconStor
CDP Appliance; use protect_vg_ha if the system is configured with an HACMP
cluster. The protect_vg and protect_vg_ha scripts will:
• Create an AIX SAN client to represent this AIX host machine on the CDP
appliance, if one does not already exist. If the system is configured with an
HACMP cluster, both the local and remote HACMP nodes will be created to
represent them on the CDP appliance.
• Create mirror LUNs on the CDP appliance, using disks in the "Protection"
storage group.
• Assign the mirror LUNs to the AIX SAN client and to both the local and
remote HACMP nodes (if configured).
• Establish a logical volume mirror between the local primary logical volume
and the CDP provisioned disk acting as the mirror.
As a result, each and every Logical Volume found within the specified Volume Group
will be mirrored to a CDP-provisioned LUN. If necessary, you can use the
FalconStor Management Console to associate sets of the mirror LUNs to form a
snapshot group. The console is also used to enable TimeMark or CDP journaling for
protection against corruption and remote Replication for disaster recovery.
Recovery scripts
The recover_vg, mount_vg, and umount_vg scripts are used to recover data given
different scenarios of primary disk failure (physical failure or logical corruption). Use
the recover_vg_ha, mount_vg_ha, and umount_vg_ha scripts if an HACMP cluster
is configured on the system.
Establish the AIX Logical Volume Manager (LVM) mirror on a volume group
As an example, let’s assume that we have a Volume Group named vg01 (with a total
size of 10,240 MB and a used logical volume size of 500 MB) that we want to protect.
1. Display the Volume Group information for vg01 to confirm that the number of
mirrors has not reached the maximum of three.
• Display Volume Group information and list the available logical volumes by
running lsvg -l vg01.
• Display logical volume information by running lslv <logical volume
name> and confirm the "COPIES" section has not reached the maximum of
three.
3. Confirm mirrors are created and in sync for all logical volume on vg01.
lslv <logical volume name>
• The section "Copies" should be incremented by one, which means there is
an additional physical volume that has been added to the logical volume.
Let's assume we have a Logical Volume named fslv05 with a size of 300 MB but with
a total Volume Group size of 10,240 MB that we would like to protect.
1. Display Logical Volume information for fslv05 to confirm that the number of
mirrors has not reached the maximum number of "3".
Display logical volume information by running:
lslv <logical volume name> and confirm the "COPIES" section
has not reached the maximum number of "3".
This will create a CDP Virtual Disk with a total size of 10,240 MB. However, since it
uses thin provisioning, only 300 MB of space will be used.
3. Confirm mirrors are created and in sync for all logical volume on vg00.
lslv <logical volume name>
• The section "COPIES" should be incremented by one, which means there's
an additional physical volume that has been added to the logical volume.
• The section "LV STATE" should also be "closed/syncd" or "opened/syncd",
which means mirrors are synchronized; otherwise it will display "closed/
stale" or "opened/stale".
You are now ready to enable CDP Journaling, and/or Replication for virtual disk
etlaix3-vg01-Protection on Volume Group vg01 mirror.
Let's assume we have a shared HACMP Volume Group named sharevg_01 that we
would like to protect and that it is active on the remote HACMP node vaix3.
Note: Make sure that HACMP node name resolves properly to an IP address which
can be added through /etc/hosts.
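For example, a hypothetical /etc/hosts entry for the remote node (the IP address is an assumption):
192.168.10.13   vaix3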
1. Display Volume Group information for sharevg_01 to confirm that no mirrors
already exist.
• Display Volume Group information and list the available logical volumes by
running lsvg -l sharevg_01.
• Display logical volume information by running lslv <logical volume
name> and confirm the "COPIES" section is "1".
3. Confirm mirrors are created and in sync for all logical volume for sharevg_01 on
node vaix3.
lslv <logical volume name>
• The section "COPIES" should now be "2", which means there are two
physical volumes that make up that logical volume.
• The section "LV STATE" should also be "closed/syncd" or "opened/syncd",
which means mirrors are synchronized; otherwise it will display "closed/
stale" or "opened/stale".
You are now ready to enable TimeMark, CDP Journaling, and/or Replication for
virtual disk Terayon-sharevg_01-Protection on the Volume Group sharevg_01 mirror.
Refer to the CDP Reference Guide for information about configuring TimeMarks,
CDP Journaling, and Replication using the FalconStor Management Console. The
Administration guide also provides details on how to create a snapshot group and
how to enable the above protection services at the group level. Generally, for a
database or E-mail system, all of the data files and transaction logs for the same
application should be grouped into a snapshot group in order to achieve transaction-
level integrity for the application.
The FalconStor Management Console is the administration tool for the storage
network. It is a Java application that can be used on a variety of platforms
and allows administrators to create, configure, manage, and monitor the storage
resources and services on the storage server network as well as run/view reports,
enter licensing information, and add/delete administrators.
The FalconStor Management Console software can be installed on each machine
connected to a storage server. In addition to installing from media, the console is
also available for download from your storage server appliance.
cd /usr/local/ipstorconsole
./ipstorconsole
Note:
• If your screen resolution is 640 x 480, the splash screen may be cut off
while the console loads.
• The console might not launch on certain systems with display settings
configured to use 16 colors.
• The console needs to be run from a directory with “write” access.
Otherwise, the host name information and message log file retrieved
from the storage server will not be able to be saved to the local directory.
As a result, the console will display event messages as numbers and
console options will not be able to be saved.
Note: The FalconStor Management Console remembers the servers to which the
console has successfully connected. If you close and restart the console, the
servers will still be displayed in the tree but you will not be connected to them.
You will need to restart the server if you change the hostname.
Note: Do not change the hostname if you are using block devices. If you do, all
block devices claimed by CDP will be marked offline and seen as foreign devices.
Search for objects in the tree
The console has a search feature that helps you find any physical device, virtual
device, or client on any storage server. To search:
1. Highlight a storage server in the tree.
3. Select the type of object to search for and the search criteria.
Once you select an object type, a list of existing objects appears. If you highlight
one, you will be taken directly to that object in the tree.
Alternatively, you can type the full name, ID, ACSL (adapter, channel, SCSI,
LUN), or GUID (Global Unique Identifier). Once you click the Search button, you
will be taken directly to that object in the tree.
Storage server status and configuration
The console displays the configuration and status of the storage server.
Configuration information includes the version of the CDP software and base
operating system, the type and number of processors, amount of physical and
swappable memory, supported protocols, and network adapter information.
The Event Log tab displays system events and errors.
Alerts
The console displays all critical alerts upon login to the server. Select the Display
only the new alerts next time option if you only want to see new critical alerts the
next time you log in. Selecting this option indicates acknowledgement of the alerts.
CDP can automatically discover all storage servers on your storage subnet; servers
running CDP will be recognized as storage servers. To discover the servers:
Continuously save configuration
You can create a configuration repository that maintains a continuously updated
version of your storage system configuration. The status of the configuration
repository is displayed on the console under the General tab. In the case of a failure
of the configuration repository, the console displays the time of the failure along with
the last successful update. This feature works seamlessly with the FalconStor
Failover option to provide business continuity in the event that a storage server fails.
For additional redundancy, the configuration repository can be mirrored to another
disk.
To create a configuration repository:
Auto save configuration
You can set your system to automatically replicate your system configuration to an
FTP server on a regular basis. Auto Save takes a point-in-time snapshot of the
storage server configuration prior to replication. To use Auto Save:
2. Select the Auto Save Config tab and enter information for automatically saving
your storage server system configuration.
For detailed information about this dialog, refer to the ’Auto Save Config’ section.
Manually save configuration as needed
You can manually save your system configuration any time you make a change in
the console, including any time you add/change/delete a client or resource, assign a
client, or make any changes to your failover/mirroring/replication configuration. If
you add a server to a client from the Client Monitor (or via command line for Unix
clients), you should also re-save your configuration.
To do this:
Restore configuration
You can restore a storage server configuration from a file that was created using the
Save Configuration option. This is for disaster recovery purposes and should not be
used in day-to-day operation of the server. Changes made since the configuration
was last saved will not be included in this restored configuration.
Warning: Restoring a configuration will overwrite existing virtual device and client
configurations for that server. Storage server partition information will not be
restored. This feature should only be used if your configuration is lost or corrupted,
as lost virtual devices can result in lost data for the clients using those virtual
devices.
To restore the configuration:
1. Import the disk(s) that were recovered from the damaged storage server to your
new storage server.
Refer to ‘Import a disk’ for more information.
5. If any physical or virtual devices were changed after the configuration was
saved, you must rescan (right-click on Physical Resources and select Rescan)
to update the newly restored configuration.
Licensing
The License Summary window is informational only and displays a list of the
options supported for this server. You can enter keycodes for your purchased
options on the Keycode Detail window.
3. Press the Add button on the Keycodes Detail window to enter each keycode.
Note: If multiple administrators are logged into a storage server at the same time,
license changes made from one console will take effect in the other consoles only
when the administrator disconnects and then reconnects to the server.
4. If your licenses have not been registered yet, click the Register button on the
Keycodes Detail window.
You can register online if you have an Internet connection.
To register offline, you must save the registration information to a file on your
hard drive and then E-mail it to FalconStor’s registration server. When you
receive a reply, save the attachment to your hard drive and send it to the
registration server to complete the registration.
The tabs you see will depend upon your storage server configuration.
2. If you have multiple NICs (network interface cards) in your server, enter the IP
addresses using the Server IP Addresses tab.
If the first IP address stops responding, the CDP clients will attempt to
communicate with the server using the other IP addresses you have entered in
the order they are listed.
Note:
• In order for the clients to successfully use an alternate IP address, your
subnet must be set properly so that the subnet itself can redirect traffic to
the proper alternate adapter.
• You cannot assign two or more NICs within the same subnet.
• The client becomes aware of the multiple IP addresses when it initially
connects to the server. Therefore, if you add additional IP addresses in
the console while the client is running, you must rescan devices
(Windows clients) or restart the client (Linux/Unix clients) to make the
client aware of these IP addresses.
3. On the Activity Database Maintenance tab, indicate how often the SAN data
should be purged.
The Activity Log is a database that tracks all system activity, including all data
read, data written, number of read commands, write commands, number of
errors etc. This information is used to generate SAN information for the CDP
reports.
5. On the iSCSI tab, set the iSCSI portal that your system should use as default
when creating an iSCSI target.
If you have multiple NICs, when you create an iSCSI target, this IP address will
be selected by default for you.
The settings on this tab affect system performance. The defaults should be
optimal for most configurations. You should only need to change the settings for
special situations, such as if your mirror is remotely located.
Mirror Synchronization: Use [n] outstanding commands of [n] KB - The number
of commands being processed at one time and the I/O size. This must be a
multiple of the sector size.
Synchronize Out-of-Sync Mirrors - Determine how often the system should
check and attempt to resynchronize active out-of-sync mirrors, how often it
should retry synchronization if it fails to complete, and whether or not to include
replica mirrors. These settings will only be used for active mirrors. If a mirror is
suspended because the lag time exceeds the acceptable limit, that
resynchronization policy will apply instead.
Replication: TCP or RUDP - TCP is the default replication protocol for all new
installations of IPStor 6.0 or higher, unless the connecting server does not
support TCP. If you want to update the protocol for existing replication jobs or for
an entire server, click the Update Protocol button.
Timeout replication after [n] seconds - Timeout after inactivity. This must be the
same on both the primary and target replication servers.
Throttle - The maximum amount of bandwidth that will be used for replication.
This is a global server parameter and affects all resources using either remote or
local replication. Throttle does not affect manual replication scans; it only affects
actual replication. It also does not affect continuous replication, which uses all
available bandwidth. Leaving the Throttle field set to 0 (zero) means that the
maximum available bandwidth will be used. Besides 0, valid input is 10-
1,000,000 KB/s (1G).
Enable Microscan - Microscan analyzes each replication block on-the-fly during
replication and transmits only the changed sections on the block. This is
beneficial if the network transport speed is slow and the client makes small
random updates to the disk. This global Microscan option overrides the
Microscan setting for each individual virtual device.
7. Select the Auto Save Config tab and enter information for automatically saving
your storage server system configuration.
You can set your system to automatically replicate your system configuration to
an FTP server on a regular basis. Auto Save takes a point-in-time snapshot of
the storage server configuration prior to replication.
The target server you specify in the Ftp Server Name field must have an FTP
server installed and enabled.
The Target Directory is the directory on the FTP server where the files will be
stored. The directory name you enter here (such as ipstorconfig) is a directory
on the FTP server (for example ftp\ipstorconfig). You should not enter an
absolute path like c:\ipstorconfig.
The Username is the user that the system will log in as. You must create this
user on the FTP site. This user must have read/write access to the directory
named here.
In the Interval field, determine how often to replicate the configuration.
Depending upon how frequently you make configuration changes to CDP, set
the interval accordingly. You can always save manually in between if needed. To
do this, highlight your storage server in the tree, select File menu --> Save
Configuration.
In the Number of Copies field, enter the maximum copies to keep. The oldest
copy will be deleted as each new copy is added.
8. On the Location tab, you can enter a specific physical location of the machine.
You can also select an image (smaller than 500 KB) to identify the server
location. Once the location information is saved, the new tab displays in the
FalconStor Management Console for that server.
Manage accounts
Only the root user can manage users and groups or reset passwords. You will need
to add an account for each person who will have administrative rights in CDP. You
will also need to add a user account for clients that will be accessing storage
resources from a host-based application (such as FalconStor DiskSafe or FileSafe).
To make account management easier, users can be grouped together and handled
simultaneously.
To manage users and groups:
All existing users and administrators are listed on the Users tab and all
existing groups are listed on the Groups tab.
3. Enter a password for this user and then re-enter it in the Confirm Password field.
For iSCSI clients and host-based applications, the password must be between
12 and 16 characters. The password is case sensitive.
Set a quota
You can set a quota for a user on the Users tab and you can set a quota for a group
on the Groups tab.
The quota limits how much space is allocated to each user. If a user is in a group,
the group quota will override the user quota.
Reset a password
To change a password, select Reset Password. You will need to enter a new
password and then re-type the password to confirm.
You cannot change the root user’s password from this dialog. Use the Change
Password option below.
Change password
This option lets you change the root user’s CDP password if you are currently
connected to a server.
2. Enter your old password, the new one, and then re-enter it to confirm.
You can check if the console can successfully connect to the storage server. To
check connectivity, right-click on a server and select Connectivity Test.
By running this test, you can determine if your network connectivity is good. If it is
not, the test may fail at some point. You should then check with your network
administrator to find out what the problem is.
As a root user, you can add, delete or reset the CHAP secret of an iSCSI User or a
mutual CHAP user. Other users (i.e. IPStor administrator or IPStor user) can also
change the CHAP secret of an iSCSI user if they know the original CHAP secret.
To add an iSCSI user or Mutual CHAP User from an iSCSI server:
1. Right-click on the server and select iSCSI Users from the menu.
2. Select Users.
From this screen, you can select an existing user from the list to delete the user
or reset the Chap secret.
The iSCSI Mutual CHAP User Management screen displays, allowing you to delete
users or reset the Mutual CHAP secret.
You can apply patches to your storage server through the console.
1. Download the patch onto the computer where the console is installed.
Rollback patch To remove (uninstall) a patch and restore the original files:
System maintenance
Network configuration
If you need to change storage server IP addresses, you must make these changes
using Network Configuration. Using YaST or other third-party utilities will not update
the information correctly.
Network Time Protocol - Allows you to keep the date and time of your storage
server in sync with Internet NTP servers. Click Config NTP to enter the IP
addresses of up to five Internet NTP servers.
If you select Static, you must add addresses and net masks.
MTU - Set the maximum transfer unit of each IP packet. If your card supports it,
set this value to 9000 for jumbo frames.
Note: If the MTU is changed from 9000 to 1500, a performance drop will occur. If
you then change the MTU back to 9000, the performance will not increase until
the server is restarted.
Set hostname Right-click on a server and select System Maintenance --> Set Hostname to change
your hostname. You must restart the server if you change the hostname.
Note: Do not change the hostname if you are using block devices. If you do, all
block devices claimed by CDP will be marked offline and seen as foreign devices.
Restart IPStor Right-click on a server and select System Maintenance --> Restart IPStor to restart
the Server processes.
Restart network Right-click on a server and select System Maintenance --> Restart Network to
restart your local network configuration.
Reboot Right-click on a server and select System Maintenance --> Reboot to reboot your
server.
Halt Right-click on a server and select System Maintenance --> Halt to turn off the server
without restarting it.
IPMI Intelligent Platform Management Interface (IPMI) is a hardware level interface that
monitors various hardware functions on a server.
If CDP detects IPMI when the server boots up, you will see several IPMI options on
the System Maintenance --> IPMI sub-menu: Monitor and Filter.
Monitor - Displays the hardware information that is presented to CDP. Information is
updated every five minutes but you can click the Refresh button to update more
frequently.
You will see a red warning icon in the first column if there is a problem with a
component.
In addition, you will see a red exclamation mark on the server. An Alert tab will
appear with details about the error.
Filter - You can filter out components you do not want to monitor. This may be useful
for hardware you do not care about or erroneous errors, such as when you do not
have the hardware that is being monitored. You must enter the Name of the
component being monitored exactly as it appears in the hardware monitor above.
Physical Resources
Physical resources are the actual devices attached to this storage server. The SCSI
adapters tab displays the adapters attached to this server and the SCSI Devices tab
displays the SCSI devices attached to this server. These devices can include hard
disks, tape libraries, and RAID cabinets. For each device, the tab displays the SCSI
address (comprised of adapter number, channel number, SCSI ID, LUN) of the
device, along with the disk size (used and available). If you are using FalconStor’s
Multipathing, you will see entries for the alternate paths as well.
The Storage Pools tab displays a list of storage pools that have been defined,
including the total size and number of devices in each storage pool.
The Persistent Binding tab displays the binding of each storage port to its unique
SCSI ID.
When you highlight a physical device, the Category field in the right-hand pane
describes how the device is being used. Possible values are:
• Reserved for virtual device - A hard disk that has not yet been assigned to a
SAN Resource or Snapshot area.
• Used by virtual device(s) - A hard disk that is being used by one or more
SAN Resources or Snapshot areas.
• Reserved for direct device - A SCSI device, such as a hard disk, tape drive
or library, that has not yet been assigned as a SAN Resource.
• Used in direct device - A directly mapped SCSI device, such as a hard disk,
tape drive or library, that is being used as a direct device SAN Resource.
• Reserved for service enabled device - A hard disk with existing data that
has not yet been assigned to a SAN Resource.
• Used by service enabled device - A hard disk with existing data that has
been assigned to a SAN Resource.
• Unassigned - A physical resource that has not been reserved yet.
• Not available for IPStor - A miscellaneous SCSI device that is not used by
the storage server (such as a scanner or CD-ROM).
• System - A hard disk where system partitions exist and are mounted (i.e.
swap file, file system installed, etc.).
• Reserved for Striped Set - Used in a disk striping configuration.
The following table describes the icons that are used to describe physical resources:
Icon Description
The V icon indicates that this disk has been virtualized or is reserved for
a virtual disk.
The D icon indicates that this is a direct device or is reserved for a direct
device.
The a icon indicates that this device is used in the logical resource that is
currently being highlighted in the tree.
The red arrow indicates that this Fibre Channel HBA is down and cannot
access its storage.
The V icon to the left of the device indicates that this storage pool is
comprised of virtual devices. An S icon would indicate that it is comprised
of service enabled devices and a D icon would indicate that it is
comprised of direct devices.
The C icon to the right of the device indicates that this storage pool is
designated for SafeCache resources.
The G icon to the right of the device indicates that this is a general
purpose storage pool which can be used for any type of resource.
The H icon to the right of the device indicates that this storage pool is
designated for HotZone resources.
The J icon to the right of the device indicates that this storage pool is
designated for CDP journal resources.
The N icon to the right of the device indicates that this storage pool is
designated for snapshot resources.
The R icon to the right of the device indicates that this storage pool is
designated for continuous replication resources.
The S icon to the right of the device indicates that this storage pool is
designated for SAN resources and their corresponding replicas.
The physical disk appearing in color indicates that it is local to this server.
The V indicates that the disk is virtualized for this server. If there were a
Q on the icon, it would indicate that this disk is the quorum disk that
contains the configuration repository.
You can use one of FalconStor’s disk preparation options to change the category of
a physical device. This is important to do if you want to create a logical resource
using a device that is currently unassigned.
• The storage server detects new devices when you connect to it. When they
are detected you will see a dialog box notifying you of the new devices. At
this point you can highlight a device and press the Prepare Disk button to
prepare it.
The Physical Devices Preparation Wizard will help you to virtualize, service-
enable, unassign, or import physical devices.
• At any time, you can prepare a single unassigned device by doing the
following: Highlight the device, right-click, select Properties and select the
device category. (You can find all unassigned devices under the Physical
Resources/Adapters node of the tree view.)
• For multiple unassigned devices, highlight Physical Resources, right-click
and select Prepare Disks. This launches a wizard that allows you to
virtualize, unassign, or import multiple devices at the same time.
If you have an IDE drive that you want to virtualize and use as storage, you must
create a block device from it. To do this:
1. Right-click on Block Devices (under Physical Devices) and select Create Disk.
2. Select the device and specify a SCSI ID and LUN for it.
The defaults are the next available SCSI ID and LUN.
Note: Do not change the hostname if you are using block devices. If you do, all
block devices claimed by CDP will be marked offline and seen as foreign devices.
Rescan adapters
If you want to discover new devices without scanning existing devices, click the
Discover New Devices radio button and then check the Discover new devices
only without scanning existing devices checkbox. You can then specify
additional scan details.
Import a disk
You can import a ‘foreign’ disk into a CDP appliance. A foreign disk is a virtualized
physical device containing FalconStor logical resources previously set up on a
different storage server. You might need to do this if a storage server is damaged
and you want to import the server’s disks to another storage server.
When you right-click on a disk that CDP recognizes as ‘foreign’ and select the
Import option, the disk’s partition table is scanned and an attempt is made to
reconstruct the virtual drive out of all of the segments.
If the virtual drive was constructed from multiple disks, you can highlight Physical
Resources, right-click and select Prepare Disks. This launches a wizard that allows
you to import multiple disks at the same time.
As each drive is imported, the drive is marked offline because it has not yet found all
of the segments. Once all of the disks that were part of the virtual drive have been
imported, the virtual drive is re-constructed and is marked online.
Importing a disk preserves the data that was on the disk but does not preserve the
client assignments. Therefore, after importing, you must either reassign clients to
the resource or use the File menu --> Restore Configuration option.
Notes:
• The GUID (Global Unique Identifier) is the permanent identifier for each
virtual device. When you import a disk, the virtual ID, such as
SANDisk-00002, may be different from the original server. Therefore, you
should use the GUID to identify the disk.
• If you are importing a disk that can be seen by other storage servers, you
should perform a rescan before importing. Otherwise, you may have to
rescan after performing the import.
SCSI aliasing
Repair is the process of removing one or more physical device paths from the
system and then adding them back. Repair may be necessary when a device is not
responsive which can occur if a storage controller has been reconfigured or if a
standby alias path is offline/disconnected.
If a path is faulty, adding it back may not be possible.
To repair paths to a device:
If all paths are online, the following message will be displayed instead: “There
are no physical device paths that can be repaired.”
Logical Resources
Logical resources are all of the resources defined on the storage server, including
SAN Resources and Groups.
SAN Resources
SAN logical resources consist of sets of storage blocks from one or more physical
hard disk drives. This allows the creation of logical resources that contain a portion
of a larger physical disk device or an aggregation of multiple physical disk devices.
Clients do not gain access to physical resources; they only have access to logical
resources. This means that an administrator must configure each physical resource
to one or more logical resources so that they can be assigned to the clients.
When you highlight a SAN Resource, you will see a small icon next to each device
that is being used by the resource.
In addition, when you highlight a SAN Resource, you will see a GUID field in the
right-hand pane.
The GUID (Global Unique Identifier) is the permanent identifier for this virtual device.
The virtual ID, SANDisk-00002, is not. You should make note of the GUID, because,
in the event of a disaster, this identifier will be important if you need to rebuild your
system and import this disk.
Groups
Groups are sets of drives (virtual drives and service-enabled drives) that are
grouped together for SafeCache or snapshot synchronization purposes. For
example, when one drive in the group is to be replicated or backed up, the entire
group will be snapped together to maintain a consistent image.
Write caching
You can leverage a third party disk subsystem's built-in caching mechanism to
improve I/O performance. Write caching allows the third party disk subsystem to
utilize its internal cache to accelerate I/O.
To write cache a resource, right-click on it and select Write Cache --> Enable.
Replication
The Incoming and Outgoing objects under the Replication object display information
about each server that replicates to this server or receives replicated data from this
server. If the server’s icon is white, the partner server is "connected" or "logged in". If
the icon is yellow, the partner server is "not connected" or "not logged in".
When you highlight the Replication object, the right-hand pane displays a summary
of replication to/from each server.
For each replica disk, you can promote the replica or reverse the replication. Refer
to the Replication chapter in the CDP Reference Guide for more information about
using replication.
SAN Clients
SAN Clients are the actual file and application servers that utilize the storage
resources via the storage server.
These SAN Clients access their storage resources via iSCSI initiators (for iSCSI) or
HBAs (for Fibre Channel or iSCSI). The storage resources appear as locally
attached devices to the SAN Clients’ operating systems (Windows, Linux, Solaris,
etc.) even though the devices are actually located at the storage server site.
When you highlight a specific SAN client, the right-hand pane displays the Client ID,
type, and authentication status, as well as information about the client machine.
The Resources tab displays a list of SAN Resources that are allocated to this client.
The adapter, SCSI ID and LUN are relative to this CDP SAN client only; other clients
that may have access to the SAN Resource may have different adapter SCSI ID and
LUN information.
You can change the ACSL (adapter, channel, SCSI, LUN) for a SAN Resource
assigned to a SAN client if the device is not currently attached to the client. To
change, right-click on the SAN Resource under the SAN Client object (you cannot
do this from the SAN Resources object) and select Properties. You can enter a new
adapter, SCSI ID, or LUN.
By default, only the root user and IPStor admins can manage SAN resources,
groups, or clients. While IPStor users can create new SAN Clients, if you want an
IPStor user to manage an existing SAN Client, you must grant that user access. To
do this:
1. Open a web browser on the machine on which you wish to install the console.
If Java is not detected, you will be prompted to install the appropriate version of
JRE. Once detected, the FalconStor Management Console automatically
installs.
Notes:
• A security warning may display regarding the digital signature for the Java
applet. Click Run to accept the signature.
• If you are using Windows Server 2008 or Vista and plan to use the
FalconStor Management Console, use the Java console (with Java 1.6)
for best results.
If you plan to reuse the console on this machine, repeat steps 1 and 2 above or
create a shortcut from Java when prompted.
To make sure you are prompted to create a shortcut to Java Web Start, follow the
steps below:
2. Click on the Advanced tab and select the prompt user or Always allow radio
button under Shortcut Creation.
Console options
To set options for the console:
You can create a menu in the FalconStor Management Console from which you can
launch external applications. This can add to the convenience of FalconStor’s
centralized management paradigm by allowing your administrators to start all of their
applications from a single place. The Custom menu will appear in your console
along with the normal menu (between Tools and Help).
To create a custom menu:
2. Click Add and enter the information needed to launch this application.
Menu Label - The application title that will be displayed in the Custom menu.
Command - The file (usually an .exe) that launches this application.
Command Argument - An argument that will be passed to the application. If you
are launching an Internet browser, this could be a URL.
Menu Icon - The graphics file that contains the icon for this application. This will
be displayed in the Custom menu.
When you have finished taking snapshots, you can expand Snapshots in the left
pane to see the following two additional nodes:
• Disks - Expand this node to view a list of all protected disks and partitions. If
a disk or partition is part of a group, the name of that group displays in
brackets after the disk or partition name.
• Groups - Expand this node to view a list of all groups. Snapshots taken of a
group display for all members of the group.
If you take a snapshot of an individual disk or partition, and click that disk or partition
name within the Snapshots --> Disks node, the right panel displays the following
information about the snapshot:
• The snapshot number
• The date the snapshot was taken
• The time the snapshot was taken
• The status of the snapshot - Yes if it has been mounted or No if it has not.
• The name of the group (if the snapshot was taken of a group rather than of
an individual disk or partition).
If you take a snapshot of a group and click that group’s name within the Snapshots -
-> Groups node, the right panel displays the following information about the
snapshot:
• The snapshot number
• The date the snapshot was taken
• The time the snapshot was taken
Depending on the amount of changed data, it might take several minutes for the
snapshot to appear. If the snapshot does not appear automatically, right-click the
group node and then click Refresh to update the screen.
For group snapshots: If the group uses periodic mode and is configured to take a
snapshot automatically after each synchronization, taking a snapshot manually
actually generates two snapshots. The first is the result of the disks being
synchronized; the second is the result of the snapshot being taken manually.
Events
You can also get the status of snapshots by highlighting Events in the DiskSafe console.
You'll be able to browse all events, including the scheduled snapshot creation status.
If the DiskSafe events list is too long, you can right-click Events and select Set Filter.
There, you can set a Time Range, display the events by Category, Type or Owner,
or use the Description search to find specific information.
Windows event log
You can also view snapshot status from the Windows event log.
You will see the following in the Windows event log to detail the snapshot process:
1. DiskSafe starts the snapshot and passes the command to the FalconStor
Intelligent Management Agent (IMA). IMA gets the drive information of the
protected disk or partition.
2. SDM/IMA sends the drive information to the proper snapshot agent, after which
you will see the logs describing the application process.
3. The snapshot agent successfully puts the database into backup mode and then
tells the CDP appliance to take the snapshot on the DiskSafe mirror disk.
4. DiskSafe confirms the snapshot creation and then reports the snapshot
information.
To confirm the Exchange snapshot process, you can check the Windows Event log.
The Snapshot Agent will send the backup command to each storage group on the
protected disk. You can find these logs after the appctrl event:
1. The snapshot agent sends the full backup command to the storage group and
then Exchange Extensible Storage Engine (ESE) starts the full backup process.
3. Exchange ESE processes the log files. The snapshot agent does not request
log truncation, so other Exchange backup processes are not affected.
4. Exchange ESE completes the backup process on a storage group. You may see
the same process on another storage group.
You can see numerous events from the SQL server in the Windows Event log. The
SQL snapshot agent will send the checkpoint command to each of the SQL
databases on the protected disk; you can check whether the agent sent the
checkpoint commands to all databases successfully. For example, you will see
events similar to the ones shown below.
You can also get detailed information from the agent trace log. You can find
agttrace.log under the IMA installation folder. It is usually installed in C:\Program
Files\FalconStor\IMA. There, you can see the connection to the SQL instance
and the list of SQL databases for which the checkpoint was successfully created.
You can get detailed information from the agent trace log. You can find agttrace.log
under the IMA installation folder. It is usually installed in C:\Program
Files\FalconStor\IMA. There, you can see the connection to the Oracle system and
the ALTER TABLESPACE BEGIN BACKUP and ALTER TABLESPACE END BACKUP
commands issued to all tablespaces on the protected disk.
You can get the detailed information from the agent trace log. You can find
agttrace.log under the IMA installation folder. It is usually installed in C:\Program
Files\FalconStor\IMA. There, you can see the connection to the Domino system and
the backup command to all NSF databases on the protected disks. You can also
check the snapshot agent communication from the log of the Domino server.
Reports
The CDP appliance retains information about the health and behavior of the physical
and virtual storage resources on the Server. CDP provides an event log and reports
that offer a wide variety of information.
CCM Reports
CDP Reports
An Event Log is maintained to record system events and errors. In addition, the
appliance maintains performance data on the individual physical storage devices
and SAN Resources. This data can be filtered to produce various reports through
the FalconStor Management Console.
The Event Log details significant occurrences during the operation of the storage
server. The Event Log can be viewed in the FalconStor Management Console when
you highlight a Server in the tree and select the Event Log tab in the right pane.
The columns displayed are:
Event Message - A text description of the event describing what has occurred.
When you initially view the Event Log, all information is displayed in chronological
order (most recent at the top). If you want to reverse the order (oldest at top) or
change the way the information is displayed, you can click on a column heading to
re-sort the information. For example, if you click on the ID heading, you can sort the
events numerically. This can help you identify how often a particular event occurs.
By default, all informational system messages, warnings, and errors are displayed.
To filter the information that is displayed, right-click on a Server and select Event Log
--> Filter.
You can refresh the current Event Log display by right-clicking on the Server and
selecting Event Log --> Refresh.
You can print the Event Log to a printer or save it as a text file. These options are
available (once you have displayed the Event Log) when you right-click on the
Server and select the Event Log options.
CDP Reports
The FalconStor reporting feature includes many useful reports including allocation,
usage, configuration, and throughput reports. A description of each report follows.
• Client Throughput Report - displays the amount of data read/written
between this Client and SAN Resource. To see information for a different
SAN Resource, select a different Resource Name from the drop-down box
in the lower right hand corner.
• Delta Replication Status Report - displays information about replication
activity, including compression, encryption, microscan and protocol. It
provides a centralized view for displaying real-time replication status for all
disks enabled for replication. It can be generated for an individual disk,
multiple disks, source server or target server, for any range of dates. This
report is useful for administrators managing multiple servers that either
replicate data or are the recipients of replicated data.
• Disk Space Usage Report - displays the amount of disk space being used
by each SCSI adapter.
• Disk Usage History Report - allows you to create a custom report with the
statistical history information collected. You must have “statistic log” enabled
to generate this report. The data is logged once a day at a specified time.
The data collected is a representative sample of the day. In addition, if
servers are set up as a failover pair, the “Disk usage history” log must be
enabled on both servers in order for data to be logged during failover. In
a failover state, the data logging time set on the secondary server is
followed.
• Fibre Channel Configuration Report - displays information about each
Fibre Channel adapter, including type, WWPN, mode (initiator vs. target),
and a list of all WWPNs with client information.
• Physical Resources Configuration Report - lists all of the physical
resources on this Server, including each physical adapter and physical
device. To make this report more meaningful, you can rename the physical
adapter. For example, instead of using the default name, you can use a
name such as “Target Port A”.
• Physical Resources Allocation Report - displays the disk space usage
and layout for each physical device.
• Physical Resource Allocation Report - displays the disk space usage and
layout for a specific physical device.
• Resource IO Activity Report - displays the input and output activity of
selected resources. The report options and filters allow you to select the
SAN Resource and Client to report on within a particular date/time range.
• SCSI Channel Throughput Report - displays the data going through each
SCSI channel on the Server. This report can be used to determine which
SCSI bus is heavily utilized and/or which bus is underutilized. If a particular
bus is too heavily utilized, it may be possible to move one or more devices
to a different or new SCSI adapter. If a SCSI adapter has multiple channels,
each channel is measured independently.
You can run a global replication report in CDP by highlighting the Servers object and
selecting Replication Status Reports --> New. Then follow the instructions in the
Report wizard. For additional information, refer to the Reports chapter in the CDP
Reference Guide.
Once you have protected a disk or partition, you can restore data either to your
original disk or to another disk. The best method to use depends on your restore
objectives.
This chapter discusses data recovery using DiskSafe for Windows and DiskSafe for
Linux. For additional details regarding DiskSafe, refer to the DiskSafe User Guide.
Available recovery methods include the following:
• Restore selected folders or files
If you are using snapshots and accidentally deleted a folder or file that you
need, or if you want to retrieve some older information from a file that has
changed, you can access the snapshot that contains the desired data and
copy it to your local disk.
This procedure can also be used to try different “what if” scenarios—for
example, changing the format of the data in a file—without adversely
affecting the data on your local disk.
• Restore an entire local data disk or partition
If you protected a data disk or partition—that is, a disk or partition that is not
being used to boot the host and has no page file on it—and your system hasn’t
failed, you can restore that disk or partition using DiskSafe. You might need
to do this if the disk has become corrupted or the data has been extensively
damaged. The entire disk or partition will be restored from the mirror or a
snapshot, and can be restored to either your original disk or another disk.
This technique can also be used to copy a system disk or partition to
another disk as long as it is not a disk from which you are currently booting.
You can continue to use your computer while the data is being restored,
although you cannot use any applications or files located on the disk or
partition being restored.
Keep in mind that when you restore a local disk or partition to a new disk,
the protection policy refers to the new disk instead of the original local disk.
• Restore an entire local system disk or partition
If you need to restore your system disk or partition—that is, the disk you
typically boot from—you can do so using the Recovery CD. This is
particularly useful if the hard disk or operating system has failed and been
repaired or replaced. The entire disk or partition will be restored from either
the mirror or from a selected snapshot, and can be restored to either your
original disk or another disk. However, you won’t be able to use your
computer until all the data is restored.
In addition to allowing you to restore data, DiskSafe also enables you to boot from a
remote mirror or snapshot and continue working while your failed hard disk is being
repaired or replaced. Once the hard disk is available again, you can restore your
data using either DiskSafe or the Recovery CD. For more information, refer to
‘Accessing data after system failure’ on page 142.
Note: If you are using a remote mirror with the Fibre Channel protocol, and the
hard disk or operating system fails, you must remotely boot the host using your
Fibre Channel HBA and then restore the data using DiskSafe. The Recovery CD
does not currently support the Fibre Channel protocol. For more information, refer
to ‘Accessing data after system failure’ on page 142.
Caution: Do not restore a protected remote virtual disk. This can adversely
affect the storage server’s performance.
Note: If you restore the data partition before the system partition has completed
initial synchronization, a warning message displays after restarting to alert you
that the disk needs to be checked. This warning appears every time a disk is not
consistent with the file system. You can click Ignore to bypass the system check.
Restore a file
2. In the right pane, right-click the disk or partition that you want to restore, and
then click Restore.
The DiskSafe Restore Wizard launches to guide you through the restore
process.
4. Select File to restore a file from a backup on your storage server and click Next.
5. Select the snapshot from which you want to restore your file and click Next and
then Finish.
The snapshot is mounted to the local file system with a new drive letter assigned
allowing you to select the file to restore.
1. Expand DiskSafe --> Protected Storage and then click Disks or Groups.
If the disk or partition whose data you want to restore is part of a group, expand
Groups and click the group that contains the disk or partition that you want to
restore.
2. In the right pane, right-click the disk or partition that you want to restore, and
then click Restore.
The DiskSafe Restore Wizard launches to guide you through the restore
process.
5. Select the Mirror image or snapshot from which you want to restore and click
Next.
A progress window displays as data from the mirror or snapshot is copied to the
specified location.
You can cancel this operation by clicking Cancel. However, this will leave the
disk or partition in an incomplete state, and you will have to restore it again
before you can use it.
Once complete, a screen displays indicating a successful or failed restore.
8. If you are restoring a dynamic volume that spans multiple disks, repeat the
above steps for each affected disk.
Make sure that no data is written to the dynamic volume while you are restoring
each disk.
You cannot restore an entire group; however, you can restore each group member
individually. If the disk or partition whose data you want to restore is part of a group,
expand Groups and click the group that contains the disk or partition that you want
to restore. Only snapshots from the selected group display on the snapshot list.
When you restore any member of a group, protection for the group continues
automatically. The group member being restored automatically leaves the group.
You will need to make sure the mirror disks from the storage server are consistent
with the client.
Mount a snapshot
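The mount command is not reproduced here. Assuming it mirrors the unmount syntax shown below (an unverified assumption; refer to the DiskSafe User Guide for the authoritative syntax), a mount invocation would look like:
Syntax:
dscli snapshot mount <DiskID> timestamp=<#>
Example:
# dscli snapshot mount sdc timestamp=1219159421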
Unmounting a snapshot
You can unmount the snapshot using its timestamp. The corresponding TimeView is
unassigned and deleted.
Syntax:
dscli snapshot unmount <DiskID> timestamp=<#>
Example:
# dscli snapshot unmount sdc timestamp=1219159421
Restore a disk
This command is used to restore from a mirror disk to a primary disk or new target
disk. You can perform a complete restore or a restore to a particular snapshot using
a timestamp. The target disk list can be found using the “restoreeligible” option of
the disk list command.
Syntax:
dscli disk restore <MirrorDiskID> <TargetDiskID>
[timestamp=<#>][-force]
Example:
# dscli disk restore sdb sdd
Note: The force option is used when restoring a partition protection to a new
disk that has not been previously partitioned and which has a file system on it.
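For instance, the timestamp and force options can be combined per the syntax above to restore a specific snapshot to a freshly prepared target disk (the disk IDs and timestamp here are illustrative):
# dscli disk restore sdb sdd timestamp=1219159421 -force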
Group Restore
Refer to the Restore a disk command. The members of a group must leave the
group and be restored individually.
Group Rollback
The group rollback command is used to rollback the primary disks in the group to a
selected snapshot. A rollback to the selected snapshot is done on each mirror disk
and subsequently a full restore is performed from the mirror disk to the primary disk.
Protections are resumed automatically after a successful rollback.
Syntax:
dscli group rollback <groupname> timestamp=<#>
Example:
# dscli group rollback dsgroup1 timestamp=1224479576
Stop Group Rollback
The stop group rollback command is used to stop an active rollback. This command
must be used with caution as the primary disk will be left in an inconsistent state.
Syntax:
dscli group stoprollback <groupname>
Example:
# dscli group stoprollback dsgroup1
Recovery CD
When you cannot start your Windows computer, another option is to use the
FalconStor DiskSafe Recovery CD to restore a disk or partition. You can obtain the
Recovery CD from Technical Support in the event of a computer disaster.
Using the Recovery CD, you can restore data using a recovery point from within the
FalconStor DSRecPE Recovery Wizard. You can restore both your system disk and
data disks, and you can restore them to the original hard disk or another disk. You
can restore either the mirror itself or a snapshot (i.e. a point-in-time image) of the
data.
You can also perform device management, network configuration, or access the
command console. The Device Management option allows you to load any device
driver (with an .inf extension). The Network Configuration option allows you to set up
your Ethernet adapter configuration.
The only limitations of the Recovery CD are that you cannot use it to restore data
from a local disk or with Fibre Channel connections.
1. Launch DiskSafe
Make sure your computer is configured to boot from the CD-ROM drive. If you are
restoring the system partition, it is recommended that you restore to the original
partition ID. If the disk is protected with encryption, un-mount any snapshots before
using the Recovery CD. It is also recommended that you restore to similar media
(i.e. IDE to IDE or SATA to SATA). If you need to start your computer using this
recovery tool, follow the instructions below:
4. While the computer is starting, watch the bottom of the screen for a prompt that
tells you how to access the BIOS. Generally, you will need to press Del, F1, F2,
or F10.
Note: The term boot refers to the location where software required to start the
computer is stored. The Recovery CD contains a simple version of the
Windows operating system. By changing the boot sequence of your computer
to your CD drive, the computer can then load this version of Windows. Boot is
also used synonymously with start.
6. Change the CD or DVD drive to be the first bootable device on the list.
8. As soon as you see the prompt Press any key to boot from CD appear, press a
key to start the Recovery CD.
Once you successfully restart your system, an End User License Agreement
displays.
9. Accept the end user license agreement to launch the Recovery CD.
Note: If you do not accept the license agreement, the system reboots.
To restore a disk or partition using the Recovery CD, select the Recovery Wizard
option. The Recovery Wizard guides you through recovering your data from a
remote storage server. You will be asked to select the remote storage server on
which your disk image is located along with any snapshots. You will then be able to
select the local disk or partition to use as your recovery destination.
Enter your storage server IP address, client name (i.e. computer name),
recovery password and click Connect.
After you have successfully connected to the storage server, the selection
screen displays with the available source and destination disks.
If all disks are not displayed, click the Rescan Disk button to refresh the list. You
can also click the Create Partition button to manage your partition layout.
3. Once you have selected the source and destination pair, click Restore.
Note: For Windows Vista and 2008: If you have previously flipped the disk
signature for the current mirror, you may need to insert the Windows Vista or
2008 operating system CD after restoring the system via the DiskSafe
Recovery CD to repair the system before boot up.
All selected pairs will be restored in the sequence selected via the Clone Agent.
Notes:
• Certain combinations of HBAs and controllers do not support booting
remotely. For more information, refer to the DiskSafe support page on
www.falconstor.com.
• If Windows wasn’t installed on the first partition of the first disk in the
system, you can remotely boot only if you protected the entire first disk in
the system. Although Windows might reside on other disks or partitions,
certain files required for booting reside only on the first partition of the
first disk.
• Booting from a snapshot rather than the mirror itself is recommended
when booting using a Fibre Channel HBA, as the image will be complete
and intact. If the system failure occurred during synchronization, the
mirror might not be a complete, stable image of the disk.
When the failed hard disk is repaired or replaced, you can either restore all the data
to it using the Recovery CD (as described in ‘Recovery CD’ on page 137), or you
can run DiskSafe while remotely booting and restore all the data using that
application.
Caution: When you boot remotely, do not use DiskSafe for any operation other
than restoring.
2. Use the appropriate procedure for your system to remotely boot using PXE.
For example, you might press F12 when the boot menu appears.
1. If you plan to boot from a snapshot, mount the snapshot and assign it to the host
using the storage server software.
Note: To boot remotely from Windows Vista or 2008, you must switch the disk
signature by running the following CLI command on the storage server for the
mirror or TimeView disk prior to boot (and when the host is powered off):
iscli setvdevsignature -s <server-name> -v <vdevid> -F
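For example, assuming a server named cdpserver and virtual device ID 262 (both values are illustrative):
iscli setvdevsignature -s cdpserver -v 262 -F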
2. At the host, physically disconnect the failed hard disk from the system.
For more information, refer to the documentation for your system.
3. Boot the host using the HBA, and then use the appropriate procedure for your
HBA to connect to the mounted snapshot or mirror on the storage server.
For more information, refer to the documentation for your HBA.
5. If you protected other disks or partitions in addition to the system disk, assign
drive letters to those disks or partitions.
Notes:
• If you boot from a mounted snapshot, do not dismount that snapshot
either via the storage server or by removing protection for the disk via
DiskSafe and then clicking Yes when prompted to dismount any
mounted snapshots. If you do, your system will no longer function,
and you will have to repeat this procedure in order to boot from the
storage server once more.
• In Windows Vista and 2008, do not have the local system disk and the
mounted snapshot connected at the same time during boot up. Otherwise,
you may not be able to remotely boot again.
1. Shut down the host and install the repaired or replaced hard disk.
Notes:
• If you replaced the original hard disk, the new disk must be the same
size as or larger than the mirror.
• If you are restoring a system disk, the system to which you are
restoring the data must be identical to the original system. For
example, if the original system had a particular type of network
adapter, the system to which you are restoring the data must have the
exact same type of network adapter. Otherwise, the restored files will
not operate properly.
• In Windows Vista and 2008, format the hard disk before installing it.
3. Run DiskSafe and restore the protected data (as described in ‘Restore data
using DiskSafe for Windows’ on page 132).
If you need to restore the whole system to the point-in-time snapshot, run
DiskSafe and restore the data. If you need to restore the whole system that is
currently running in remote boot, remove the existing system protection, and
then create a new protection. The primary will be the disk that is currently
booting up and the mirror is the local hard disk.
4. After the recovery is complete, shut down the host and then use your storage
server software to unassign the mirror from the host.
For more information, refer to the CDP Reference Guide. If you don’t have
access to the storage server, contact your system administrator.
Note: If you are using Vista or Windows 2008, you will need to remotely boot
from a TimeView, and then restore from a snapshot to a new or the original
disk with the Restore disk signature/GUID option checked. Otherwise, after
restoration, the system will not boot.
5. Start the host, go to the BIOS and disable boot from HBA.
For more information, refer to the documentation for your HBA.
6. Start the host, start DiskSafe, remove protection for the disk or partition that you
just restored, and then shut down the host again.
Note: After starting the host, if you are prompted to restart it, do so before
starting DiskSafe.
8. Start the host, start DiskSafe, and protect the disk or partition once more (as
described in ‘Protect a disk or partition with DiskSafe’ on page 29), using the
existing mirror on the storage server as the mirror once again.
Scenario 2: In the event of a catastrophic logical volume error that results in total
volume data loss, the TimeMark Rollback method should be used. Generally, this
scenario is used when it is decided that the current primary logical volume is
useless, and a full "restore" is necessary to reset it back to a known good
point-in-time.
Scenario 3: In the event of a minor data loss, such as inadvertent deletion of a file
or directory, it is NOT desirable to use TimeMark Rollback because all of the "good
changes" are also rolled back. Therefore, for this case, it is more desirable to create
a virtual view of the protected Volume Group as of a known good point-in-time and
then mount the logical volume in this view in order to copy back the deleted file(s).
This virtual view is called a TimeView, which is created using a specified TimeMark.
This TimeView is an exact representation of the entire Volume Group, and contains
every logical volume inside the group. The Volume Group name of the TimeView is
identical to the primary Volume Group except with a "_tv" appended to the name.
Once the data has been copied back, the TimeView is discarded. Because the
TimeView is virtual, there is no massive copying of data (no extra storage is
required) and the time to mount the TimeView Volume Group is fast.
1. Run the recover_vg script to recover a volume group using the FalconStor CDP
rollback feature. Ex. "recover_vg -rb vg01"
1. Run the recover_lv script to recover a logical volume using the FalconStor CDP
rollback feature. Ex. "recover_lv <IP address> -rb fslv05"
Recover a Volume Group using TimeMark Rollback for AIX HACMP LVM
1. Run the recover_vg_ha script to recover a volume group using the FalconStor
CDP rollback feature. Ex. "recover_vg_ha <IP address> <Other Node> -rb
sharevg_01"
1. Run the recover_vg script to recover a volume group using the FalconStor CDP
TimeMark feature. Ex. "recover_vg -tv vg01"
2. Select the TimeMark timestamp for the TimeView to be created. (See the usage
example below.)
Usage example of the recover_vg script using a TimeView for AIX LVM:
You may now verify the contents of each mount point to confirm the data is valid.
1. Run the recover_lv script to recover a logical volume using the FalconStor CDP
TimeMark feature. Ex. "recover_lv -tv vg01"
You may now verify the contents of each mount point to confirm the data is valid.
1. Run the recover_vg script to recover a volume group using the FalconStor CDP
TimeMark feature. Ex. "recover_vg vaix3 -tv vg01"
2. Select the TimeMark timestamp for the TimeView to be created. (See the usage
example below.)
An example of the recover_vg_ha script using a TimeView for the AIX HACMP
Shared Volume Group:
# recover_vg_ha 192.168.15.96 vaix3 -tv sharevg_01
### AVAILABLE TIMEVIEW TIMESTAMP FOR VOLUME GROUP sharevg_01 ###
20081016220547 (10/16/2008 22:05:47) 64.00 KB valid test 1
20081016220555 (10/16/2008 22:05:55) 64.00 KB valid test 2
20081016220601 (10/16/2008 22:06:01) 2.30 MB valid test 3
20081016221000 (10/16/2008 22:10:00) 64.00 KB valid
Enter TimeMark timestamp for TimeView on Terayon-sharevg_01-Protection
20081016221000
Creating TimeView Terayon-sharevg_01_tv1-Protection with timestamp (10/16/2008 22:10:00)...
Assigning Virtual Disk Terayon-sharevg_01_tv1-Protection vid 262 to vaix3...
Rescanning DynaPath Devices on Node vaix3...
Creating Volume Group for TimeView Terayon-sharevg_01_tv1-Protection on hdisk7...
Running fsck on /dev/tv1_lv00 on Node vaix3...
Note: The script will only list the TimeMark timestamps that do not yet have a
TimeView.
You may now verify the contents of each mount point to confirm the data is valid.
Let's assume we want to pause the CDP mirror on volume group vg01.
Usage example of set_mirror_vg to pause the CDP mirror:
set_mirror_vg <IP address> -p vg01
Now let’s assume we want to resume the mirror on volume group vg01.
Usage example of set_mirror_vg to resume the CDP mirror:
set_mirror_vg <IP address> -r vg01
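For example, to pause and later resume the mirror using the storage server address from the earlier AIX examples (the address is illustrative for your environment):
# set_mirror_vg 192.168.15.96 -p vg01
# set_mirror_vg 192.168.15.96 -r vg01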
4. Install the special device file for the new IPStor disk.
insf -eC disk
Scenario 2: If there is a catastrophic logical volume error that results in total volume
data loss, the TimeMark Rollback method should be used. Generally, this scenario is
used whenever it is decided that the current primary logical volume is useless, and a
full "restore" is necessary to reset it back to a known good point-in-time.
Scenario 3: For minor data loss, such as inadvertent deletion of a file or directory, it
is NOT desirable to use TimeMark Rollback because all of the "good changes" are
also rolled back. Therefore, for this case, it is more desirable to create a virtual view
of the protected Volume Group as of a known good point-in-time and then mount the
logical volume in this view in order to copy back the deleted file(s). This virtual view
is called a TimeView, which is created using a specified TimeMark. This TimeView is
an exact representation of the entire Volume Group, and contains every logical
volume inside the group. The Volume Group name of the TimeView is identical to
the primary Volume Group except with a "_tv" appended to the name. After copying
back the data, the TimeView is then discarded. Because the TimeView is totally
virtual, there is no massive copying of data (no extra storage is required) and the
time to mount the TimeView Volume Group is fast.
1. Run the recover_vg script to recover a volume group using the FalconStor CDP
rollback feature. Ex. "recover_vg -rb vg01"
Unmounting /mnt/vg01_lvol1...
Unmounting /mnt/vg01_lvol2...
Unmounting /mnt/vg01_lvol3...
1. Run the recover_dg script to recover a disk group using the FalconStor CDP
rollback feature. Ex. "recover_dg <IP Address> -rb dg01"
Unmounting /mnt/dg01_lvol2...
Unmounting /mnt/dg01_lvol3...
1. Run the recover_vg script to recover a volume group using the FalconStor CDP
TimeMark feature. Ex. "recover_vg -tv vg01"
You may now verify the contents of each mount point to confirm the data is valid.
1. Run the recover_dg script to recover a disk group using the FalconStor CDP
TimeMark feature. Ex. "recover_dg -tv vg01"
2. Select the TimeMark timestamp for the TimeView to be created. (See the usage
example below.)
Usage example of the recover_dg script using a TimeView for HP-UX
VxVM:
You may now verify the contents of each mount point to confirm the data is valid.
Note: The script will only list the TimeMark timestamps that do not yet have a
TimeView.
You may now verify the contents of each mount point to confirm the data is valid.
4. Install the special device file for the new IPStor disk.
insf -eC disk
The first step in recovery is to have a mirror set up for the Solaris machine.
2. Add the Solaris machine as a client and assign the client to the SAN Resource.
3. Use the devfsadm command to perform a device scan on Solaris and then use
the format command to verify that the client claimed the device.
5. Create two stripes for the two sub-mirrors as d21 and d22:
# metainit d21 1 1 c2t6d0s2
# metainit d22 1 1 c2t7d0s2
6. Specify the primary disk that is to be mirrored by creating a mirror device (d20)
using one of the sub-mirrors (d21):
# metainit d20 -m d21
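A standard SVM setup would then attach the second sub-mirror (d22) to the mirror device; this step is not shown above but uses the stock Solaris Volume Manager command:
# metattach d20 d22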
When you want to perform a roll back, the primary disk and the mirror disk (the CDP
virtual device), will be out of sync and the mirror will need to be broken. In Solaris
SVM this can be achieved by placing the primary and mirror device into a logging
mode.
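With the Availability Suite remote mirror software used in the steps below, logging mode is typically entered with the -l option against the same configuration file (a sketch based on the standard sndradm options, not an excerpt from this procedure):
# sndradm -l -f /etc/opt/SUNWrdc/rdc.cf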
1. Disable the remote mirror software and discard the remote mirror:
rmshost1# sndradm -dn -f /etc/opt/SUNWrdc/rdc.cf
2. Edit the rdc.cf file to swap the primary disk information and the secondary
disk information. Unmount the remote mirror volumes:
rmshost1# umount mount-point
3. When the data is de-staged, mount the secondary volume in read-write mode so
your application can write to it.
5. Fix the "failure" at the primary volume by disabling logging mode using the
resynchronization command.
7. Roll back the secondary volume to its original pre-disaster state to match the
primary volume by using the sndradm -m copy or sndradm -u update
commands.
Keep the changes from the updated secondary volume and resynchronize so
that both volumes match using the sndradm -m r reverse copy or the
sndradm -u r reverse update commands.
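A concrete reverse-update invocation, assuming the same configuration file as step 1 (illustrative, not part of the original procedure):
# sndradm -n -u r -f /etc/opt/SUNWrdc/rdc.cf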
FalconStor Recovery Agents offer recovery solutions for your database and
messaging systems.
FalconStor® Message Recovery for Microsoft® Exchange (MRE) and Message
Recovery for Lotus Notes/Domino (MRN) expedite mailbox/message recovery by
enabling IT administrators to quickly recover individual mailboxes from point-in-time
snapshot images of their messaging server.
FalconStor® Database Recovery for Microsoft® SQL Server expedites database
recovery by enabling IT administrators to quickly recover a database from
point-in-time snapshot images of their SQL database.
IntegrityTrac is a validation tool that allows you to check the application data
consistency of snapshots taken from Microsoft Exchange servers before using them
for backup and recovery.
FalconStor® Recovery Agent for Microsoft® Volume Shadow-Copy Service (VSS)
enables IT administrators to restore volumes and volume groups from point-in-time
snapshots created by the FalconStor® Snapshot Agent for Microsoft® VSS.
Refer to the Recovery Agents User Guide for more information regarding how to
recover your data using the following products:
• Message Recovery for Microsoft Exchange
• Message Recovery for Lotus Notes/Domino
• Database Recovery for Microsoft SQL Server
• IntegrityTrac
• Recovery Agent for VSS
RecoverTrac
RecoverTrac allows you to create scripts that manage the recovery process for
multiple host machines in a group or "farm". In the event of an emergency,
RecoverTrac can quickly recover the hosts and help you bring them back online in
the required sequence, simultaneously or sequentially, to the best recovery point.
Refer to the FalconStor RecoverTrac User Guide for more information regarding the
FalconStor RecoverTrac disaster recovery tool.
Index
A
About this document 6
Acceptable throughput 36
Access control
SAN Client 116
Access rights
IPStor Admins 96
IPStor Users 96
SAN Client 116
Accounts
Manage 95
ACSL
Change 116
Activity Log 90
Adapters
Rescan 15, 108
Administrator
Management 95
Advanced custom settings 33
agent trace log 125
AIX
Logical Volume Manager 75
Alias 110
Allocate Disk 30
Auto Save 87, 94
automatic expansion policy 44
B
Balance performance and coverage 33
Benefits
24 x 7 server protection 5
Block devices 108
boot CD (see recovery CD)
booting remotely 142–146
C
Cache
Write 113
CDP journaling 57, 71, 80
Central Client Manager (CCM) 2
CHAP authentication 45
CHAP secret 98
Client
Definition 3
Installation
Linux 20
Windows 20
Pre-installation 20
Client Throughput Report 129
Clone Agent 141
columns, showing/hiding/re-ordering 28
Configuration repository 86
Configuration wizard 83
Connectivity 98
Console 81
Administrator Management 95
Change password 98
Connectivity 98
Custom menu 118
Definition 2
Discover IPStor Servers 82, 86
Import a disk 109
Log 118
Logical Resources 112
Options 117
Physical Resources 104
Replication 114
Rescan adapters 15, 108
SAN Clients 115
Save/restore configuration 86
Search 85
Server properties 89
Start 81
System maintenance 101
User interface 85
Continuous Data Protector (CDP) 1
Continuous mirror mode 33
Continuous mode 32
D
data
accessing after system failure 142–146
sorting 28
Deterioration threshold 36
Device Management option 137
Direct Attached Storage (DAS) 1
Disaster recovery
Import a disk 109
Save/restore configuration 86
Disk
Foreign 109
IDE 108
Import 109
System 105
T
Task creation 34
TCP 93
Thin Provisioning 1, 4, 19, 31
Throttle 93
Throughput
Test 110
TimeMark 44
policy 59, 167
Rollback 147, 160
timestamp 151, 164
TimeView 147, 160
mount_tv 154, 155, 165
recover_vg script 151, 164
Troubleshooting 120