CDP Administration Guide


FalconStor® Continuous Data Protector (CDP)

Administration Guide

FalconStor Software, Inc.


2 Huntington Quadrangle, Suite 2S01
Melville, NY 11747
Phone: 631-777-5188
Fax: 631-501-7633
Web site: www.falconstor.com

Copyright © 2001-2010 FalconStor Software. All Rights Reserved.


FalconStor Software, IPStor, HotZone, SafeCache, TimeMark, TimeView, EZStart, and the EZStart logo are either registered
trademarks or trademarks of FalconStor Software, Inc. in the United States and other countries.
Linux is a registered trademark of Linus Torvalds.
Windows is a registered trademark of Microsoft Corporation.
All other brand and product names are trademarks or registered trademarks of their respective owners.
FalconStor Software reserves the right to make changes in the information contained in this publication without prior notice. The
reader should in all cases consult FalconStor Software to determine whether any such changes have been made.
This product is protected by United States Patents Nos. 7,093,127 B2; 6,715,098; 7,058,788 B2; 7,330,960 B2; 7,165,145 B2; 7,155,585 B2; 7,231,502 B2; 7,469,337; 7,467,259; 7,418,416 B2; 7,406,575 B2, and additional patents pending.

9/10/10
Contents
Introduction
Concepts and components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2
Hardware/software requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5
Web Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6

Getting Started
Data Protection in Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8
Install DiskSafe for Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9
Uninstall DiskSafe for Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10
Silent Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10
License management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11
Install and configure FalconStor Snapshot Agents . . . . . . . . . . . . . . . . . . . . . . . . .12
Prepare host connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .13
Enable iSCSI target mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .13
Enable Fibre Channel target mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .13
Set QLogic ports to target mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .13
Prepare physical storage for use with CDP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .14
Present storage to the CDP appliance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .15
Rescan adapters (initiators) for the assigned storage . . . . . . . . . . . . . . . . . . . . . . .15
Prepare physical disks for virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .16
Create storage pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .17
Set Storage Pool properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .17
Virtualize storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .18
Create a virtual device SAN Resource . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .18
Prepare your client machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .19
Pre-installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .20
Windows client installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .20
Linux client installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .20
Prepare the AIX host machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .22
Install AIX FalconStor Disk ODM Fileset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .23
Install the AIX SAN Client and Filesystem Agent . . . . . . . . . . . . . . . . . . . . . . . . . .23
Prepare the CDP and HP-UX environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .25
Install the SAN Client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .26
Install the HP-UX file system Snapshot Agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . .26

Data Protection
Data Protection in a Windows environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .27
Use DiskSafe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .27
Protect a disk or partition with DiskSafe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .29
Protect a group of disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .40
Suspend or resume protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .42
Data protection in a Linux environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .44
Installing DiskSafe for Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .44

Using DiskSafe for Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .44
Protecting disks and partitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .45
Scheduled Disk Protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .47
DiskSafe for Linux operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .47
Root Disk Protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .48
Data Protection in SuSE Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .51
Pre-configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .51
Hardware/software requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .52
Set up the mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .52
Create a DOS partition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .53
Create an EVMS segment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .53
Create a RAID-1 mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .54
Create an EVMS volume in the RAID region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .54
Create a file system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .55
Mount a RAID device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .55
Recovery and rollback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .56
Data Protection in Solaris . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .58
Hardware/software requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .58
Set up the mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .58
Break the mirror for rollback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .59
Data Protection in Red Hat Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .60
Hardware/software requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .60
Supported kernels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .60
Initialize a disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .61
Set up the mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .61
Create physical volumes for the primary and mirror disks . . . . . . . . . . . . . . . . . . . .62
Recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .63
Data protection in an HP-UX environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .65
Download and configure the protection and recovery scripts for HP-UX LVM . . . .65
Download and configure the protection and recovery scripts for HP-UX VxVM . . .67
Protect your servers in a HP-UX environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . .69
Data Protection in an AIX environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .72
Download and configure the protection and recovery scripts for AIX LVM . . . . . . .72
Establish the AIX Logical Volume Manager (LVM) mirror on a volume group . . . . .75
Establish the AIX LVM mirror on a Logical Volume . . . . . . . . . . . . . . . . . . . . . . . . .77
Establish the AIX LVM mirror on HACMP Volume Group . . . . . . . . . . . . . . . . . . . .79

FalconStor Management Console


Start the console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .81
Connect to your storage server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .82
Configure your server using the configuration wizard . . . . . . . . . . . . . . . . . . . . . . . . . . .83
Step 1: Enter license keys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .83
Step 2: Setup network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .83
Step 3: Set hostname . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .84
FalconStor Management Console user interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .85
Discover storage servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .86
Protect your storage server’s configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .86
Licensing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .88



Set Server properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .89
Manage accounts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .95
Change the root user’s password . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .98
Check connectivity between the server and console . . . . . . . . . . . . . . . . . . . . . . . .98
Add an iSCSI User or Mutual CHAP User . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .98
Apply software patch updates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .100
System maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .101
Physical Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .104
Physical resource icons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .105
Prepare devices to become logical resources . . . . . . . . . . . . . . . . . . . . . . . . . . . .106
Rename a physical device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .107
Use IDE drives with CDP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .108
Rescan adapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .108
Import a disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .109
Test physical device throughput . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .110
SCSI aliasing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .110
Repair paths to a device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .111
Logical Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .112
Logical resource icons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .113
Write caching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .113
Replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .114
SAN Clients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .115
Change the ACSL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .116
Grant access to a SAN Client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .116
Webstart for Java console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .116
Console options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .117
Create custom menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .118

Data Management
Verify snapshot creation and status in DiskSafe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .120
Browse the snapshot list . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .120
Check DiskSafe Events and the Windows event log . . . . . . . . . . . . . . . . . . . . . . .121
Check Microsoft Exchange snapshot status . . . . . . . . . . . . . . . . . . . . . . . . . . . . .123
Check Microsoft SQL Server snapshot status . . . . . . . . . . . . . . . . . . . . . . . . . . . .124
Check Oracle snapshot status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .125
Check Lotus Notes/Domino snapshot status . . . . . . . . . . . . . . . . . . . . . . . . . . . . .126
Reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .126
CCM Reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .126
CDP Reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .127
CDP Event Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .127
CDP Reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .129
Global replication reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .130

Data Recovery
Restore data using DiskSafe for Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .132
Restore a file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .132



Restore a disk or partition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .133
Restoring group members . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .134
Restore data using DiskSafe for Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .135
Mount a snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .135
Unmounting a snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .136
Restore a disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .136
Group Restore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .136
Group Rollback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .136
Recovery CD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .137
Set the Recovery CD password . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .137
Restore a disk or partition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .137
Accessing data after system failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .142
Booting remotely using PXE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .143
Booting remotely using an HBA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .144
Restoring a disk or partition while booting remotely . . . . . . . . . . . . . . . . . . . . . . . . . . .145
Recover AIX Logical Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .147
Recover a Volume Group using TimeMark Rollback for LVM . . . . . . . . . . . . . . . .148
Recover a Logical Volume using TimeMark Rollback for AIX LVM . . . . . . . . . . . .149
Recover a Volume Group using TimeMark Rollback for AIX HACMP LVM . . . . . .150
Recover a Volume Group using a TimeView (recover_vg) for AIX LVM . . . . . . . .151
Recover a Logical Volume using a TimeView (recover_lv) for AIX LVM . . . . . . . .152
Recover a Volume Group using a TimeView (recover_vg_ha) for HACMP LVM .153
Recover a Volume Group using a TimeView (mount_vg) . . . . . . . . . . . . . . . . . . .154
Recover a Logical Volume using a TimeView (mount_lv) for AIX LVM . . . . . . . . .155
Recover an HACMP Shared Volume Group using a TimeView (mount_vg_ha) . .156
Remove a TimeView Volume Group (umount_vg) . . . . . . . . . . . . . . . . . . . . . . . .157
Remove a TimeView Logical Volume (umount_lv) . . . . . . . . . . . . . . . . . . . . . . . .157
Remove a TimeView HACMP Volume Group (umount_vg_ha) . . . . . . . . . . . . . .158
Pause and resume mirroring for an AIX Volume Group . . . . . . . . . . . . . . . . . . . .158
Recover a Volume Group on a disaster recovery host . . . . . . . . . . . . . . . . . . . . .159
Recover HP-UX Logical Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .160
Recover a Volume Group using TimeMark Rollback for HP-UX LVM . . . . . . . . . .161
Recover a Volume Group using TimeMark Rollback for HP-UX VxVM . . . . . . . . .162
Recover a Volume Group using a TimeView (recover_vg) for HPUX LVM . . . . . .163
Recover a Volume Group using a TimeView (recover_vg) for HP-UX VxVm . . . .164
Recover a Volume Group using a TimeView (mount_tv) . . . . . . . . . . . . . . . . . . . .165
Remove TimeView Volume Group (umount_tv) . . . . . . . . . . . . . . . . . . . . . . . . . .166
Recover a Volume Group on a disaster recovery host . . . . . . . . . . . . . . . . . . . . .166
Disaster Recovery in a Solaris environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .167
Prepare the Solaris machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .167
Break the mirror for rollback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .168
Additional FalconStor disaster recovery tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .169
Recovery Agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .169
RecoverTrac . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .169



Introduction
FalconStor Continuous Data Protector (CDP) provides TOTALLY Open™ disk-based data protection for servers, workstations, and laptops in both Storage Area Network (SAN) and Direct Attached Storage (DAS) environments.
CDP provides the industry's fastest, most granular (any point in time) data recovery
of mission-critical databases, messaging data, files, and even entire systems for
business continuity. With its advanced technology, CDP creates a continuously
updated local and/or remote copy of an active data set.
By keeping a complete mirrored copy of data in its native format, a series of
point-in-time snapshots, and a unique data journal, CDP allows you to recover
up to the last bit of information written before a service outage. In addition,
periodic protection, based on FalconStor TimeMark® snapshots, gives you
numerous bootable recovery images.
CDP lets you replace or enhance your existing backup solution to protect data
between regularly scheduled backups, minimize the risk of data loss, and meet
increasingly stringent recovery point and recovery time objectives.
CDP also offers Thin Provisioning to maximize disk utilization efficiency. Thin
Provisioning allocates physical storage on an as-needed basis, using less physical
storage than is represented by the virtual disks.


Concepts and components


The primary components of the CDP solution are the storage appliance, SAN
Clients, and the console. These components all sit on the same network segment,
the storage network. The terminology and concepts used in CDP are described
here. For additional information, refer to the CDP Reference Guide.

CDP Appliance This is a dedicated storage server. The storage appliance is attached to the physical
SCSI and/or Fibre Channel storage device. The job of the appliance is to
communicate data requests between the clients and the SAN Resources via Fibre
Channel or iSCSI.

Central Client Manager (CCM) FalconStor CCM allows you to monitor and manage application server activity by displaying status and resource statistics on a centralized console for FalconStor CDP clients.

DiskSafe™ This host-side backup software is installed on each Windows machine to capture every write and journal it on the CDP Appliance using the iSCSI protocol.

DynaPath® A load balancing/path redundancy application that ensures constant data availability
and peak performance across the SAN by performing Fibre Channel and iSCSI HBA
load-balancing, transparent failover, and fail-back services. DynaPath creates
parallel active storage paths that transparently reroute server traffic without
interruption in the event of a storage network problem.

FalconStor Management Console The administration tool for the CDP storage network. It is a Java application that can be used on a variety of platforms and allows administrators to create, configure, manage, and monitor the storage resources and services on the storage network.

FileSafe™ This software application protects your files by backing up files and folders to another location.

Logical Resource Logical resources consist of sets of storage blocks from one or more physical hard disk drives. This allows the creation of virtual devices that contain a portion of a larger physical disk device or an aggregation of multiple physical disk devices.
CDP has the ability to aggregate multiple physical storage devices (such as JBODs and RAIDs) of various interface protocols (such as SCSI or Fibre Channel) into logical storage pools. From these storage pools, virtual devices can be created and provisioned to application servers and end users. This is called storage virtualization, which offers the added capability of disk expansion: additional storage blocks can be appended to the end of existing virtual devices without erasing the data on the disk.
Logical resources are all of the logical/virtual resources defined on the storage appliance, including SAN Resources (virtual drives and service-enabled devices) and Snapshot Groups.


Near-line mirror Allows production data to be synchronously mirrored to a protected disk that resides
on a second CDP server. With near-line mirroring, the primary disk is the disk that is
used to read/write data for a SAN Client and the mirrored copy is a copy of the
primary. Each time data is written to the primary disk, the same data is
simultaneously written to the mirror disk. TimeMark or CDP can be configured on the
near-line server to create recovery points. The near-line mirror can also be
replicated for disaster recovery protection.
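The synchronous write path described above can be sketched as a toy model (the class and names here are hypothetical, for illustration only; the actual mirroring is performed inside the CDP appliance, not in user code):

```python
class MirroredDisk:
    """Toy model of a synchronous near-line mirror: every write lands on
    both the primary disk and the mirror before it is acknowledged."""

    def __init__(self, size):
        self.primary = [None] * size  # disk used for client reads/writes
        self.mirror = [None] * size   # synchronized copy on the near-line server

    def write(self, block, data):
        # Synchronous mirroring: the same data is written to both disks
        # as part of the same operation.
        self.primary[block] = data
        self.mirror[block] = data

    def read(self, block):
        # Reads are always served from the primary disk.
        return self.primary[block]

disk = MirroredDisk(8)
disk.write(3, "payload")
assert disk.primary[3] == disk.mirror[3] == "payload"
```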

NIC Port Bonding Allows you to use multiple network ports in parallel to increase the link speed beyond the limit of a single port and to improve redundancy for higher availability. The appliance must have at least two NIC ports to create one bond group and at least four NIC ports to create two bond groups.
If you choose "1 Bond Group, all ports" as the bond type, all discovered NIC ports are combined into a single group. If you choose "2 Bond Groups, half of the ports in each" as the bond type, each group contains half of the discovered NIC ports.
Round-Robin mode (mode 0) transmits data in a sequential, round-robin order and is the default mode. For a mode in which the NIC ports work in concert with switches using the 802.1AX standard for traffic optimization, select Link Aggregation mode.
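The default Round-Robin behavior can be sketched as follows (a toy illustration with hypothetical port names; actual bonding is configured on the appliance, not in user code):

```python
from itertools import cycle

def round_robin_transmit(frames, ports):
    """Sketch of bonding mode 0: outgoing frames are assigned to the
    bonded NIC ports in a fixed sequential, repeating order."""
    port_cycle = cycle(ports)
    return [(frame, next(port_cycle)) for frame in frames]

# Four frames over a two-port bond group alternate between the ports.
out = round_robin_transmit(["f1", "f2", "f3", "f4"], ["eth0", "eth1"])
assert out == [("f1", "eth0"), ("f2", "eth1"), ("f3", "eth0"), ("f4", "eth1")]
```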

Physical Resource These are the actual physical LUNs, as seen by the RAID controller/storage HBA within the CDP appliance, that are used to create Logical Resources. Clients do not gain access to Physical Resources; they only have access to Logical Resources. This means that an administrator must reserve Physical Resources for use as either virtual devices or service-enabled devices before creating Logical Resources. Storage pools can be used to simplify Physical Resource allocation/management before creating Logical SAN Resources.

SAN Clients These are the actual file and application servers used to communicate with the CDP
appliance. FalconStor calls them SAN Clients because they utilize the storage
resources via the CDP appliance. The storage resources appear as locally attached
devices to the SAN Clients' operating systems (Windows, Linux, Solaris, etc.) even
though the SCSI devices are actually located at the CDP appliance.

SAN Resources SAN Resources provide storage for file and application servers (SAN Clients). When a SAN Resource is assigned to a SAN Client, a virtual adapter is defined for that client. The SAN Resource is assigned a virtual SCSI ID on the virtual adapter. This mimics the configuration of actual SCSI storage devices and adapters, allowing the operating system and applications to treat them like any other SCSI device. A SAN Resource can be a virtual device or a service-enabled device.

Service-enabled devices Hard drives with existing data that are protected by CDP.

Snapshot The concept of performing a snapshot is similar to taking a picture. When we take a
photograph, we are capturing a moment in time and transferring it to a photographic
medium. Similarly, a snapshot of an entire device allows us to capture data at any
given moment in time and move it to either tape or another storage medium, while
allowing data to be written to the device. The basic function of the snapshot engine
is to allow point-in-time, "frozen" images to be created of data volumes (virtual
drives) using minimal storage space. By combining the snapshot storage with the
source volume, the data can be recreated exactly as it appeared at the time the
snapshot was taken. For added protection, a snapshot resource can also be
mirrored through CDP. You can create a snapshot resource for a single SAN
Resource or you can use the batch feature to create snapshot resources for multiple
SAN Resources. Refer to the CDP Reference Guide for additional information.
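The point-in-time behavior described above can be sketched as a toy copy-on-write model (hypothetical names, for illustration only; the CDP snapshot engine operates on storage blocks inside the appliance):

```python
class Volume:
    """Toy copy-on-write snapshot. After a snapshot is taken, the
    snapshot area stores only the original contents of blocks that are
    later overwritten, so a frozen point-in-time image uses minimal
    storage space."""

    def __init__(self, blocks):
        self.blocks = list(blocks)  # live source volume
        self.snapshot = None        # maps block number -> pre-snapshot data

    def take_snapshot(self):
        self.snapshot = {}          # empty until a block changes

    def write(self, n, data):
        # Copy-on-write: preserve the original block the first time it
        # is overwritten after the snapshot.
        if self.snapshot is not None and n not in self.snapshot:
            self.snapshot[n] = self.blocks[n]
        self.blocks[n] = data

    def read_snapshot(self, n):
        # Combining the snapshot storage with the source volume recreates
        # the data exactly as it appeared at snapshot time.
        return self.snapshot.get(n, self.blocks[n])

vol = Volume(["a", "b", "c"])
vol.take_snapshot()
vol.write(1, "B")                   # the live volume keeps changing...
assert vol.read_snapshot(1) == "b"  # ...while the snapshot image is frozen
assert vol.blocks[1] == "B"
```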

Snapshot Agents Snapshot Agents collaborate with NTFS volumes and applications in order to guarantee that snapshots are taken with full application-level integrity for the fastest possible recovery. A full suite of Snapshot Agents is available so that each snapshot can later be used without lengthy chkdsk and database/email consistency repairs. Snapshot Agents are available for Oracle®, Microsoft® Exchange, Lotus Notes®/Domino®, Microsoft® SQL Server, IBM® DB2® Universal Database, Sybase®, and many other applications.

Storage pools Groups of one or more physical devices. Creating a storage pool enables you to
provide all of the space needed by your clients in a very efficient manner. You can
create and manage storage pools in a variety of ways, including tiers, device
categories, and types.
For example, you can classify your storage by tier (low-cost, high-performance,
high-redundancy, etc.) and assign it based on these classifications. Using this
example, you may want to have your business critical applications use storage from
the high-redundancy or high-performance pools while having your less critical
applications use storage from other pools.

Thin provisioning This feature allows you to use your storage space more efficiently by allocating a minimum amount of space for each virtual resource. Then, when usage thresholds are met, additional storage is allocated as necessary.
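The allocate-on-demand behavior can be sketched as a toy model (hypothetical sizes and threshold, for illustration only; the real allocation policy is managed by the appliance):

```python
class ThinDisk:
    """Toy thin-provisioned disk: a large virtual size is presented to
    the client, but physical space is allocated only as usage grows
    past a threshold."""

    def __init__(self, virtual_gb, initial_gb=1, step_gb=1, threshold=0.9):
        self.virtual_gb = virtual_gb     # size the client sees
        self.allocated_gb = initial_gb   # physical space actually reserved
        self.used_gb = 0.0
        self.step_gb = step_gb
        self.threshold = threshold

    def write(self, gb):
        self.used_gb += gb
        # When usage crosses the threshold, grow the physical allocation
        # in steps, never beyond the virtual size.
        while (self.used_gb > self.allocated_gb * self.threshold
               and self.allocated_gb < self.virtual_gb):
            self.allocated_gb = min(self.allocated_gb + self.step_gb,
                                    self.virtual_gb)

disk = ThinDisk(virtual_gb=100)
disk.write(0.5)
assert disk.allocated_gb == 1  # still within the initial allocation
disk.write(1.0)                # 1.5 GB used exceeds 90% of 1 GB allocated
assert disk.allocated_gb == 2
```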


Hardware/software requirements

Component Requirement

Enterprise class storage • Supplied by customer or vendor
• Directly attached to CDP appliance or by FC switch

CDP Appliance Certified enterprise-class server provided by FalconStor, customer, or vendor; directly attached to an NSS appliance or by FC switch
• Integrated RAID
• Redundant power supply
• Dual Ethernet
• PCI-X, PCI-E slots

Linux Server Red Hat Enterprise Linux 5 Update 3, kernel 2.6.18-128.el5 (64-bit)
CentOS Linux version 5.3, kernel 2.6.18-128.el5 (64-bit)
Oracle Enterprise Linux 5.3, kernel 2.6.18-128.el5 (64-bit)

Supported HBAs • CDP Appliance: QLogic FC HBAs with a minimum of two available ports
• Linux Server: FC HBA supported by Linux
The following target mode HBAs are supported:
• QLogic 23xx HBA
• QLogic 24xx HBA
• QLogic 256x HBA
Consult the FalconStor Certification Matrix for a complete list of supported
HBAs.

CPU Dual-core AMD Opteron and Intel Xeon EM64T are supported.

Memory 2 GB RAM minimum (8 GB recommended)

System disk 80 GB minimum disk space, SCSI, IDE or SATA

Network Interface Card Gigabit Ethernet network cards that are supported by Linux

FalconStor Management Console A virtual or physical machine that supports the Java 2 Runtime Environment (JRE).

Logical Volume Manager (LVM) or DiskSafe The appropriate LVM for your operating system, for example:
• Solaris Volume Manager (SVM)
• AIX Logical Volume Manager


Web Setup
Once you have physically connected the appliance, powered it on, and performed
the following Web Setup installation and server setup steps, you are ready to
begin using CDP.

1. Configure the Appliance


The first time you connect, you will be asked to:
• Select a language.
(If the wrong language is selected, click your browser's Back button or go to
//10.0.0.2/language.php to return to the language selection page.)
• Read and agree to the FalconStor End User License Agreement.
• (Storage appliances only) Configure your RAID system.
• Enter the network configuration for your appliance.

2. Manage License Keys


Enter the server license keys.

3. Check for Software Updates


Click the Check for Updates button to check for updated agent software.
Click the Download Updates button to download the selected client software.

4. Install Management Software and Guides


You can install the following management consoles:
• Server Management Console
• Client Management Console
• RecoverTrac

5. Install Client Software and Guides


You can install the following client software:
• iSCSI Initiator - for iSCSI connections
• DynaPath - for multi-pathing over Fibre Channel
• DiskSafe - for host-based disk/volume protection
• FileSafe - for host-based file/directory protection
• Snapshot Agents - for point-in-time data images
• Recovery Agents - for rapid application recovery
• HyperTrac - for backup acceleration

6. Configure Advanced Features


Advanced features allow you to perform the following:
• NIC Port Bonding - for use of multiple network ports in parallel to increase
the link speed beyond the limit of a single port and improve redundancy for
higher availability.


• Add Storage Capacity - for extra storage capacity, you can connect additional storage via Fibre Channel or iSCSI.
• Disable Web Services - for businesses with policies requiring web services to be disabled.
If you encounter any problems while configuring your appliance, contact FalconStor EZStart technical support via the web at www.falconstor.com/supportrequest. (Additional contact methods are available in each step by clicking the EZStart Technical Support link.)



Getting Started
Once you have connected your CDP hardware to your network and set your network
configuration via Web Setup, you are ready to protect your data.
For information on protecting your data in a Windows environment, see Data
Protection in Windows.
For information on protecting your data in an AIX environment, refer to the Data
Protection in an AIX environment section.
For information on protecting your data in an HP-UX environment, refer to the Data
protection in an HP-UX environment section.
For information on protecting your data in a Red Hat Linux or SUSE environment,
refer to the Data Protection in Red Hat Linux section.
For information on protecting your data in a Solaris environment, refer to the Data
Protection in Solaris section.
Protection can also be set using the FalconStor Management Console. Refer to the
CDP Reference Guide for additional information on using the FalconStor
Management Console.

Data Protection in Windows


CDP in a Windows environment uses DiskSafe for backup and disaster recovery
(DR). DiskSafe protects Windows application servers, desktops, and laptops
(referred to as hosts or clients) by copying the local disks or partitions to a mirror
located on the FalconStor CDP appliance.
Once the local data has been initially copied to the mirror, DiskSafe can either write
data to both the local disk and its mirror simultaneously, or it can periodically
synchronize the two at scheduled intervals. With periodic synchronization, DiskSafe
copies only those data blocks that have changed since the last synchronization,
thereby optimizing performance.
DiskSafe requires minimal system resources to protect a machine, using less than 5% of CPU and 8 MB of memory per 1 TB of protected server data.
CDP in a Windows environment also uses Snapshot Agents. Once DiskSafe is
installed, you can install the appropriate Snapshot Agents if you plan to use the
snapshot protection feature. This feature allows you to take backup snapshots from
which you can restore files or protected disks from a specific point in time. Snapshot
agents are included with CDP as well as with the DiskSafe installation package.


To determine the appropriate snapshot agents, refer to the Snapshot Agent User
Guide. For most desktops and laptops, you would install the Snapshot Agent for File
Systems. For application servers, you would install the appropriate agent for that
application, such as the Snapshot Agent for Microsoft Exchange or the Snapshot
Agent for Oracle.
Note: You can only take snapshots if you use a remote mirror, and only if TimeMark
or the Snapshot Service is licensed on the storage server.

Install DiskSafe for Windows

The DiskSafe for Windows installation process intelligently detects the client host
operating system and installs the appropriate installation package. You will need to
install DiskSafe on each host that you want to protect.
DiskSafe can be installed from the CDP Server Web Setup feature or through an administrative share that contains the management and client software.
To install DiskSafe:

1. Log on as an administrator and use the Web Setup utility to install DiskSafe. Use
a web browser to connect to the CDP server via http using its primary IP
address.
• The default user name is fsadmin
• The default password is IPStor101
If you are not using the Web Setup utility, launch the installation media and click
Install Products --> DiskSafe. From CDP, click Install Products --> Install Host-
Based Applications --> Install DiskSafe for Windows.

Note: To be able to remotely boot, you must install DiskSafe on the first
system partition (that is, where Windows is installed).

2. When you have finished installing DiskSafe, you will be prompted to restart your
computer. You must restart your computer before running DiskSafe.

A DiskSafe shortcut icon will be placed on your desktop.

Once you have restarted the machine and launched DiskSafe, you will be
prompted to enter your license key code. For all operating systems, a DiskSafe
license keycode must be provided within 5 days of installation. If a license
keycode is entered but the license is not activated (registered with FalconStor)
immediately, the product can be used for 30 days (the grace period).


If your computer has an Internet connection, the license is activated as soon as you enter your keycode and click OK. However, if your Internet connection is
temporarily down or if your computer has no Internet connection, your license
will not be activated. You must activate your license within 30 days so that you
can continue to use DiskSafe.
If your Internet connection is temporarily down, your license will be activated
automatically the next time DiskSafe is started, assuming you have an Internet
connection. Or, you can add your license through SAN Disk Manager (SDM, the utility that is installed during DiskSafe installation).
If your computer has no Internet connection, you must perform offline activation.
Refer to License management for more information.

Note: If you do not enter a keycode, you will only have five days to use
DiskSafe.

Uninstall DiskSafe for Windows

If you must uninstall DiskSafe for any reason, you can do so by navigating to
Programs --> FalconStor --> DiskSafe Uninstall. This will remove DiskSafe along
with all associated applications. You can also remove DiskSafe from the Control
Panel --> Add/Remove Programs, but this only removes DiskSafe; SDM will remain
installed. You will need 20 MB of free disk space to uninstall DiskSafe.

Silent Installation
To install the DiskSafe installation package in silent mode, follow the steps below:

1. Manually unzip the DiskSafe package.

2. Launch the command console.

3. If you are installing on a 64-bit version of Windows, go to the DiskSafe\AMD64 folder. Otherwise, go to the DiskSafe\i386 folder.

4. Install IMA by executing the following commands:


cd ima
setup.exe /s
cd ..

5. Install DiskSafe by executing the following commands:


cd disksafe
setup.exe /s /v/qn
cd ..


The system automatically restarts after DiskSafe installation. If you do not want to
restart after DiskSafe installation, use the following command:
setup.exe /s /v"/qn REBOOT=suppress /log c:\dsInstall.log"

In a cluster environment, the Add Storage Server for Cluster Protection message displays during DiskSafe installation, prompting you to add the CDP server information for cluster protection policies.
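The reboot-suppress command line above composes the MSI options inside the /v argument. The helper below is purely illustrative (it is not part of the DiskSafe package) and only prints the command; the log path is an example:

```shell
# Build the silent-install command line; the argument is the install log path.
silent_install_cmd() {
  printf 'setup.exe /s /v"/qn REBOOT=suppress /log %s"\n' "$1"
}

silent_install_cmd 'c:\dsInstall.log'
```

Run the printed command from the disksafe folder in a command console, as in step 5 above.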

License management

When you install DiskSafe, you are prompted to license the product. If you do not
enter a license, you are only given five days to use DiskSafe. If you subsequently
need to add a license or change the license—for example, to upgrade from a trial
license to a standard license—you can do so through the License Manager.

Adding a license

To add a license:
1. Right-click DiskSafe and select License Manager.

2. Type or paste the keycode and then click OK.

Changing a license

Changing the trial license to a standard license does not remove protection, but it does temporarily stop protection until a new license is added.
To change the license:

1. Right-click DiskSafe and select License Manager.

2. Click Enter a new key code, enter the new key code, and then click OK.

3. Click Yes to confirm the replacement.

Activating a license

Your DiskSafe license must be activated (registered with FalconStor). Once activated, you can select License Manager and see the message This product is licensed.
If your computer has an Internet connection, the license is activated as soon as you
add it. However, if your Internet connection is temporarily down or if your computer
has no Internet connection, your license will not be activated. You must activate your
license within 30 days.
If your Internet connection is temporarily down, your license will be activated
automatically the next time DiskSafe is started, assuming you have an Internet
connection then. Or, you can add your license through the SAN Disk Manager.


If your computer has no Internet connection, you must perform offline activation. To
do this:

1. Right-click DiskSafe and select License Manager.

2. Click Perform offline activation.

3. Click Export license file.

4. Save the generated file and email it to the following address:


[email protected]

5. When you receive an e-mail response, save the returned signature file.

6. Launch the License Manager and select Perform offline activation.

7. Click Import license file.


Browse to the location where the returned signature file exists and select it.

Install and configure FalconStor Snapshot Agents

FalconStor DiskSafe works in conjunction with the FalconStor Snapshot Agents to


ensure full application level integrity for the fastest possible recovery. Even if you
change the data location or add more databases, they can still be protected by
DiskSafe.
Once DiskSafe is installed, use Web Setup to install the appropriate Snapshot
Agents to protect databases and file systems. Typically, you would install the
Snapshot Agent for Windows File Systems for most desktops and laptops. In
addition, for each database or application, you would install the appropriate agent
for that application. For example, if the system is a Microsoft Exchange server, you
will need to install the Snapshot Agent for Microsoft Exchange.
To determine the appropriate snapshot agents, refer to the Snapshot Agent User
Guide.


Prepare host connections


Once the appliance is configured with a permanent IP address, ensure the proper
target mode protocol (iSCSI and/or FC) is enabled for host access.

Enable iSCSI target mode

If you have SAN Clients (application hosts) that need to access the CDP appliance
via iSCSI (IP based SAN), you will need to enable the iSCSI target mode.
This step can be done in the console’s configuration wizard. To do this afterward,
right-click on your storage server and select Options --> Enable iSCSI.
As soon as iSCSI is enabled, a new SAN client called Everyone_iSCSI is
automatically created on your storage server. This is a special SAN client that does
not correspond to any specific client machine. Using this client, you can create
iSCSI targets that are accessible by any iSCSI client that connects to the storage
server.
Before an iSCSI client can be served by a CDP appliance, the two entities need to
mutually recognize each other. You need to register your iSCSI client as an initiator
to your storage server to enable the storage server to see the initiator. To do this,
you will need to launch the iSCSI initiator on the client machine and identify your
storage server as the target server.
Refer to the documentation provided by your iSCSI initiator for detailed instructions
about how to do this.
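As an example, on a Linux client with the standard open-iscsi initiator (an assumption; your initiator and its syntax may differ), registration amounts to a discovery against the appliance portal followed by a login. The portal address and target IQN below are placeholders for your environment, and the helpers only print the commands rather than executing them:

```shell
# Placeholder portal: CDP appliance data IP plus the default iSCSI port.
PORTAL="10.0.0.2:3260"

# Dry-run helpers that print the iscsiadm commands to run on the client.
discovery_cmd() { printf 'iscsiadm -m discovery -t sendtargets -p %s\n' "$1"; }
login_cmd()     { printf 'iscsiadm -m node -T %s -p %s --login\n' "$1" "$2"; }

discovery_cmd "$PORTAL"
login_cmd "iqn.2000-03.com.falconstor:cdp.target1" "$PORTAL"
```

Once the client logs in, its initiator appears on the storage server and can be registered as a SAN Client.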

Enable Fibre Channel target mode

If you are using external storage and Fibre Channel protocol, you will need to enable
Fibre Channel target mode. This step can be done in the console’s configuration
wizard. To do this afterward, right-click the CDP server that has the FC HBAs and
select Options --> Enable FC Target Mode.
An Everyone_FC client will be created under SAN Clients. This is a generic client
that you can assign to all (or some) of your SAN Resources. It allows any WWPN
not already associated with a Fibre Channel client to have read/write non-exclusive
access to any SAN Resources assigned to Everyone.

Set QLogic ports to target mode

By default, all QLogic point-to-point ports are set to initiator mode, which means they
will initiate requests rather than receive them. Determine which ports you want to
use in target mode and set them to become target ports so that they can receive
requests from your Fibre Channel Clients.
It is recommended that you have at least four Fibre Channel ports per server in
initiator mode, one of which is attached to your storage device.


You need to switch one of those initiators into target mode so your clients will be
able to see the CDP Server. You will then need to select the equivalent adapter on
the secondary server and switch it to target mode.

Note: If a port is in initiator mode and has devices attached to it, that port cannot
be set for target mode.

To set a port:

1. In the Console, expand Physical Resources.

2. Right-click on an HBA and select Options --> Enable Target Mode.


You will get a Loop Up message on your storage server if the port has
successfully been placed in target mode.

3. When done, make a note of all of your WWPNs.


It may be convenient for you to highlight your server and take a screenshot of
the Console.

Prepare physical storage for use with CDP


Once the proper target mode protocol (iSCSI and/or FC) is enabled for host access,
you will need to discover and prepare the storage and assign it to storage pools.
Each Fibre Channel initiator, target, and client initiator must have a unique World Wide Port Name (WWPN). To determine the WWPN of your storage, click the server or adapter and look for the WWPN in the right pane.
If you used Web Setup to configure your system and you are adding storage, see
the example below of what the LUN naming convention will look like:


The physical devices on the head will be named as follows:


• _SystemDisk
• Repository1_0 (for storing the configuration info)
If the head has any data disks, they will be named Data1_0 (meaning data drive 1, shelf 0).
Drives in attached storage enclosures will be named as follows (where "shelf" number is equivalent to enclosure number):
• Storage Enclosure 1:
• Data1_1 (data drive 1 on shelf 1)
• Data2_1 (data drive 2 on shelf 1)
• Storage Enclosure 2:
• Data1_2 (data drive 1 on shelf 2)
• Data2_2 (data drive 2 on shelf 2)
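The convention above reduces to Data<drive>_<shelf>, where shelf 0 is the head and shelf N is attached enclosure N. A one-line sketch:

```shell
# Data<drive>_<shelf>: shelf 0 is the head, shelf N is enclosure N.
lun_name() { printf 'Data%s_%s\n' "$1" "$2"; }

lun_name 1 0   # data drive 1 on the head
lun_name 2 1   # data drive 2 in enclosure (shelf) 1
```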

Present storage to the CDP appliance

Follow your vendor-specific instructions to install storage and present the disk to
your CDP appliance. Typically, you have to create an entity to represent the CDP
appliance within your storage unit and you need to associate the CDP appliance’s
initiator name with the storage unit.
You will also need to zone the CDP appliance with the target of the storage unit.
Once this is done, storage can be presented to the CDP appliance for use.

Rescan adapters (initiators) for the assigned storage

1. To rescan all adapters (iSCSI or FC), right-click on Physical Resources and select Rescan.


If you only want to scan a specific adapter, right-click on that adapter and select
Rescan. Make sure to select Scan for New Devices.

2. Determine what you want to rescan.


If you are discovering new devices, set the range of adapters, SCSI IDs, and
LUNs that you want to scan.
Use Report LUNs - The system sends a SCSI request to LUN 0 and asks for a
list of LUNs. Note that this SCSI command is not supported by all devices.
Stop scan when a LUN without a device is encountered - This option will scan
LUNs sequentially and then stop after the last LUN is found. Use this option only
if all of your LUNs are sequential.
Auto detect FC HBA SCSI ID scans QLogic HBAs. It ignores the range of SCSI
IDs specified and automatically detects the SCSI IDs with persistent binding.

3. If you want to set up load balancing, you can use the NIC Port Bonding feature
via Web Setup or via the FalconStor Management Console as described in the
CDP Reference Guide.

Prepare physical disks for virtualization

To prepare multiple devices, highlight Physical Resources, right-click, and select Prepare Disks. This launches a wizard that allows you to virtualize multiple devices at the same time.
Alternatively, the CDP Server detects unassigned devices when you connect to it (or
when you execute the Rescan command). When new devices are detected, a dialog
box displays notifying you of the discovered devices. At this point you can highlight a
device and press the Prepare Disk button to prepare it.


Create storage pools


A storage pool is a group of one or more physical devices.
Creating a storage pool enables you to provide all of the space needed by your
clients in a very efficient manner. You can create and manage storage pools in a
variety of ways, including:
• Tiers - Performance levels, cost, or redundancy
• Device categories - Virtual, service enabled, direct
• Types - Primary storage, Journal, snapshot, and configuration.
• Specific application use - FalconStor DiskSafe, etc.
For example, you can classify your storage by tier (low-cost, high-performance,
high-redundancy, etc.) and assign it based on these classifications. Using this
example, you may want to have your business critical applications use storage from
the high-redundancy or high-performance pools while having your less critical
applications use storage from other pools.
Each storage pool can only contain the same type of physical devices. Therefore, a
storage pool can contain only virtualized drives or only service-enabled drives, or
only direct devices. A storage pool cannot contain mixed types.
Physical devices that have been allocated for a logical resource can still be added to
a storage pool.
To create a storage pool:

1. Right-click on Storage Pools and select New.

2. Enter a name for the storage pool.


For HP-UX, this storage pool will be used by the script as a mirror for the Volume
Group.

3. Indicate which type of physical devices will be in this storage pool.


Each storage pool can only contain the same type of physical devices.

4. Select the devices that will be assigned to this storage pool.


Physical devices that have been allocated for any logical resource can still be
added to a storage pool.

5. Click OK to create the storage pool.

Set Storage Pool properties

You can specify the purpose of each storage pool as well as assign it to specific
users or groups. The assigned users can create virtual devices and allocate space
from the storage pools assigned to them.


For proper CDP Appliance operation, you need to have at least one storage pool
with the following roles selected: Storage, Virtual Headers, Snapshot, Configuration
Repository (if you intend to set up an HA pair), Journal (if you intend to use CDP
Journal), and CDR (if you intend to use continuous mode replication).
You also need to ensure that the Security tab is used to enable at least one IPStor
User to have access rights to this pool. This is the IPStor User account with which
you will authenticate during DiskSafe setup operations.
If you intend to use HP-UX or AIX Auto-LVM scripts or Windows clients, you must
enable the Protection user for the storage pool. As a best practice, you should use a
separate Storage Pool for the various roles. For example, one for “Storage”, one for
“TimeMark” (Snapshot), one for “CDP”, etc.

Virtualize storage
Once you have prepared your storage, you are ready to create Logical Resources to
be used by your CDP clients. This configuration can be done entirely from the
console.

Create a virtual device SAN Resource

1. Right-click on SAN Resources and select New.

2. Select Virtual Device.

3. Select the storage pool or physical device(s) from which to create this SAN
Resource.


You can create a SAN Resource from any single storage pool. Once the
resource is created from a storage pool, additional space (automatic or manual
expansion) can only be allocated from the same storage pool.
You can select List All to see all storage pools, if needed.

4. Depending upon the resource type, select Use Thin Provisioning for more
efficient space allocation.
You will have to allocate a minimum amount of space for the virtual resource.
When usage thresholds are met, additional storage is allocated as necessary.
You will also have to specify the fully allocated size of the resource to be
created. The default initial allocation is 1 GB.
From the client side, it appears that the full disk size is available.

5. Select how you want to create the virtual device.


Custom lets you select which physical device(s) to use and lets you designate
how much space to allocate from each.
Express lets you designate how much space to allocate and then automatically
creates a virtual device using an available device.
Batch lets you create multiple SAN Resources at one time. These SAN
Resources will all be the same size.

6. (Express and Custom only) Enter a name for the new SAN Resource.
The name is not case sensitive.

7. Confirm that all information is correct and then click Finish to create the virtual
device SAN Resource.

8. Do not assign the new SAN Resource to a client.

Prepare your client machines


SAN Clients access their storage resources via iSCSI software initiators (for iSCSI)
and HBAs (for Fibre Channel or iSCSI).


Pre-installation

CDP provides client software for many platforms and protocols. Please check the
certification matrix on the FalconStor website for the versions and the patch levels (if
applicable) that are currently supported.

Notes:

• The CDP Client should not be installed onto a network drive.


• For an iSCSI or Fibre Channel client, you do not need to install any SAN
Client software unless the client is using a FalconStor snapshot agent or
the client is using multiple protocols. If you do need to install SAN Client
software, do not install it until your Fibre Channel configuration is
underway. For more information, see the CDP Reference Guide.
• Client software requires network connectivity to the storage server,
preferably on a separate, CDP-only network. This means that normal
LAN traffic does not occur on the adapter(s) dedicated to the CDP
storage network.

Windows client installation

The FalconStor Intelligent Management Agent (IMA) is automatically installed with FalconStor DiskSafe.

Linux client installation


Note: It is not recommended to install the Linux client on a storage server.

1. Make sure that the CDP appliances that the client will use are all up and running.
2. To install the client software, log into your system as the root user.
3. Mount the installation CD to an available or newly created directory and copy the
files from the CD to a temporary directory on the machine.
The software packages are located in the /client/linux/ directory off the CD.
4. Type the following command to install the client software:
rpm -i <full path>/ipstorclient-<version>-<build>.i386.rpm

For example:
rpm -i /mnt/cdrom/Client/Linux/ipstorclient-4.50-0.954.i386.rpm

The client will be installed to the following location:


/usr/local/ipstorclient
It is important that you install the client to this location. Installing the client to a different location will prevent the client driver from loading.


5. Log into the client machine as the root user again so that the changes in the user
profile will take effect.

6. Add the CDP Servers that this client will connect to for storage resources by
typing the following command from /usr/local/ipstorclient/bin:
./ipstorclient monitor

7. Select Add from the menu and enter the server name, login ID and password.
After this server is added, you can continue adding additional servers.

8. To start the Linux client, type the following command from the /usr/local/
ipstorclient/bin directory:
./ipstorclient start


Prepare the AIX host machine


This section describes preparing your AIX host machine for data protection in an AIX
environment.
The first step is preparing the CDP environment by following the steps below:

1. Ensure that a FalconStor storage appliance configured for CDP is available. This
appliance may be factory-shipped from FalconStor or can be built using the
EZStart USB key on a supported hardware appliance, such as HP DL 38x/58x,
IBM x365x, or Dell 29xx family servers with supported QLogic FC HBA ports and
SATA/SAS/FC storage.

2. Install the FalconStor Management Console on a Windows host.

3. Use the console to create a user account with the username "Protection" and the
type "IPStor User". Set the password to "Protection".
These credentials will be used by the AIX scripts for CDP operations.

4. Create a storage pool with the name "Protection" and add a disk of sufficient size to the pool. For instructions on creating a storage pool, refer to the Create storage pools section.
This storage pool will be used by the script as a mirror for the Volume Group on
AIX.


5. Select the Security tab and check the user named "Protection".

6. Edit the fshba.conf parameters for improved support with AIX.


• vi $ISHOME/etc/fshba.conf
• Add "reply_invalid_device=63" to the bottom of the fshba.conf file.
• Save the file and restart IPStor services by typing: ipstor restart all
Warning: The ipstor restart all command includes restarting fshba which will
break the FC connection and take the storage server offline.
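The edit above can be scripted as below. This is a sketch only: it assumes $ISHOME points at the IPStor installation directory on the appliance, and the restart line is left commented because, as warned, it drops FC connections.

```shell
# Append the AIX compatibility parameter to fshba.conf.
# Assumes $1 is the IPStor installation directory ($ISHOME).
append_fshba_param() {
  printf 'reply_invalid_device=63\n' >> "$1/etc/fshba.conf"
}

# append_fshba_param "$ISHOME"
# ipstor restart all   # warning: restarts fshba and takes the server offline
```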

Install AIX FalconStor Disk ODM Fileset

1. Download the package ipstordisk.rte to a temporary directory on the AIX client (host machine).

2. Install the AIX FalconStor ODM Fileset:
installp -aXd ipstordisk.rte all

3. Confirm that the package installation was successful by listing system installed
packages.
lslpp -l | grep ipstordisk

4. If system configuration involves HACMP Cluster, repeat the process above for
other cluster nodes.

Install the AIX SAN Client and Filesystem Agent

1. Download the package ipstorclient-4.50-907.bff to a temporary directory on the AIX client (host machine).

2. Install the CDP client:
installp -aXd ipstorclient-4.50-907.bff all


3. Confirm that the package installation was successful by listing system installed
packages.
lslpp -l | grep IPStorclient

4. Download the package filesystemagent-1.00-1136.rte to a temporary directory.

5. Use the "cd" command to change the directory to the temporary directory where
the package was downloaded.

6. Install the AIX filesystem Snapshot Agent with the following command:
installp -aXd filesystemagent-1.00-1136.rte all

7. Confirm the package installation was successful by listing the system installed
packages:
lslpp -l | grep jfsagt

8. Authenticate the CDP Client to the storage server by running ipstorclient monitor.

a. If this is the first time running "ipstorclient monitor", you will be prompted to
select Fibre Channel (FC) or iSCSI protocol.

b. Enter y to enable Fibre Channel protocol support if you are using FC or enter
y to enable iSCSI protocol support if you are using iSCSI.

c. Choose option 3 to Add Servers.

d. Enter the IP address for the storage server.

e. Enter username Protection.

f. Enter password Protection.

g. Enter "n" when it asks if you would like to add more servers.

h. Enter "0" to exit.

9. If system configuration involves HACMP Cluster, repeat the process above for
other cluster nodes.
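If you need to repeat this authentication on several cluster nodes, the documented answers (steps a through h) can in principle be piped to the tool. This is a hypothetical sketch: ipstorclient monitor may require an interactive terminal, in which case run it manually as described above.

```shell
# Answer sequence from steps a-h: enable the protocol, add one server with
# the Protection/Protection credentials, decline more servers, then exit.
monitor_answers() { printf '%s\n' y 3 "$1" Protection Protection n 0; }

# monitor_answers 10.0.0.2 | /usr/local/ipstorclient/bin/ipstorclient monitor
monitor_answers 10.0.0.2
```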


Prepare the CDP and HP-UX environments


This section describes the steps required to prepare the FalconStor CDP and HP-UX environments before protecting a logical volume.
The first step is preparing the CDP environment by following the steps below:

1. Ensure that a FalconStor IPStor appliance configured for CDP is available. This
appliance may be factory-shipped from FalconStor or can be built using the
EZStart USB key on a supported hardware appliance, such as HP DL 38x/58x,
IBM x365x, or Dell 29xx family servers with supported QLogic FC HBA ports and
SATA/SAS/FC storage.

2. Install the FalconStor Management Console on a Windows host.

3. Use the console to create a user account with the username "Protection" and the
type "IPStor User". Set the password to "Protection".
These credentials will be used by the HP-UX scripts for CDP operations.

4. Create a storage pool with the name "Protection" and add a disk of sufficient size to the pool. For instructions on creating a storage pool, refer to the Create storage pools section.

Notes:
• Physical devices must be prepared (virtualized, service enabled, or
reserved for a direct device) before they can be added into a storage
pool.
• For best results, create multiple storage pools with different roles (i.e.
mirror storage, snapshot storage, and CDP journaling storage).
• Each storage pool can only contain the same type of physical devices.
Therefore, a storage pool can contain only virtualized drives or only
service-enabled drives or only direct devices.
• A storage pool cannot contain mixed types. Physical devices that have
been allocated for a logical resource can still be added to a storage pool.

5. Select the Security tab and check the user named "Protection".

6. Edit the IPStor fshba.conf parameters for improved support with HP-UX 11.23.
• vi $ISHOME/etc/fshba.conf
• Add "reply_invalid_device=63" to the bottom of the fshba.conf file.
• Save the file and restart IPStor services by typing: ipstor restart all
Warning: The ipstor restart all command includes restarting fshba which will
break the FC connection and take the storage server offline.


Install the SAN Client

1. Download the package ipstorclient-4.50-954.tar to a temporary directory on the HP-UX client (host machine).

2. Extract the package ipstorclient-4.50-954.tar:
• Use the "cd" command to go to the temporary directory where the package was downloaded.
• Execute the command tar xvf ipstorclient-4.50-954.tar.

3. Install the IPStor Client:
swinstall -s `pwd`/ipstorclient-4.50-954.tar IPStor

4. Confirm that the package installation was successful by listing system-installed packages:
swlist | grep IPStor

Install the HP-UX file system Snapshot Agent

1. Download the package filesystemagent-4.50-901.depot to a temporary directory.

2. Navigate (use the "cd" command) to the temporary directory where the package
was downloaded.

3. Install the agent with the following command:


swinstall -s `pwd`/filesystemagent-4.50-901.depot -x allow_incompatible=true VxFSagent

4. Confirm that the package installation was successful by listing system installed
packages:
swlist | grep VxFSagent

5. Authenticate the IPStor Client to the CDP server by running ipstorclient monitor.
• If this is the first time running "ipstorclient monitor", select the FC or iSCSI protocol.
• Type y to enable Fibre Channel protocol support if you are using FC, or type y to enable iSCSI protocol support if you are using iSCSI.
• Choose 3 to add more servers.
• Enter the IP address of the IPStor/CDP server.
• Enter the user name and password for the Protection user.
• Type n when asked if you want to add more servers.
• Type 0 to exit.



Data Protection

FalconStor® CDP enables you to protect business-critical data and provide rapid
data recovery in the event of a system crash or disk failure. DiskSafe™, a host-based replication software agent that delivers block-level data protection for a broad base of software and hardware platforms, is available for both Windows and Linux environments. FileSafe™ is available for file-level data protection in Windows and Linux environments.
This chapter contains instructions for protecting your data using DiskSafe. For
additional information regarding file-based protection, refer to the FileSafe™ User
Guide.
In addition, there are Logical Volume Manager (LVM) scripts available for Unix platforms. Refer to the following sections for more information regarding data protection in your environment:
• Data Protection in a Windows environment
• Data protection in a Linux environment
• Data Protection in Red Hat Linux
• Data Protection in Solaris
• Data Protection in SuSE Linux
• Data Protection in an AIX environment

Data Protection in a Windows environment


CDP in a Windows environment uses DiskSafe for backup and disaster recovery (DR). For details on installing DiskSafe, refer to the Getting Started chapter.

Use DiskSafe

Once installed, you can access the DiskSafe application in three ways:
• Via the desktop: double-click the DiskSafe icon.
• Via the Start menu (Start --> Programs --> FalconStor --> DiskSafe).
• Via Computer Management (Start --> Settings --> Control Panel --> Administrative Tools --> Computer Management --> Storage --> DiskSafe).
The DiskSafe application window is divided into two panes. The left pane contains a
navigation tree with nodes that you can click, expand, and collapse. When you click
a node in the navigation tree, the right pane displays associated information. For
example, when you click the Disks node in the navigation tree, the right pane
displays a list of all protected disks and partitions, including their name, size, mirror
mode, current activity, and status information.


Accessing the menus: The menus at the top of the application window provide access to several functions that are common to all Microsoft® Management Console-based applications, such as exiting the application. In Windows 2008, Vista, 2003, and XP, the common functions are available via the File, Action, View, Window, and Help menus.

Note: In Windows 2000, the common functions are available via the Console,
Action, View, Window, and Help menus.

Functions that are specific to DiskSafe typically appear in the Action menu. The
Action menu is dynamic; the items that appear here change, depending on which
element of the user interface (UI) has focus. For example, when you click the Disks
node, the Action menu displays Protect. When you click the Events node, the Action
menu displays Set Filter.
You can also access DiskSafe functions by right-clicking the elements on the
screen. For example, to protect a disk or partition, you can either click the Disks
node and click Protect from the Action menu, or you can right-click the Disks node
and click Protect from the pop-up menu. (All procedures in this guide describe how to perform the functions using the pop-up menus.)

Showing, hiding, or re-ordering columns: You can determine which columns display in the right pane. For example, when you click the Disks node, the right pane displays the Primary, Capacity, Mode, Current Activity, Status, and Mirror columns by default. You can add and remove columns by selecting View from the main menu. For example, in Windows 2008, Vista, 2003, and XP, if you don't want the Capacity column to display, you can remove it from the screen by right-clicking the Disks node. Then click View --> Add/Remove Columns, click Capacity in the Displayed columns list, and then click Remove and OK. In Windows 2000, you can click View --> Choose Columns.

Note: You cannot remove or re-order the left-most column.

To restore the Capacity column in Windows 2008, Vista, 2003, and XP, click View --> Add/Remove Columns, click Capacity in the Available columns list, and then click Add. You can also restore the right pane to its default state by clicking Restore Defaults. In Windows 2000, click View --> Choose Columns, click Capacity in the Hidden columns list, and then click Add. You can also reset to your previously-set state by clicking Reset.
In addition, you can change the order of the columns. For example, to move the
Status column to the left of the Current Activity column, you would click Status in the
Displayed columns list and then click Move Up. To move it back to the right of the
Current Activity column, you would click Status and then click Move Down.

Sorting data: To quickly find the information that you want, you can click the column headings in
the right pane to sort the information in that column alphanumerically. For example,
when you click the Disks node, you can click the Capacity column heading to sort
the listed disks by size, or you can click the Mode column heading to sort them by
mirror mode (Continuous or Periodic).


Selecting items: In the right pane, most functions (such as viewing the properties of an item) can be
performed on only one item at a time. You can select an item by clicking anywhere in
the row. However, some functions (such as removing protection) can be performed
on multiple items simultaneously.
To select multiple contiguous items, click the first item, press and hold down the Shift
key, and then click the last item. All items between the first and last item are
selected. To select multiple non-contiguous items, hold down the Ctrl key as you
click each item.

Protect a disk or partition with DiskSafe

FalconStor DiskSafe provides an easy and efficient way to protect entire disks or selected partitions by capturing all system and data changes and journaling them on the CDP appliance without impacting application performance.
To protect your server:

1. From the Start menu, select Programs --> FalconStor --> DiskSafe.

2. Expand DiskSafe --> Protected Storage, right-click Disks, and then select
Protect.
The Disk Protection Wizard launches.

3. Click Next on the first screen of the wizard to get started.
The Eligible primary storage list displays remote virtual disks assigned to this host, local disks, previous mirrors, and any previously protected disks that have had protection removed but did not have the mirror deleted from the storage server. If the Eligible primary storage list does not display as expected, click Refresh to update the list.


4. Select the disk or partition to protect.
While protecting individual partitions provides more flexibility in choosing what to restore, protecting the entire disk offers better point-in-time data integrity for the entire disk.

5. From the Mirror Storage Selection page, select the disk or partition where the
primary storage disk or partition is to be mirrored and click Next. To select your
new CDP appliance, click New Disk.

6. On the Allocate Disk page, select the registered CDP appliance and then click
the Options button next to Disk Size to enable Continuous Data Protection.


If this is the first time you are protecting a disk, you will need to add the new CDP
server first by clicking the Add Server button.
• Enter the name or IP address of the storage server in the Server name text
box. (If the storage server is in a Windows domain, select the Windows
Domain Authentication check box and type the domain name in the
Domain Name text box. If the storage server is not in a Windows domain,
clear the Windows Domain Authentication check box.)
• Enter a user name (ipstoruser) and password (IPStor101) for accessing
the server or domain.
• Select the communication protocol(s) to use (iSCSI and/or Fibre Channel).
• Click OK on the Add Server dialog box.
The Snapshot Advanced Settings dialog displays, allowing you to enable CDP, specify the percentage of the snapshot resource size, and specify the size of the journal resource.

7. Enter a number for the percentage of the snapshot resource size.


CDP enhances the benefits of using TimeMark by recording all changes made to
data, allowing you to recover to any point in time.

8. Click the checkbox if you want to enable Thin Provisioning.


Thin Provisioning allows you to use your storage space more efficiently by
allocating a minimum amount of space for the virtual resource. Then, when
usage thresholds are met, additional storage is allocated as necessary. The
maximum size of a disk with thin provisioning is limited to 16,777,216 MB. The
minimum permissible size of a thin disk is 10 GB.

9. Enter a number for the size of the Journal resource.

10. Click OK on both screens to create the new mirror disk on the CDP appliance.


You should now see the new mirror disk in the Eligible mirror disks list. If you do
not, click Refresh.

The Mirror Mode and Initial Synchronization options screen displays.

11. Select the mirror mode and initial synchronization options and click Next.
Select Continuous mode to have the mirror updated simultaneously with the
local disk. There are four options to set for Continuous mode. You can leave the
default setting to balance performance and mirror synchronization or choose to
change the control parameters to stay in sync at the expense of performance or
vice versa. The options are as follows:


• Minimize performance impact to primary I/O - Select this option if you want to maintain performance even at the expense of breaking mirror synchronization. The maximum number of mirror buffers will be set at 64 and the wait time when the maximum buffer is reached will be set at zero seconds.
• Optimize data mirror coverage - Select this option if you want to stay in sync even if there is an impact on performance. The maximum number of mirror buffers will be set at 8 and the wait time when the maximum buffer is reached will be set at ten seconds.
• Balance performance and coverage - The default setting. A balance is
maintained between performance and mirror synchronization. The
maximum number of mirror buffers will be set at one and the wait time
when the maximum buffer is reached will be set at ten seconds.
• Advanced custom settings - Select this option to change the default values. You can change the maximum number of mirror buffers as well as the wait time (before the mirror state is broken) once the configured buffer limit is exceeded.
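The presets above reduce to two control parameters: the maximum number of mirror buffers and the wait time once that maximum is reached. As a rough sketch (the preset keys below are paraphrased labels, not a FalconStor API; the values come from the descriptions above):

```python
# Continuous-mode presets mapped to their two control parameters,
# using the buffer counts and wait times stated in this section.
PRESETS = {
    "minimize_performance_impact": {"max_mirror_buffers": 64, "wait_seconds": 0},
    "optimize_mirror_coverage":    {"max_mirror_buffers": 8,  "wait_seconds": 10},
    "balance":                     {"max_mirror_buffers": 1,  "wait_seconds": 10},
}

def preset_parameters(name):
    """Look up the control parameters for a named preset."""
    return PRESETS[name]
```

The Advanced custom settings option simply lets you supply these two values directly.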
Select Periodic mode to update the mirror at regularly scheduled intervals if you
have low network bandwidth on the CDP appliance.
Specify what data to copy during the initial synchronization by selecting or
clearing the Copy only sectors used by file system check box.
If your disk is formatted with a file system, select the Copy only sectors used by
file system option. Only the sectors used by the file system are copied to the
mirror. If you are using a database or other application that uses raw space on
the disk (without a file system), clear this option. If you clear this option, all
sectors on the entire disk are copied to the mirror.
Select the Optimize data copy check box to have DiskSafe scan both the local
disk and its mirror for changes in 4-KB blocks, and then copy the blocks to the
mirror. This uses minimal network bandwidth and speeds up synchronization.
Clear this check box to skip the local and mirror disk scan for changes and
simply copy all the data from the local disk to the mirror. This would be
appropriate if you have never used the selected mirror before, or if you used it
for another disk.

Note: This option is selected by default if you have selected a target disk that was mirrored before.

The Scheduling screen displays, allowing you to schedule snapshots for continuous mode or synchronization frequency for periodic mode.

12. Click Schedule to create scheduled synchronization or snapshot tasks.


The Task Creation screen displays, allowing you to schedule snapshots or synchronization of the local disk and mirror hourly, daily, weekly, or monthly.
For each schedule, specify the date and time to start. You can also specify an end date. Click the start date or end by field to display a calendar.
Scheduling options are described below.
• Click the Hourly radio button to synchronize the local disk and mirror every
specified number of hours and minutes. Enter the number of hours in the
first text box, and then specify the number of minutes in the second text
box.
• Click the Daily radio button to synchronize the local disk and mirror every
specified number of days.
• Click the Weekly radio button to synchronize the local disk and mirror every
specified number of weeks and then specify the day of the week the
synchronization is to occur.
• Click the Monthly radio button to synchronize the local disk and mirror every
specified number of months and specify the day of the month.
• Click the Advanced button from the Task Creation screen to further
customize your synchronization schedule.


For example, you can define and exclude holidays from the synchronization
schedule.
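As an illustration of how the interval options above translate into run times, the sketch below computes the next few synchronization times for an "every N hours and M minutes" schedule. This is illustrative only; DiskSafe's scheduler is configured through the dialog, not through code:

```python
from datetime import datetime, timedelta

def next_runs(start, hours, minutes, count):
    """Return the next `count` run times for an hourly-style schedule
    that repeats every `hours` hours and `minutes` minutes."""
    step = timedelta(hours=hours, minutes=minutes)
    return [start + step * i for i in range(1, count + 1)]

# An "every 1 hour 30 minutes" schedule starting at midnight:
runs = next_runs(datetime(2010, 9, 10, 0, 0), hours=1, minutes=30, count=3)
# Runs land at 01:30, 03:00, and 04:30.
```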

13. Click OK when you have finished configuring the schedule.

14. Click Next to confirm the schedule.

15. On the Advanced Synchronization Options page, customize your synchronization.


The dialog is different depending upon whether you selected continuous or periodic mode.
• Select Optimize data copy during synchronization to have DiskSafe scan
both the local disk and its mirror for changes and copy the 4 KB blocks to
the mirror. This option is handy for environments with slow connection
speeds or low bandwidth because you can minimize the impact to the
network if you have previously mirrored to the selected disk.
• If you want to limit network bandwidth utilization, enable Limit I/O
throughput generated by DiskSafe (KB/s) and set the maximum allowed
bandwidth.
• For periodic mode, use the If periodic synchronization fails, retry for field to
determine the retry period. If you select 30 minutes, DiskSafe will attempt
to synchronize the disks for half an hour; if you select Unlimited, DiskSafe
will continue to attempt synchronization indefinitely.
• For a gigabit environment, you can disable Suspend I/O when mirror disk
throughput deteriorates. If you select this option, you must also specify
the:
• Acceptable throughput - This option allows you to select a maximum
number of kilobytes per second that can be written to the mirror. If you
clear this option, synchronization will not be temporarily paused when
the mirror is not responding quickly enough. As a result, the host might
hang while waiting for the mirror to acknowledge that data has been
written to it.

Click Test to determine the optimum throughput setting for the disk where the mirror resides. It is recommended that you do not set this value higher than the value displayed by the test, to ensure DiskSafe can trigger a synchronization pause when needed.

For example, you might set the acceptable throughput to 10240 KB/s,
the deterioration threshold to 75%, and the interval to resume
synchronization to 10 minutes. In this case, if the throughput to the
mirror falls to 7680 KB/s (10240 x .75), DiskSafe will temporarily pause
synchronization and then resume again after 10 minutes.
• Deterioration threshold to suspend I/O - This option allows you to
select the percentage of the acceptable throughput at which
synchronization will pause.
• Interval to try resuming I/O - This option allows you to select the
interval to try resuming synchronization when using periodic mode.
Choose from 10 seconds to one hour.
• Encrypt mirror disk - Allows you to specify an encryption key to protect
data against unauthorized access of the mirror disk. Encryption must be
enabled and added while you are protecting the mirror disk; you cannot
add encryption after the disk has been protected. In addition, you cannot
remove encryption unless you remove the protection.

Click Change to add or import an encryption key.
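The worked throughput example above is simple arithmetic: the pause threshold is the acceptable throughput multiplied by the deterioration percentage. A sketch of that calculation (not FalconStor code):

```python
def suspend_threshold(acceptable_kb_s, deterioration_pct):
    """Mirror throughput (KB/s) below which synchronization is paused."""
    return acceptable_kb_s * deterioration_pct / 100.0

# 10240 KB/s acceptable throughput at a 75% deterioration threshold:
threshold = suspend_threshold(10240, 75)  # 7680.0 KB/s
```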


16. Specify the snapshot options in the Advanced Snapshot Options screen or click
Next to accept the defaults. This screen only displays if you are mirroring to a
remote disk and TimeMark or Snapshot is licensed on the storage server.

Snapshot option parameters are as follows:


• Take a temporary snapshot before each synchronization to recover the mirror in case of failure
If you select this option, a snapshot of the mirror is taken before the local disk and its mirror are synchronized. This ensures that, if an error occurs during synchronization, the mirror can be restored to its previous state. Once synchronization completes successfully, this temporary snapshot is automatically deleted. Clear this option if you do not want a snapshot taken before synchronization.
In continuous mode, this snapshot occurs when you resume protection
after it has been suspended, or when a network problem or other event
has interrupted the connection to the mirror.
• Invoke snapshot agents every __ scheduled snapshot(s)
This option allows you to specify whether or not to notify the snapshot
agents before and after a snapshot occurs.
If you select this option, you must also specify the interval at which the
snapshot agents will be invoked. For example, if you specify 1, the
snapshot agents will be invoked for every snapshot. If you specify 3, the
snapshot agents will be invoked only for every third snapshot.

Note: If you have snapshot agents but do not select this option, your agents
will not be invoked, and there might be problems with the integrity of your
snapshots, particularly for hosts running very active databases.


• Keep fixed number of snapshots


This option allows you to limit the number of snapshots that DiskSafe can
take to ensure that older snapshots taken by the storage server are not
deleted to make room for newer DiskSafe snapshots. Newer DiskSafe
snapshots replace only older DiskSafe snapshots, and newer storage
server snapshots replace only older storage server snapshots.
If this option is unchecked, the maximum number of snapshots supported
by the storage server is kept. To limit the number of snapshots to keep,
select this check box and then select the maximum number of snapshots
to keep from the adjacent list. (The maximum number displayed in this list
is the maximum number of snapshots supported by the storage server.)
The number of snapshots you keep and the frequency with which you take
them determines how far back you can retrieve data. For example, if you
limit the number of snapshots to 24, and you take a snapshot every day,
you can retrieve any data from the past 24 days. If you take snapshots
once a month, you can retrieve any data from the past two years.
• Define snapshot preserving patterns
This option allows you to organize your snapshots, indicating the number
of snapshots to keep at each level. You can specify how many snapshots
to keep for any or all of the following categories:
Hour (0 - 23)
Day (0 - 31)
Week (0 - 4)
Month (0 - 12)
Once you have entered the number of snapshots to keep at each level,
click the Advanced button to define the specific time and/or date for each
snapshot you are preserving.

• For hourly snapshots, define the minute of the hour to keep (0-59).
• For daily snapshots, define the hour of the day to keep (0-23).
• For weekly snapshots, define the day of the week to keep (Mon - Sun).
• For monthly snapshots, define the day of the month to keep (1-31).
The snapshot consolidation feature allows you to save a predetermined number of snapshots and delete the rest, regardless of whether they were scheduled or taken manually. The snapshots that are preserved are the result of the pruning process. This method allows you to keep only meaningful snapshots.
Every time a snapshot is created, DiskSafe checks to determine which
snapshots to purge. Outdated snapshots are deleted unless they are
needed for a larger granularity. The smallest unit snapshot is used. Then
subsequent snapshots are selectively left to satisfy the Daily, Weekly, or
Monthly specification.
When defining the snapshot preserving pattern, you need to specify the
offset of the moment to keep. For example, for daily snapshots, you are
asked which hour of the day to use for the snapshot. For weekly
snapshots, you are asked which day of the week to keep. If you set an
offset for which there is no snapshot, the closest one to that time is taken.
The default offset values correspond to typical usage based on the fact
that the older the information, the less valuable it is. For instance, you can
take snapshots every 20 minutes, but keep only those taken at minute 00 of each hour for the last 24 hours, along with 7 snapshots representing the last 7 days (taken at midnight), 4 snapshots representing the last 4 weeks (taken on Mondays), and 12 snapshots representing the last 12 months (taken on the first day of the month).
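The pruning described above can be sketched as a retention policy over snapshot timestamps: for each granularity, keep the newest snapshot in each of the most recent N hourly/daily/weekly/monthly buckets and discard everything else. This is an illustration of the consolidation idea, not DiskSafe's actual algorithm (which also honors the configured time-of-day offsets):

```python
from datetime import datetime, timedelta

def prune(snapshots, keep_hourly=24, keep_daily=7, keep_weekly=4, keep_monthly=12):
    """Return the set of snapshot timestamps to preserve."""
    keep = set()
    granularities = [
        (lambda t: (t.year, t.month, t.day, t.hour), keep_hourly),  # one per hour
        (lambda t: (t.year, t.month, t.day), keep_daily),           # one per day
        (lambda t: t.isocalendar()[:2], keep_weekly),               # one per ISO week
        (lambda t: (t.year, t.month), keep_monthly),                # one per month
    ]
    for bucket_of, limit in granularities:
        seen = []
        for snap in sorted(snapshots, reverse=True):  # walk newest first
            bucket = bucket_of(snap)
            if bucket not in seen:
                seen.append(bucket)
                if len(seen) <= limit:
                    keep.add(snap)  # newest snapshot in this bucket
    return keep

# Ten days of snapshots taken every hour:
snaps = [datetime(2010, 9, 1) + timedelta(hours=h) for h in range(240)]
kept = prune(snaps)
```

Snapshots kept for one granularity may coincide with those kept for another, so the preserved set is at most the sum of the per-level limits.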

17. On the Completing the Disk Protection Wizard page, review your selections and
then click Finish.
Your data is now protected. You can check the status of each disk immediately.
You can view information about the synchronization mode, current activity, and
synchronization status.

After protecting a disk, the mirror will appear in Disk Management. For details
about the information displayed in this screen, see the DiskSafe User Guide.


Protect a group of disks

Once you have protected two or more disks or partitions, you can put them into
groups. Groups offer synchronization advantages and ensure data integrity amongst
multiple disks.
For example, if your database uses one disk for its data and a separate disk for its
transaction logs and control files, protecting both disks and putting them into a group
causes snapshots of both disks to be taken at the same time, ensuring the overall
integrity of the database in case you need to restore it.
Likewise, if you are using a dynamic volume that spans multiple physical disks,
protecting all the related disks and putting them in a group ensures that they can be
reliably protected and restored.
Follow the steps below to protect a group of disks:

1. Follow the standard procedure to protect a disk or partition.


You do not need to configure snapshot policies or any other options. Policies
configured for a group will apply to all members of that group.
After protecting a disk or partition, allow each to finish its initial synchronization.

2. In the DiskSafe console, expand DiskSafe --> Protected Storage, right-click Groups, and select Create.
The Create Group wizard launches.

3. Click Next to get started.

4. On the Group Mirror Mode page, enter a Group name (up to 64 letters or
numbers). Then select Continuous mode or Periodic mode and click Next.

5. Click the Schedule button to create scheduled tasks.


6. On the Continuous Mode Snapshot Schedule page, select Take snapshot to enable scheduled snapshots and then set how often snapshots should be taken.

7. On the Advanced Snapshot Options page, keep all of the default settings.

8. On the Completing the Disk Protection Wizard page, review your selections and
then click Finish.
The group is created. You will be prompted to add members into your newly
created group. To add members into the new group, click Yes.

9. On the Add Member page, select all disks that should be added into the group.

You cannot add disks while the following activities are taking place: initial
synchronization, analysis of data, taking of a snapshot, or restoration.
You can add a disk or partition to a group at any time, as long as the group is in one of the following states:
• Empty
• Waiting for synchronization
• Synchronizing
• Suspended
• To add a disk or partition to a group, expand DiskSafe --> Protected
Storage --> Groups.
• In the right pane, right-click on the group to which you want to add a disk
or partition and click Join.
• From the Protected disks list, select the disks you want to add to the group
and click OK.
Policies configured for a group will apply to all members of that group. For
example, Disk 0 Partition 1 was configured for periodic mode. After it has been


added into a group configured for continuous mode, the mirror mode for the disk
is immediately changed to continuous mode.

To test the group snapshot function, right-click on the group and select Advanced
-->Take Snapshot.
You can monitor the activity of the disks in the group. The Current Activity of all the disks should be Taking Snapshot, and the creation time of this snapshot should be the same for all the disks.

Suspend or resume protection

Once you have enabled protection for a disk or partition, you can suspend it at any
time. For example, if several hosts are mirroring continuously to a remote disk, and
the network is experiencing temporary bandwidth problems, you might want to
suspend protection for one or more hosts until full network capacity is restored.


When you suspend protection, data is written only to the local disk, not to the mirror.
As a result, the local disk and its mirror become out of sync. When you later resume
protection, the disks are synchronized automatically.

Note: If the disk or partition is part of a group, you cannot suspend protection for
that individual member. You can only suspend or resume protection for the entire
group.

To suspend protection:

1. Expand DiskSafe --> Protected Storage and then click Disks or Groups.

2. In the right pane, right-click the disk, partition or group for which you want to
suspend protection, and then click Suspend.
The Current Activity column displays Suspended.
To resume protection:

1. Expand DiskSafe --> Protected Storage and then click Disks or Groups.

2. In the right pane, right-click the disk, partition, or group for which you want to
resume protection, and then click Resume.
If the disk, partition, or group uses continuous mode, synchronization occurs
immediately. If it uses periodic mode, the local disk and its mirror are
synchronized at the regularly scheduled time.


Data protection in a Linux environment


DiskSafe for Linux allows you to protect entire disks, selected partitions, root disks/partitions/LVM volumes, or data logical volumes. Protecting the entire disk provides point-in-time data integrity for the entire disk.

Installing DiskSafe for Linux

DiskSafe for Linux is distributed as an rpm for each distribution and version.
DiskSafe is installed in the /usr/local/falconstor/disksafe directory and
can only be installed and used by root user.
Install DiskSafe by using the dsinstall.sh script. This script performs the following functions:

1. Installs the standard iSCSI Initiator if it is not already installed.

2. Upgrades the iSCSI Initiator with FalconStor iSCSI Initiator. Support files are
provided with the release.

3. Installs FalconStor SAN Disk Manager (SDM)/IMA.

4. Prompts you to enter the license key, if necessary.

5. Adds a new storage server, if necessary. You will be prompted to enter the IP address and account credentials, and to enable the iSCSI or Fibre Channel protocol if supported. If a protocol is not enabled during installation, a message displays informing you to enable it manually after the DiskSafe installation.

6. Installs FalconStor DiskSafe.

Using DiskSafe for Linux

During the protection process, you will specify which local or remote disk to use as a
mirror. When specifying the mirror, keep in mind that a mirror must be an entire disk;
it cannot be a partition. However, when you protect a partition, a corresponding
partition is created on the mirror disk.
When creating a mirrored disk on the storage server, TimeMark is enabled on the
device and a snapshot resource is created that is 20% of the size of the original disk.
The snapshot resource is configured with an automatic expansion policy. If needed,
you can manually expand the snapshot resource from the storage server console.
Other configuration options include specifying whether to write data to the mirror
continuously or periodically, and other options discussed in this section.
Some rules to remember when protecting your data:
• If you protect an entire disk, you cannot subsequently protect an individual
partition of that disk. Likewise, if you protect only an individual partition of a
disk, you cannot later protect the entire disk. However, you can protect all


the other partitions of the disk. (To switch from protecting an entire disk to protecting just a partition or vice versa, you must remove and then re-protect the affected disks or partitions.)
• It is recommended that you do not change the size of the primary disk/
partition once protection is created. If you protect a disk, the size of the
mirror disk is the same size as the primary disk. If you protect a partition, the
mirror size is at least one MB larger than the primary partition. Therefore, if
the size of the primary disk or partition is changed, the mirror disk image
may be corrupted. If this occurs, you will need to remove the protection then
recreate it.
• If the host already exists on the storage server, it must use CHAP
authentication. Otherwise, authentication errors will occur.

Protecting disks and partitions

A disk can be protected by DiskSafe on a mirror disk that is the same size as the primary disk. Protecting a partition requires a mirror disk that is at least one MB larger. Both local disks and IPStor/CDP disks can be used as primary and/or mirror disks. However, features such as snapshots are available only when the mirror disk is a CDP disk.
The mirror disk ID is an optional parameter. If not specified, a disk the same size as
the primary disk is automatically assigned using SDM/IMA from an already
configured storage server and will be used as a mirror disk during disk protection.
During partition protection, the automatically assigned mirror disk size is one MB
more than the primary partition size.
If the size of the disk or partition to be protected is more than 10 GB, you will be
prompted to use thin provisioning for mirror disk allocation.

Notes: Thin Provisioning allows you to use your storage space more efficiently by allocating a minimum amount of space for the virtual resource. Then, when usage thresholds are met, additional storage is allocated as necessary. The maximum size of a disk with thin provisioning is limited to 16,777,146 MB. The minimum permissible size of a thin disk is 10 GB. Thin Provisioning is available with CDP, or IPStor version 6.0 or later.
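The sizing rules above — a disk mirror matches the primary's size, a partition mirror is at least one MB larger, and thin provisioning applies only within the stated limits — can be summarized in a small sketch. The helper names are illustrative, not a FalconStor API; the limits are the values quoted in this section:

```python
THIN_MIN_MB = 10 * 1024      # minimum permissible thin disk: 10 GB
THIN_MAX_MB = 16_777_146     # maximum thin-provisioned size quoted above, in MB

def required_mirror_size_mb(primary_mb, is_partition):
    """Mirror size rule: equal for a whole disk, at least 1 MB larger for a partition."""
    return primary_mb + 1 if is_partition else primary_mb

def thin_provisioning_offered(size_mb):
    """Thin provisioning is prompted only within the documented size limits."""
    return THIN_MIN_MB <= size_mb <= THIN_MAX_MB
```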

Syntax:
dscli disk protect primary=<DiskID> [mirror=<DiskID>]
[-mode:continuous | periodic <daily [-days:<#>] [-time:<H:M>] | hourly
[-hours:<H:M>]>] [-starttime:<Y-M-D*H:M>]
[-microscan]
[-force]
[-fsscan]
[-umappath:path]

CDP Administration Guide 45


Data Protection

Option Description
-mode The default protection mode is continuous. In
continuous mode, all write operations on a protected
disk are performed on both the primary and mirror disk
at the same time.
If you select Periodic mode, you can specify the
synchronization schedule to update the mirror at
regularly scheduled intervals.
-starttime:<Y-M-D*H:M> The default start time is right now. The start time
parameter can be used to specify the protection starting
time. Initial synchronization is done when the start time
is reached. A start time less than the current time is
treated as the current time.
-microscan You can use this option to analyze each
synchronization block on-the-fly during synchronization
and transmit only changed sectors in the block.
-force You can use this option when the specified mirror disk
has partitions.
-fsscan You can use this option to copy only the sectors used
by the file system during initial synchronization. Any
unallocated space, or any partition with a file system
not supported by DiskSafe, is treated as different
data. The currently supported file systems are ext2
and ext3.
-umappath:path You can use this option to specify the location for
storing the umap. Note: To protect a system disk when
root is a Linux logical volume, the umappath must be
on a separate disk.


Notes:
• If mirror=<DiskID> is not specified, DiskSafe will allocate a new disk. The
following are the defaults:
-- default protection mode is continuous
-- default path for the UMAP is /usr/local/falconstor/disksafe
-- default days value is 1 for daily
-- default hours value is 1 for hourly
-- default start time is right now
• Protecting and unprotecting system directories other than root (/), i.e.,
disks or partitions mounted at /usr, /var, /etc, /tmp, etc., requires a
reboot to enable protection. In addition, these disks can join a group, but
group stop and group rollback are not allowed. They can only be restored
using the DiskSafe Recovery CD. If you would like to manually mount a
protected disk/partition, the device name must include the DiskSafe
layer. For example: /dev/disksafe/sdb or /dev/disksafe/sdc1.
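As an illustration of the protect syntax above, the following sketch shows a disk
protected in continuous mode and a partition protected in periodic mode. The device
IDs sda, sdb2, and sdd are placeholders for your own devices; verify the exact option
grouping against the DiskSafe User Guide.
Example:
dscli disk protect primary=sda mirror=sdd -microscan
dscli disk protect primary=sdb2 -mode:periodic daily -days:1 -time:22:00
The first command mirrors sda continuously and transmits only changed sectors
within each synchronization block; the second lets DiskSafe allocate the mirror
automatically and synchronizes once a day at 22:00.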

Scheduled Disk Protection

Scheduled disk protection is used when you do not want your disk write operations
to slow down due to continuous mirroring. Select Periodic mode to synchronize at
predefined daily or hourly times. Once periodic
protection is enabled, the I/O operations are only performed on the primary disk and
all blocks updated between one synchronization and the next are flagged. When a
synchronization point is reached, all flagged blocks are synchronized or copied from
the primary to the mirror disk.
A schedule can be specified for periodic protection while protecting a disk or by
changing the mode of a protected disk. Refer to the DiskSafe User Guide for
additional information.
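For example, an already protected disk can be switched to a periodic schedule with
the disk schedule command. The sketch below is based on the disk schedule syntax
shown in the command table of this chapter; the disk ID sda and the time values are
placeholders, so verify the option grouping against the DiskSafe User Guide.
Example:
dscli disk schedule sda -mode:periodic hourly -hours:4:00 -exclude:09:00-17:00
This would synchronize the mirror every four hours while excluding the 09:00-17:00
business window from synchronization.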

DiskSafe for Linux operations

Various disk operations can be performed by DiskSafe on a protected disk. Some
commands, such as disk list, can also be used on unprotected disks. The supported
commands are described in detail in the following table:

Command class Operation


disk list [-all | -refresh | -protected | -primaryeligible |
[<DiskID> [-mirroreligible]] | [<MirrorDiskID>
[-restoreeligible]]] [server=<#> protocol=<fc | iscsi>]
*Refer to the DiskSafe User Guide for more information.


Command class Operation


disk protect primary=<DiskID> [mirror=<DiskID>]
[-mode:continuous | periodic <daily [-days:<#>] [-
time:<H:M>] | hourly [-hours:<H:M>]> ] [-starttime:<Y-M-
D*H:M>]
[-microscan]
[-force]
[-fsscan]
[-umappath:path]
disk unprotect <<DiskID> |<DiskSafeID>>
disk stat <DiskID>
disk suspend <DiskID>
disk resume <DiskID>
disk stop <DiskID>
disk schedule <DiskID>[-mode:continuous | periodic <daily [-days:<#>] [-
time:<H:M>]|hourly [-hours:<H:M>] [-exclude:<H:M>-
<H:M>]>][-starttime:<Y-M-D*H:M>]
disk limitio <DiskID> maxio=<#>
disk sync <DiskID>
disk stopsync <DiskID>
disk analyze <DiskID>
disk stopanalyze <DiskID>
disk restore <MirrorDiskID> <TargetDiskID> [timestamp=<#>][-force]
disk retrysync <DiskID> period=<#>
disk add server=<#> <sizeinmb=<#> | sectors=<#>> protocol=<fc |
iscsi> [-thindisk]
disk delete server=<#> protocol=<fc | iscsi> deviceid=<#> [-force]
disk help

Root Disk Protection

In Linux, root can either be an LVM logical volume or a native disk partition. Root (/)
or any busy system partition protection such as /etc, /usr, /var, etc. requires the
system to reboot to enable or disable protection. Hence after a successful
protection, synchronization does not start until the system is rebooted. The same is
applicable for unprotection, where DiskSafe devices are not removed until the
system is rebooted following an unprotect operation.
Protection must be set at the LVM PV (physical volume) level. All PVs in a logical
volume or a volume group can be joined together in a group to enable consistent
data on all PVs for snapshots.


Note: Protection of the root logical volume is not supported at this time and cannot
be restored using the Recovery CD.
Protecting a system disk or partition when root is a native disk or partition requires a
reboot after enabling or disabling protection. System disk or partition protection
when root is an LVM logical volume requires an update of the LVM configuration so
that the volume group and logical volume uses the DiskSafe disk instead of native
disk. The steps required to protect or unprotect a system disk or partitions when root
is a logical volume are described in this section.
The following limitations are applicable for root or any system partition protection
that is in continuous use:
• Restore operations are not allowed when disks are online.
• Stop protection is not allowed. Use suspend and unprotect operations only.
• Protection cannot join a group until the system reboots and protection is
fully enabled.
• If protection is part of a group, then group stop and group rollback is not
allowed.
The DSRecPE Recovery CD must be used to restore from a mirror or a snapshot.
Recovery using the Recovery CD requires the following conditions. For additional
information on using the Recovery CD, refer to Chapter 6.
• iSCSI protocol must be enabled on the Linux host.
• The recovery password must be set using the following command.
#dscli server recoverypwd server=<#> passwd=<#>

Notes:
• The DiskSafe uninstall operation is not allowed while a root or a busy
system disk is protected. Any such protections must be unprotected and
the machine rebooted before proceeding with uninstalling DiskSafe.
• The LVM configuration filter can be specified in different ways as
required. Use the information specified below as a guideline.

Protect LVM Root PV(s)
To protect a root logical volume, follow the steps below:
1. Prepare the umap disk.
Root PV protection requires the umap to be on a disk other than the root logical
volume. The umap path should be specified when protecting the PV.
Alternatively, a separate disk can be used for storing the umap; sample steps
are given below.
Example:
• #mkfs /dev/sdb
• #mount /dev/sdb /mnt/umappath
• Add the mount point entry in /etc/fstab to make sure it is mounted
automatically:
/dev/sdb /mnt/umappath ext2 defaults 0 0


2. Protect the root PV with a specified path for storing the umap.
Example: dscli disk protect primary=sda mirror=sdd -
umappath:/mnt/umappath

3. Update the LVM configuration (lvm.conf file) to recognize DiskSafe devices.


Change the filter settings in /etc/lvm/lvm.conf
The default filter setting in lvm.conf:
filter = [ "a/.*/" ]
Change the filter to include the DiskSafe device
filter = [ "a|^/dev/disksafe/<root PV name>.*|",<other required
PVs>, "r/.*/" ]
Example:
filter = [ "a|^/dev/disksafe/sda.*|", "r/.*/" ]
filter = [ "a|^/dev/disksafe/sda2.*|", "a|^/dev/sda3.*|", "r/.*/"
]

4. Reboot.

5. Check to make sure the DiskSafe device is in use as a PV instead of the native
device.
Example:
#pvdisplay
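If protection is active, pvdisplay should report the DiskSafe device path rather than
the native one. The abridged output below is illustrative only; the volume group
name is a placeholder:
--- Physical volume ---
PV Name /dev/disksafe/sda2
VG Name rootvg
If pvdisplay still shows the native /dev/sda2 path, re-check the lvm.conf filter
configured in step 3 and reboot again.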

Unprotect LVM Root PV(s)
To unprotect the physical volume of a root logical volume, follow the steps below:
1. Unprotect the physical volume of the root logical volume LVM (system disk).

2. Update LVM configuration (lvm.conf file) to recognize native physical volume.


Change the filter settings in /etc/lvm/lvm.conf
The filter for a protected device in lvm.conf:
filter = [ "a|^/dev/disksafe/<PV name>.*|",<other required PVs>,
"r/.*/" ]
Change the filter to include the native device instead of the DiskSafe device
filter = [ "a|^/dev/<root PV name>.*|",<other required PVs>, "r/
.*/" ]
Example:
filter = [ "a|^/dev/sda.*|", "r/.*/" ]
filter = [ "a|^/dev/sda1.*|", "a|^/dev/sda3.*|", "r/.*/" ]

3. Reboot.


Data Protection in SuSE Linux


This chapter explains how to protect your data with CDP using volume managers on
SUSE Linux systems, including details on using Enterprise Volume Management
System (EVMS) to mirror a SUSE Linux disk running SUSE version 9.0 and above.
EVMS allows more flexible system management by creating a consistent way to
manage various block devices. EVMS uses regions to interface to both software raid
('md') and Linux Volume Manager (LVM). For example, a RAID 0 region for all RAID
0 volumes and a RAID 1 region for RAID 1 mirrors.
EVMS configuration can be done using one of the following three methods:
• To launch the EVMS graphical user Interface (GUI), enter evmsgui from a
console.
• To launch EVMS in text mode (aka Ncurses), enter evmsn from a console.
• To use the command line interface (CLI), enter evms from a console.

Pre-configuration

To use this out-of-band CDP solution, you will have to use internal storage, allocated
from a non-IPStor resource, as your primary disk. For your mirror resource, you will
have to allocate a suitably-sized resource from an IPStor CDP appliance using the
iSCSI or FC protocol.
The example used in this section assumes the following:
• The source disk is /dev/sdb and is 2000 MB in size
• The IPStor disk is /dev/sdc and is 2000 MB in size

Note: The source disk should not contain any volume that shares a physical disk
with another volume; otherwise, a restore affects all volumes on the physical disk.


Hardware/software requirements

Component Requirement

CDP Appliance • Certified enterprise class server provided by FalconStor, customer,
or vendor
• Integrated RAID
• Redundant power supply
• Dual Ethernet
• PCI-X, PCI-E slots
• QLogic FC HBAs with a minimum of two available ports (see
FalconStor Certification Matrix for supported HBAs)
Enterprise class storage • Supplied by customer or vendor
• Directly attached to CDP appliance or by FC switch
FalconStor Management Console A virtual or physical machine running any version of
Microsoft Windows that supports the Java 2 Runtime Environment (JRE).
EVMS Enterprise Volume Management System (EVMS) to mirror a SUSE
Linux disk running SUSE version 9.0 and above.
CDP Installation media Required if CDP software needs to be installed on the appliance
CDP License key Keycodes to enable CDP components

Set up the mirror

1. In the FalconStor Management Console, create a SAN Resource that is large
enough to be used as the mirror disk.

2. Add the Linux machine as a client.

3. Assign the client to the SAN Resource.

4. Scan for the new IPStor disk on the Linux client.

5. Find the new IPStor disk (for example: /dev/sdc).

6. Create DOS partitions.

7. Create an EVMS segment.

8. Create a RAID-1 mirror.

9. Create an EVMS volume in the RAID region.

10. Create a file system for the volume.

11. Mount the file system.
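For reference, steps 6 through 11 can also be performed in a single evms CLI
session. The commands below are collected from the CLI procedures that follow;
the device names sdb/sdc and the volume name database match the example used
throughout this section.
Delete:/dev/evms/sdb
Add Segment Manager:DosSegMgr={},sdb
Create: Segment,sdb_freespace1,size=2000MB
Create: Region,MDRaid1RegMgr={},sdb1,sdc1
Create: Volume, "md/md0", Name="database"
Mkfs: Ext2/3={vollabel=database}, /dev/evms/database
Save
Quit
Repeat the Delete, Add Segment Manager, and Create: Segment commands for the
mirror disk sdc before creating the RAID region.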


Create a DOS partition

You will need to log in as root or equivalent to perform the following steps.

Using EVMS GUI
To launch the GUI:
1. Enter evmsgui from a console.

2. Select Actions --> Add --> Segment Manager.

3. From the list, select the DOS Segment Manager, and then click Next.

4. Select the device and then click Add to initialize it.

Using EVMS CLI
To launch the CLI:
1. Enter evms from a console.

2. Delete the existing partitions using the following command:


You will need to do this on both the source and mirror resource.
Delete:/dev/evms/sdb

3. Use the following command to create a DOS partition on the resource.


You will need to do this on both resources.
Add Segment Manager:DosSegMgr={},sdb

Create an EVMS segment

Using EVMS GUI
To launch the GUI:
1. Enter evmsgui from a console.

2. Select Action --> Create --> Segment to open the DOS Segment Manager.

3. Select the free space segment you want to use.

4. Specify the amount of space to use for the segment.

5. Specify the segment options and then click Create.

Using EVMS CLI
To use the command line interface (CLI):
1. Enter evms from a console.

2. Allocate a segment from each resource using the following command:


Create: Segment,sdb_freespace1,size=2000MB


Create a RAID-1 mirror

Using EVMS GUI
To launch the GUI:
1. Enter evmsgui from a console.

2. Select Action --> Create --> Region to open the Create Storage Region dialog
box.

3. Specify "MD RAID 1 Region Manager" as the type of software RAID you want to
create.

4. From the list of storage objects, select the ones to use for the RAID device.
IMPORTANT: The order of the objects in the RAID is implied by their order in the
list.

5. Click Create to create the RAID device under the /dev/evms/md directory.
The device has a name such as md0 and EVMS mount location
/dev/evms/md/md0.

Using EVMS CLI
To launch the CLI:
1. Enter evms from a console.

2. Create a RAID device using the following command:


Create: Region,MDRaid1RegMgr={},sdb1,sdc1

Create an EVMS volume in the RAID region

Using EVMS GUI
To launch the GUI:
1. Enter evmsgui from a console.

2. Select Action --> Create --> EVMS Volume or Compatible Volume.

3. Select the RAID-1 mirror device that you created above, such as /dev/evms/md/
md0.

4. Specify a name for the device.


Use standard ASCII characters and naming conventions.

5. Click Done.


Using EVMS CLI
To launch the CLI:
1. Enter evms from a console.

2. Create a volume using the following command:


Create: Volume, "md/md0", Name="database"

Create a file system

Using EVMS GUI
To launch the GUI:
1. Enter evmsgui from a console.

2. Select Action --> File System --> Make to view a list of file system modules.

3. Select the type of file system you want to create, such as ReiserFS or Ext2/3FS.

4. Select the RAID-1 mirror device that you created above, such as /dev/evms/
md/md0.

5. Specify a name to use as the Volume Label and then click Make.
The name must not contain any spaces or it will fail to mount later.

6. Click Save to create the file system.

Using EVMS CLI
To launch the CLI:
1. Enter evms from a console.

2. Create a file system using the following command:


Mkfs: Ext2/3={vollabel=database}, /dev/evms/database

3. Save and quit to commit all of the changes made.


Save
Quit

Mount a RAID device

To mount a RAID device using the GUI:

1. Select Action > File System > Mount.

2. Select the RAID device you created earlier, such as /dev/evms/md/md0.

3. Specify the location where you want to mount the device, such as /home.

4. Click Mount.
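Alternatively, once the changes have been saved, the volume can be mounted from a
shell like any other block device. This is a minimal sketch using the mount point
/home from the GUI example:
#mount /dev/evms/md/md0 /home
Add a matching entry to /etc/fstab if the volume should be mounted automatically
at boot.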


Recovery and rollback

This section explains how to recover data after a failure.

Recover from an out-of-sync mirror
Sometimes a disk can have a temporary problem that causes the disk to be marked
faulty and the RAID region to become degraded. For instance, a loose drive cable
can cause the MD kernel driver to think the disk has disappeared. When the cable is
plugged back in, the disk should be available for normal use. However, the MD
kernel driver and the EVMS MD plug-in might continue to indicate that the disk is a
faulty object because the disk might have missed some writes to the RAID region
and will therefore be out of sync with the rest of the disks in the region.
In order to correct this situation, the faulty object needs to be removed from the
RAID region and added back to the RAID region as a spare. When the changes are
saved, the MD kernel driver will activate the spare and sync the data and parity.
When the sync is complete, the RAID region will be operating in its original, normal
configuration.
This procedure can be accomplished while the RAID region is active and in use.

1. Remove the out-of-sync mirror from the RAID, which should have been marked
as a faulty disk.
• In EVMS, select Actions --> Remove --> Faulty Object from a Region.
• Select the RAID device you want to manage from the list of regions and
click Next.
• Select the failed disk.
• Click Remove.

2. Add the disk back as a spare for the RAID.


• In EVMS, select Actions --> Add --> Spare Disk to a Region.
• Select the RAID device you want to manage from the list of regions and
click Next.
• Select the device to use as the spare disk.
• Click Add.
The md driver automatically begins the replacement and synchronization
process. You can follow the progress of the synchronization process by checking
the /proc/mdstat file.
You can control the speed of synchronization by setting parameters in the
/proc/sys/dev/raid/speed_limit_min and /proc/sys/dev/raid/
speed_limit_max files.
To speed up the processing, echo a larger number into the speed_limit_min
file.
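For example, a hedged sketch assuming the md driver defaults (the value 50000
KB/sec is arbitrary and should be tuned for your hardware):
#echo 50000 > /proc/sys/dev/raid/speed_limit_min
This raises the minimum guaranteed rebuild speed so the resynchronization finishes
sooner, at the cost of more I/O load on the disks.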
You can monitor the status of the RAID to verify that the process begins and then
completes successfully.


Roll a disk back to a previous TimeMark
1. Manually mark the local disk of the mirror as faulty.
In EVMS, use the markfaulty plug-in function for RAID-1. This command can be
used while the RAID region is active and in use.

2. Remove the faulty disk from the mirror.


• In EVMS, select Actions --> Remove --> Faulty Object from a Region.
• Select the RAID device you want to manage from the list of regions and
click Next.
• Select the failed disk.
• Click Remove.

3. Unmount all file systems and volumes from the mirror region.

4. Roll back the CDP appliance from the FalconStor Management Console using
the CDP journal or snapshots.
You may want to create a TimeView first to identify the appropriate time. A
TimeView allows you to verify the data before converting the primary disk.
Information about how to mount a TimeView and roll back from a snapshot or
from the CDP journal can be found in the CDP Reference Guide.

5. Add the local disk back as a spare for the RAID.


• In EVMS, select Actions --> Add --> Spare Disk to a Region.
• Select the RAID device you want to manage from the list of regions and
click Next.
• Select the device to use as the spare disk.
• Click Add.
The md driver automatically begins the replacement and synchronization
process.
You can monitor the status of the RAID to verify that the process begins and then
completes successfully.

Monitor the status of RAID devices in EVMS
Using EVMS GUI
The Regions tab in the EVMS GUI (evmsgui) lists any software RAID devices that
are defined and whether they are currently active.
Using /proc/mdstat
A summary of RAID and status information (active/not active) is available in the
/proc/mdstat file.
Using mdadm
To view the RAID status with the mdadm command, enter the following at a terminal
prompt:
mdadm -D /dev/mdx
Replace mdx with the RAID device number.
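For example, to check the first RAID device and watch resynchronization progress
(both are standard Linux md utilities):
#mdadm -D /dev/md0
#cat /proc/mdstat
While a resync is running, /proc/mdstat shows a progress indicator and the
estimated time to completion.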


Data Protection in Solaris


This chapter explains how to protect your data in a Solaris environment using the
Solaris Volume Manager (SVM) to mirror a Solaris disk.

Hardware/software requirements

Component Requirement

CDP Appliance • Certified enterprise class server provided by FalconStor, customer,
or vendor
• Integrated RAID
• Redundant power supply
• Dual Ethernet
• PCI-X, PCI-E slots
• QLogic FC HBAs with a minimum of two available ports (see
FalconStor Certification Matrix for supported HBAs)
Enterprise class storage • Supplied by customer or vendor
• Directly attached to CDP appliance or by FC switch
FalconStor Management Console A virtual or physical machine running any version of
Microsoft Windows that supports the Java 2 Runtime Environment (JRE).
Solaris Volume Manager (SVM) The Solaris Volume Manager to mirror a Solaris disk.
CDP Installation media Required if CDP software needs to be installed on the appliance
CDP License key Keycodes to enable CDP components

Set up the mirror

1. In the FalconStor Management Console, create a SAN Resource that is large


enough to be used as the mirror disk.

2. Add the Solaris machine as a client.

3. Assign the client to the SAN Resource.

4. Use the devfsadm command to perform a device scan on Solaris and then use
the format command to verify that the client claimed the device.

5. Create a metadb of your primary disk.


#metadb -a -f -c 1 /dev/dsk/c2t8d0s2
The metadb is a repository that tracks the state of each logical device.

6. Create two stripes for the two sub-mirrors as d21 and d22:


#metainit d21 1 1 c2t6d0s2


#metainit d22 1 1 c2t7d0s2

7. Specify the primary disk that is to be mirrored by creating a mirror device (d20)
using one of the sub-mirrors (d21):
#metainit d20 -m d21

8. Attach the second sub-mirror (d22) to the mirror device (d20):


#metattach d20 d22

9. Set a TimeMark policy on the mirror disk.


Refer to your CDP Reference Guide for more information about configuring
TimeMark.
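After step 8, the initial synchronization of the sub-mirrors runs in the background.
Its progress can be checked with the standard SVM status command, shown here as a
sketch (d20 matches the mirror device created above):
#metastat d20
metastat reports the state of each sub-mirror (for example, Okay or Resyncing) and
the resynchronization percentage.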

Break the mirror for rollback

When you want to perform a rollback, the primary disk and the mirror disk (the
IPStor virtual device) will be out of sync and the mirror will need to be broken. In
Solaris SVM, this can be achieved by placing the primary and mirror device into
logging mode.

1. Disable the remote mirror software and discard the remote mirror:
rmshost1# sndradm -dn -f /etc/opt/SUNWrdc/rdc.cf

2. Edit the rdc.cf file to swap the primary disk information and the secondary
disk information. Unmount the remote mirror volumes:
rmshost1# umount mount-point

3. When the data is de-staged, mount the secondary volume in read-write mode so
your application can write to it.

4. Configure your application to read and write to the secondary volume.


The secondary bitmap volume tracks the volume changes.

5. Fix the "failure" at the primary volume by disabling logging mode using the
resynchronization command.

6. Quiesce your application and unmount the secondary volume.


You can now resynchronize your volumes.

7. Roll back the secondary volume to its original pre-disaster state to match the
primary volume by using the sndradm -m copy or sndradm -u update
commands.
Keep the changes from the updated secondary volume and resynchronize so
that both volumes match using the sndradm -m r reverse copy or the
sndradm -u r reverse update commands.


Data Protection in Red Hat Linux


This section explains how to protect your data with CDP in a Red Hat Linux
environment, including using Logical Volume Manager 2 (LVM2) to mirror a Red Hat
Linux disk.
LVM2 is a tool that provides logical volume management facilities on Linux. It is
reasonably backwards-compatible with the original LVM toolset. You need three
things to use LVM2:
• Device-mapper in your kernel
• The userspace device-mapper support library (libdevmapper)
• The userspace LVM2 tools.
Refer to http://sources.redhat.com/dm/ for information about the device-
mapper kernel and userspace components.

Hardware/software requirements

Component Requirement

CDP Appliance • Certified enterprise class server provided by FalconStor, customer,
or vendor
• Integrated RAID
• Redundant power supply
• Dual Ethernet
• PCI-X, PCI-E slots
• QLogic FC HBAs with a minimum of two available ports (see
FalconStor Certification Matrix for supported HBAs)
Enterprise class storage • Supplied by customer or vendor
• Directly attached to CDP appliance or by FC switch
FalconStor Management Console A virtual or physical machine running any version of
Microsoft Windows that supports the Java 2 Runtime Environment (JRE).
Logical Volume Manager (LVM) The LVM for the Red Hat Linux operating system.
CDP Installation media Required if CDP software needs to be installed on the appliance
CDP License key Keycodes to enable CDP components

Supported kernels

The patches subdirectory (from the above link) also includes up-to-date device-
mapper kernel patches for 2.4.26-rc1 and old patches for 2.4.20, 2.4.21 and 2.4.22
onwards. The 2.6 kernels already contain the device-mapper core, but you need to
apply development patches if you want additional functionality.


Initialize a disk

If your disks have not already been created and formatted, you will need to initialize
them. Before you can start using LVM2, you will need to allocate the primary storage
device from your internal hard drive or storage provisioned from another storage
system (not from an IPStor appliance).

1. Format your primary disk using the fdisk command:
fdisk <drive path>
Example: fdisk /dev/sdb

2. Follow the utility to create a new partition of the desired size:


• 'n' for new partition
• 'p' for primary partition
• '1' or any number for the partition number
• '+1024M' or any size with an M or G unit
• 'w' to commit the changes and exit.

3. After initializing the primary disk, you have to initialize the mirror disk that is
being provisioned by IPStor.
Use iSCSI or FC to assign the resource to the client you will be protecting.
Follow the fdisk instructions provided above to format the mirror disk. Make a
note of the path. As you will see later, the order of usage for these drive paths is
very important.
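When initializing many clients, the interactive keystrokes above can be scripted. A
minimal sketch, assuming an empty disk and the same 1024 MB partition as in the
example (verify on a test system first, since fdisk prompts can differ between
versions):
#printf 'n\np\n1\n\n+1024M\nw\n' | fdisk /dev/sdb
The piped answers correspond one-to-one to the keystrokes listed in step 2.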

Set up the mirror

The following sequence of actions needs to be taken in order to establish a mirror
relationship between the primary disk and the mirror disk allocated by the storage
server:

1. Create a new physical volume for the primary disk.

2. Create a new physical volume for the mirror disk from IPStor.

3. Create a logical Volume Group with both the primary physical volume and mirror
physical volume in it.

4. Create the relationship between the primary disk and the mirror disk.
If you need to recover from the mirror disk, you will need to remove the mirror
relationship and then recreate the relationship with the resources reversed.
The table below describes the LVM2 tools:
Logical Volume Manager 2 Tool Descriptions

LVM2 Tool Description

pvcreate Create a physical volume from a hard drive.


Logical Volume Manager 2 Tool Descriptions

LVM2 Tool Description

vgcreate Create a logical volume group from one or more physical
volumes
vgextend Add a physical volume to an existing volume group
vgreduce Remove a physical volume from a volume group
lvcreate Create a logical volume from available space in the
volume group
lvextend Extend the size of a logical volume using free physical
extents in the volume group
lvremove Remove a logical volume from a volume group (after
unmounting it)
vgdisplay Show properties of existing volume groups
lvdisplay Show properties of existing logical volumes
pvscan Show properties of existing physical volumes

Create physical volumes for the primary and mirror disks

Use the pvcreate command to create a new physical volume from the partition
created from the disk:
pvcreate <disk>
Replace <disk> with the device name of the hard drive partition.
You will need to do this for both the primary and the mirror disk.
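For example, using the partitions created earlier in this section (/dev/sdb1 as the
primary and /dev/sdc1 as the IPStor mirror; substitute your own device names):
pvcreate /dev/sdb1
pvcreate /dev/sdc1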

Create a logical Volume Group
A Volume Group can be created from one or more physical volumes. To scan the
system for all physical volumes, use the pvscan command as root.
To create a logical Volume Group, execute the vgcreate command as root.
vgcreate <vgname> <pvlist>
Replace <vgname> with a unique name for the Volume Group.
Use <pvlist> for the paths of the physical devices that you want in the Volume
Group. For example:
vgcreate mirGroup /dev/sda1 /dev/sdb1
This example creates the Volume Group mirGroup with primary disk partition
/dev/sda1 and mirror disk partition /dev/sdb1.


Create the mirror
Use the lvcreate command to create a mirrored volume.
Use the -m1 argument to specify one mirrored volume. If you want multiple mirrors,
you can modify 1 to be any number.
Use the -L argument to allocate free space for the mirror.
Use the -n argument to set the name for the mirrored logical volume followed by the
group name and the resources allocated for the mirror. It is important to remember
that the first physical volume is the primary disk and the second physical volume is
the mirror disk from IPStor.
For example:
lvcreate -L 500M -m1 --corelog -n mirrorlv mirGroup /dev/sdb1 /dev/sdc1
In this example, we created a mirrored logical volume with a single mirror. The
volume is 500 MB in size. Its name is mirrorlv and it is carved out of the Volume
Group mirGroup. The original data is on the primary device /dev/sdb1 and the mirror
copy is on /dev/sdc1. The --corelog argument stores the mirror log in memory.
To confirm that the mirror relationship was established correctly, use the following
command:
lvs -a -o+devices

Recovery

This section explains how to roll back or recover data to a previous point in time
using CDP with Snapshot and journal. If you want to revert the primary disk to a
previous point in time you will have to do the following:

1. Create a TimeView to identify the appropriate time.


A TimeView allows you to verify the data before converting the primary disk.
Information about how to mount a TimeView can be found in the CDP Reference
Guide.

2. Convert the mirror group to linear.

3. Remove the mirror group.

4. Roll back the disk from the FalconStor Management Console using the CDP
journal or snapshots. Information about how to roll back from a snapshot or from
the CDP journal can be found in the CDP Reference Guide.

5. Create a new group.

6. Switch resources to create a new mirror relationship.

7. Scan the mirror group and volume.


Convert the mirror group to linear
Use the following command to convert the existing mirrored volume to a linear
volume:
lvconvert -m0 <mirror logical volume>
For example:
lvconvert -m0 /dev/mirGroup/mirrorlv
This example breaks the mirror, converting /dev/mirGroup/mirrorlv from a
mirrored logical volume to a linear logical volume.

Remove the mirror group
1. Run the command to remove the logical volume and then remove the Volume
Group.
The command to remove the logical volume is:
lvremove <volume group>/<logical volume>
For example:
lvremove mirGroup/mirrorlv
This example removes the mirror logical volume mirrorlv from the Volume
Group mirGroup.

2. Remove the group.


vgremove <group name>
For example:
vgremove mirGroup
In this example we are removing the Volume Group mirGroup.
After breaking the mirror relationship you can perform the CDP roll back using
snapshots or the CDP journal.

Switch resources to create a new mirror relationship

Repeat the 'Create a logical Volume Group' and 'Create the mirror' procedures
described above, but be sure to switch the resources.
You will need to confirm that what was originally your mirror resource (the resource
from the FalconStor Management Console) is now your primary disk, and that your
original primary disk is now your mirror resource.
Use the lvs command to confirm that 100% of the copy is complete and then return
to the original state of mirroring.
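The lvs check above can be wrapped in a small script. The following is a minimal bash sketch, assuming the lvs copy_percent output format; the volume name vg/mirrorlv and the 30-second poll interval are only examples, and sync_complete/wait_for_sync are hypothetical helper names:

```shell
#!/bin/bash
# Sketch only: poll 'lvs' until the mirror copy reaches 100%.
# Volume name and poll interval below are illustrative.

# Return success when an lvs copy_percent value reads 100.00
sync_complete() {
  [ "$(printf '%s' "$1" | tr -d ' ')" = "100.00" ]
}

wait_for_sync() {
  local lv="$1"            # e.g. vg/mirrorlv
  local pct
  while :; do
    pct=$(lvs --noheadings -o copy_percent "$lv")
    if sync_complete "$pct"; then
      echo "mirror $lv is fully synchronized"
      return 0
    fi
    sleep 30               # poll every 30 seconds
  done
}
```

Running something like `wait_for_sync vg/mirrorlv` after re-creating the mirror returns once the copy is complete.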

Data protection in an HP-UX environment


This section provides detailed instructions on protecting your data in an HP-UX
environment, including downloading the protection and recovery scripts.
For preparation steps, refer to the Prepare the CDP and HP-UX environments
section of the Getting Started chapter.

Download and configure the protection and recovery scripts for HP-UX LVM

FalconStor provides HP-UX scripts to simplify and automate the protection and
recovery process of logical volumes on HP-UX platforms.

ssh_setup script

The ssh_setup script is used to create an SSH public/private key pair between the
HP-UX host and the CDP server.

1. Download the package auto_lvm_hpux_v1.6.tar.Z to the directory
/usr/local/ipstorclient/bin.

2. Extract the package auto_lvm_hpux_v1.6.tar.Z:

cd /usr/local/ipstorclient/bin
zcat auto_lvm_hpux_v1.6.tar.Z | tar xvf -

3. Configure SSH public/private key authentication between the HP-UX host and
the CDP server:

ssh_setup <CDP server IP address>

• Enter the file in which to save the key. Accept the default by pressing Enter.
• Enter a passphrase, or press Enter for an empty passphrase.
• Enter the same passphrase again.
• When asked "Are you sure you want to continue?", type yes and press Enter.
• Enter the password for the CDP server.
• Enter the password for the CDP server again to append the authorized key.

An example of the ssh_setup output is shown below:


# ssh_setup 10.6.5.102
Generating public/private key pair...
Enter file in which to save the key (//.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Copying keys file to Server...
The authenticity of host '10.6.5.102 (10.6.5.102)' can't be
established.
RSA key fingerprint is
f8:c8:42:6f:36:24:15:87:e5:45:51:e0:5e:bd:ff:cc.
Are you sure you want to continue connecting (yes/no)? yes
root@10.6.5.102's password:
Appending authorized_keys on Server...
root@10.6.5.102's password:
Checking if access is available to Server...
Access to Server is available

Protection script

The protect_vg script is used to establish the LVM mirror relationship with the
FalconStor CDP appliance. The protect_vg script will:
• Create an HP-UX SAN client to represent this HP-UX host machine on the
CDP appliance, if one does not already exist.
• Create mirror LUNs on the CDP appliance, using disks in the "Protection"
storage group.
• Assign those mirror LUNs to the HP-UX SAN client.
• Establish a logical volume mirror between the local primary logical volume
and the CDP-provisioned disk acting as the mirror.
As a result, each and every logical volume found within the specified Volume
Group will be mirrored to a CDP-provisioned LUN. If necessary, you will need to use the
FalconStor Management Console to associate sets of the mirror LUNs to form a
snapshot group. The console is also used to enable TimeMark or CDP journaling for
protection against corruption and remote Replication for disaster recovery.

Recovery scripts

The recover_vg, mount_vg, and umount_vg scripts are used to recover data
given different scenarios of primary disk failure (physical failure or logical
corruption).
Follow the instructions below to download and configure the scripts:

1. Download the package auto_lvm_hpux_v1.6.tar.Z to the directory
/usr/local/ipstorclient/bin.

2. Extract the package auto_lvm_hpux_v1.6.tar.Z:

cd /usr/local/ipstorclient/bin
zcat auto_lvm_hpux_v1.6.tar.Z | tar xvf -

Download and configure the protection and recovery scripts for HP-UX VxVM

FalconStor provides HP-UX scripts to simplify and automate the protection and
recovery process of Veritas Volume Manager on HP-UX platforms.

ssh_setup script

The ssh_setup script is used to create an SSH public/private key pair between the
HP-UX host and the storage server.

1. Download the package auto_vxvm_hpux_v1.0.tar.Z to the directory
/usr/local/ipstorclient/bin.

2. Extract the package auto_vxvm_hpux_v1.0.tar.Z:

cd /usr/local/ipstorclient/bin
zcat auto_vxvm_hpux_v1.0.tar.Z | tar xvf -

3. Configure SSH public/private key authentication between the HP-UX host and
the storage server:

ssh_setup <storage server IP address>

• Enter the file in which to save the key. Accept the default by pressing Enter.
• Enter a passphrase, or press Enter for an empty passphrase.
• Enter the same passphrase again.
• When asked "Are you sure you want to continue?", type yes and press Enter.
• Enter the password for the storage server.
• Enter the password for the storage server again to append the authorized key.

An example of the ssh_setup output is shown below:


# ssh_setup 10.6.5.102
Generating public/private key pair...
Enter file in which to save the key (//.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Copying keys file to server...
The authenticity of host '10.6.5.102 (10.6.5.102)' can't be
established.
RSA key fingerprint is
f8:c8:42:6f:36:24:15:87:e5:45:51:e0:5e:bd:ff:cc.
Are you sure you want to continue connecting (yes/no)? yes
root@10.6.5.102's password:
Appending authorized_keys on storage server...
root@10.6.5.102's password:
Checking if access is available to Server...
Access to storage server is available.

Protection script

The protect_dg script is used to establish the VxVM mirror relationship with the
FalconStor CDP appliance. The protect_dg script will:
• Create an HP-UX SAN client to represent this HP-UX host machine on the
CDP appliance, if one does not already exist.
• Create mirror LUNs on the CDP appliance, using disks in the "Protection"
storage group.
• Assign those mirror LUNs to the HP-UX SAN client.
• Establish VxVM mirrors between the local primary Veritas Volume and the
CDP provisioned disk acting as the mirror.
As a result, each and every volume found within the specified Disk Group will be
mirrored to a CDP-provisioned LUN. If necessary, you will need to use the
FalconStor Management Console to associate sets of the mirror LUNs to form a
snapshot group. The console is also used to enable TimeMark or CDP journaling for
protection against corruption and remote Replication for disaster recovery.

Recovery scripts

The recover_dg, mount_dg, and umount_dg scripts are used to recover data in
the event of primary disk failure (physical failure or logical corruption). Follow the
instructions below to download and configure the scripts:

1. Download the package auto_vxvm_scripts_1.0.tar.Z to the directory
/usr/local/ipstorclient/bin.

2. Extract the package auto_vxvm_scripts_1.0.tar.Z:

cd /usr/local/ipstorclient/bin
zcat auto_vxvm_scripts_1.0.tar.Z | tar xvf -

Protect your servers in an HP-UX environment

Protecting your servers in an HP-UX environment requires that you establish a
mirror relationship between the HP-UX (LVM and VxVM) Volume Group's Logical
Volumes and the mirror LUNs from the CDP appliance.

Establish the HP-UX Logical Volume Manager (LVM) mirror

The following procedure describes how to use the protect_vg script to establish
mirror relationships between the HP-UX Volume Group's Logical Volumes and the
mirror LUNs from the CDP appliance.

As an example, let's assume that we have a Volume Group named vg01 that we
want to protect.

1. Display Volume Group information for vg01 to confirm that no mirrors exist.
• Display Volume Group information and list the available logical volumes by
running vgdisplay -v vg01.
• Display logical volume information by running lvdisplay -v
/dev/vg01/<logical volume name> and confirm that the "Mirror copies"
section is "0".

2. Protect a Volume Group by running the protect_vg script.


An example of the protect_vg output is as follows:

# protect_vg vg01
Cleaning up Offline Disk...
Creating FC Client called etlhp2...
Creating Virtual Disk etlhp2-vg01-Protection size of 1500MB...
Assigning Virtual Disk etlhp2-vg01-Protection to etlhp2...
Scanning for new disk which could take time depending on your system...
Extending Volume Group vg01 with /dev/dsk/c7t0d0
Creating Mirror for /dev/vg01/lvol1
Creating Mirror for /dev/vg01/lvol2
Creating Mirror for /dev/vg01/lvol3
Synchronizing Mirrors for vg01...

3. Confirm that mirrors are created and in sync for all logical volumes on vg01.
lvdisplay -v /dev/vg01/<logical volume name> | more
• The section "Mirror copies" should now be "1", which means there is one
mirror for that logical volume.
• The section "LV Status" should also be "available/syncd", which means the
mirrors are synchronized; otherwise it will display "available/stale".
You are now ready to enable TimeMark, CDP Journaling, and/or Replication for
virtual disk etlhp2-vg01-Protection on Volume Group vg01 mirror.
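If you verify many logical volumes, the step-3 checks can be scripted. This is a sketch only; it assumes the lvdisplay field labels shown above ("Mirror copies" and "LV Status"), and check_hpux_mirror is a hypothetical helper name:

```shell
# Sketch: read 'lvdisplay -v <lv>' output on stdin and report whether the
# logical volume has one mirror copy and is fully synchronized.
check_hpux_mirror() {
  local out copies status
  out=$(cat)
  copies=$(printf '%s\n' "$out" | awk '/Mirror copies/ {print $NF}')
  status=$(printf '%s\n' "$out" | awk '/LV Status/ {print $NF}')
  if [ "$copies" = "1" ] && [ "$status" = "available/syncd" ]; then
    echo "OK"
  else
    echo "NOT READY (copies=$copies status=$status)"
  fi
}
# Typical use:
#   lvdisplay -v /dev/vg01/lvol1 | check_hpux_mirror
```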

Establish the HP-UX VxVM mirror

Let's assume that we have a Disk Group named dg01 that we want to protect.

1. Display Disk Group information for dg01 to confirm that no CDP VxVM mirrors
already exist.
Execute the command vxprint -g dg01 -p | grep ipstor_pl to make sure
that no IPStor plexes exist on disk group dg01.

2. Protect the Disk Group by running the protect_dg script: protect_dg dg01


An example of the protect_dg output is as follows:

# protect_dg 10.6.7.240
Creating Virtual Disk etlhp4-dg01-Protection size of 5120MB...
Scanning VxVM Disks...
Assigning Virtual Disk etlhp4-dg01-Protection vid 448 to etlhp4...
Scanning system for new disk...
Scanning VxVM Disks...
Initializing c7t0d1 for use with VxVM...
Adding c7t0d1 to Disk Group dg01...
Creating Subdisk ipstor_sd1 from ipstor_dg01...
Creating Plex ipstor_pl1 from ipstor_sd1...
Enabling IPStor Microscan Write for etlhp4-dg01-Protection vid 448...
Attaching Plex ipstor_pl1 to Volume vol01...
Disabling IPStor Microscan Write for etlhp4-dg01-Protection vid 448...
Creating Subdisk ipstor_sd2 from ipstor_dg01...
Creating Plex ipstor_pl2 from ipstor_sd2...
Enabling IPStor Microscan Write for etlhp4-dg01-Protection vid 448...
Attaching Plex ipstor_pl2 to Volume vol02...
Disabling IPStor Microscan Write for etlhp4-dg01-Protection vid 448...
Creating Subdisk ipstor_sd3 from ipstor_dg01...
Creating Plex ipstor_pl3 from ipstor_sd3...
Enabling IPStor Microscan Write for etlhp4-dg01-Protection vid 448...
Attaching Plex ipstor_pl3 to Volume vol03...
Disabling IPStor Microscan Write for etlhp4-dg01-Protection vid 448...
Creating Snapshot Resource Area for etlhp4-dg01-Protection vid 448...
Enabling TimeMark for etlhp4-dg01-Protection vid 448...

3. Confirm that mirrors are created and in sync for all volumes in disk group dg01.
vxprint -g dg01 -p | grep ipstor_pl
Each IPStor plex should be "ENABLED" and "ACTIVE".
You are now ready to create TimeMark, CDP Journaling, and/or Replication for
virtual disk etlhp4-dg01-Protection on Disk Group dg01 mirror.
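The plex check in step 3 can be automated in the same spirit. This is a sketch that assumes only the vxprint listing format shown above (ipstor_pl plex names with ENABLED/ACTIVE state columns); plexes_active is a hypothetical helper name:

```shell
# Sketch: exit 0 if every ipstor_pl* plex in the vxprint listing (read
# from stdin) is ENABLED and ACTIVE; exit 1 otherwise (including when
# no IPStor plexes are found at all).
plexes_active() {
  awk '/ipstor_pl/ { n++; if ($0 !~ /ENABLED/ || $0 !~ /ACTIVE/) bad=1 }
       END { exit (bad || n == 0) }'
}
# Typical use:
#   vxprint -g dg01 -p | plexes_active && echo "all IPStor plexes active"
```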

Refer to your CDP Reference Guide for information about configuring TimeMarks,
CDP Journaling, and Replication using the FalconStor Management Console. The
administration guide also provides details about how to create a snapshot group and
how to enable the above protection services at the group level. Generally, for a
database or E-mail system, all of the data files and transaction logs for the same
application should be grouped into a snapshot group in order to achieve transaction-
level integrity for the application.

Data Protection in an AIX environment


The first step in protecting data in an AIX environment is preparing the CDP
environment. Refer to the Prepare the AIX host machine section in the previous
chapter.
Protecting your servers in an AIX Unix environment requires that you establish a
mirror relationship between the AIX Volume Group's Logical Volumes and the mirror
LUNs from the CDP appliance. This section describes the steps required to protect
your servers in an AIX environment.

Download and configure the protection and recovery scripts for AIX LVM

FalconStor provides AIX scripts to simplify and automate the protection and
recovery process of logical volumes on AIX platforms.
ssh_setup script

The ssh_setup script is used to create an SSH public/private key pair between the
AIX host and the CDP server.

1. Download the package auto_lvm_aix_v1.0.tar.Z to the directory
/usr/local/ipstorclient/bin.

2. Extract the package auto_lvm_aix_v1.0.tar.Z:

cd /usr/local/ipstorclient/bin
zcat auto_lvm_aix_v1.0.tar.Z | tar xvf -

3. Configure SSH public/private key authentication between the AIX host and the
CDP server:

ssh_setup <CDP server IP address>

• Enter the file in which to save the key. Accept the default by pressing Enter.
• Enter a passphrase, or press Enter to leave the passphrase field empty.
• Enter the same passphrase again.
• When asked "Are you sure you want to continue?", type yes and press Enter.
• Enter the password for the CDP server.
• Enter the password for the CDP server again to append the authorized key.

An example of the ssh_setup output is shown below:


# ssh_setup 10.6.5.102
Generating public/private key pair...
Enter file in which to save the key (//.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Copying keys file to Server...
The authenticity of host '10.6.5.102 (10.6.5.102)' can't be established.
RSA key fingerprint is f8:c8:42:6f:36:24:15:87:e5:45:51:e0:5e:bd:ff:cc.
Are you sure you want to continue connecting (yes/no)? yes
root@10.6.5.102's password:
Appending authorized_keys on Server...
root@10.6.5.102's password:
Checking if access is available to Server...
Access to Server is available

ssh_setup_ha script

The ssh_setup_ha script is used to create an SSH public/private key pair for each
HACMP cluster node. This process is only required if the volume groups to be
protected are configured as part of an HACMP cluster.
Make sure to perform this process from each node to each other node (node1 ->
node2 and node2 -> node1). As of version 1.4, two-node clusters are supported for
protection and recovery.

1. Download the package auto_lvm_aix_v1.4.tar.Z to the directory
/usr/local/ipstorclient/bin.

2. Extract the package auto_lvm_aix_v1.4.tar.Z:

cd /usr/local/ipstorclient/bin
zcat auto_lvm_aix_v1.4.tar.Z | tar xvf -

3. Configure SSH public/private key authentication between each of the HACMP
cluster nodes:

ssh_setup_ha <Other Node>

• Enter the file in which to save the key. Accept the default by pressing Enter.
• Enter a passphrase, or press Enter to keep the passphrase field empty.
• Enter the same passphrase again.
• When asked "Are you sure you want to continue?", type yes and press Enter.
• Enter the password for the other node.
• Enter the password for the other node again to append the authorized key.

An example of the ssh_setup output is shown below:


# ssh_setup vaix2
Generating public/private key pair...
Enter file in which to save the key (//.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Checking if directory vaix2/:.ssh exists...
The authenticity of host 'vaix2 (192.168.15.140)' can't be
established.
RSA key fingerprint is
c1:97:f5:aa:50:03:19:6d:14:a7:b8:b4:56:76:f7:10.
Are you sure you want to continue connecting (yes/no)?yes
root@vaix2's password:
Copying keys file to other node...
root@vaix2's password:
Appending authorized_keys on other node...
root@vaix2's password:

Protection script

The protect_vg script is used to establish the mirror relationship with the FalconStor
CDP appliance; use protect_vg_ha if the system is configured with an HACMP
cluster. The protect_vg and protect_vg_ha scripts will:
• Create an AIX SAN client to represent this AIX host machine on the CDP
appliance, if one does not already exist. If the system is configured with an
HACMP cluster, both the local and remote HACMP nodes will be created to
represent them on the CDP appliance.
• Create mirror LUNs on the CDP appliance, using disks in the "Protection"
storage group.
• Assign the mirror LUNs to the AIX SAN client and to both the local and
remote HACMP nodes (if configured).
• Establish a logical volume mirror between the local primary logical volume
and the CDP provisioned disk acting as the mirror.
As a result, each and every Logical Volume found within the specified Volume Group
will be mirrored to a CDP-provisioned LUN. If necessary, you can use the
FalconStor Management Console to associate sets of the mirror LUNs to form a
snapshot group. The console is also used to enable TimeMark or CDP journaling for
protection against corruption and remote Replication for disaster recovery.

Recovery scripts

The recover_vg, mount_vg, and umount_vg scripts are used to recover data given
different scenarios of primary disk failure (physical failure or logical corruption). Use
the recover_vg_ha, mount_vg_ha, and umount_vg_ha scripts if an HACMP cluster
is configured on the system.

Follow the instructions below to download and configure the scripts:

1. Download the package auto_lvm_aix_v1.4.tar.Z to the directory
/usr/local/ipstorclient/bin.

2. Extract the package auto_lvm_aix_v1.4.tar.Z:

cd /usr/local/ipstorclient/bin
zcat auto_lvm_aix_v1.4.tar.Z | tar xvf -

Establish the AIX Logical Volume Manager (LVM) mirror on a volume group

As an example, let's assume that we have a Volume Group named vg01 (with a total
size of 10240 MB and a used logical volume size of 500 MB) that we want to protect.

1. Display the Volume Group information for vg01 to confirm that the number of
mirrors has not reached the maximum of three.
• Display Volume Group information and list the available logical volumes by
running lsvg -l vg01.
• Display logical volume information by running lslv <logical volume
name> and confirm that the "COPIES" section has not reached the
maximum of three.

2. Protect the Volume Group by running the protect_vg script.

protect_vg <CDP IP Address> vg01

An example of the protect_vg output is shown below. The SSH public/private key
creation will only happen if it has not already been established.

# protect_vg 10.6.7.236 vg01
The authenticity of host '10.6.7.236 (10.6.7.236)' can't be established.
RSA key fingerprint is 6b:a7:68:ef:01:9e:fd:31:ee:bc:32:3b:78:2c:27:f7.
Are you sure you want to continue connecting (yes/no)? yes
Generating public/private key pair...
Enter file in which to save the key (//.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Checking if Directory 10.6.7.236:/root/.ssh exists...
root@10.6.7.236's password: <Enter Password>
Copying keys file to Server...
root@10.6.7.236's password: <Enter Password>
Appending authorized_keys on Server...
root@10.6.7.236's password: <Enter Password>
Creating FC Client called etlaix3...
Creating Virtual Disk etlaix3-vg01-Protection size of 10240MB...
Assigning Virtual Disk etlaix3-vg01-Protection vid 457 to etlaix3...
Adding Disk hdisk2 to Volume Group vg01...
Creating Mirror for Logical Volume fslv05 on Disk hdisk2...
Creating Mirror for Logical Volume fslv06 on Disk hdisk2...
Creating Mirror for Logical Volume lv00 on Disk hdisk2...
Creating Mirror for Logical Volume loglv02 on Disk hdisk2...
Creating Mirror for Logical Volume loglv04 on Disk hdisk2...
Enabling FalconStor Microscan Write for etlaix3-vg01-Protection vid 457...
Synchronizing Volume Group vg01...
Disabling FalconStor Microscan Write for etlaix3-vg01-Protection vid 457...
Creating Snapshot Resource Area for etlaix3-vg01-Protection vid 457...
Enabling TimeMark for etlaix3-vg01-Protection vid 457...

A CDP Virtual Disk is created with a total size of 10240 MB. Since thin
provisioning is used, only 500 MB of space will be used.

3. Confirm that mirrors are created and in sync for all logical volumes on vg01.
lslv <logical volume name>
• The section "COPIES" should be incremented by one, which means an
additional physical volume has been added to the logical volume.

• The section "LV Status" should be "closed/syncd" or “opened/syncd”, which


means mirrors are synchronized; otherwise it will display "closed/stale" or
“opened/stale”.
You are now ready to enable TimeMark, CDP Journaling, and/or Replication for
virtual disk etlaix3-vg01-Protection on Volume Group vg01 mirror.
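The lslv checks above can be scripted as well. The following sketch assumes only the AIX lslv COPIES and LV STATE fields described in this procedure, and treats two or more copies in a syncd state as healthy; lslv_synced is a hypothetical helper name:

```shell
# Sketch: read 'lslv <lv>' output on stdin; exit 0 when the volume has
# at least two copies and its LV STATE ends in "syncd".
lslv_synced() {
  awk '
    /COPIES:/   { copies = $2 }
    /LV STATE:/ { state  = $3 }
    END { exit !(copies >= 2 && state ~ /syncd/) }'
}
# Typical use:
#   lslv fslv05 | lslv_synced && echo "mirror in sync"
```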

Establish the AIX LVM mirror on a Logical Volume

Let's assume we have a Logical Volume named fslv05 with a size of 300 MB, in a
Volume Group with a total size of 10240 MB, that we would like to protect.

1. Display Logical Volume information for fslv05 to confirm that the number of
mirrors has not reached the maximum of three.
Display logical volume information by running lslv <logical volume
name> and confirm that the "COPIES" section has not reached the maximum
of three.

2. Protect the Logical Volume by running the protect_lv script.

protect_lv <CDP IP Address> fslv05

An example of the protect_lv output is shown below. The SSH public/private key
creation will only happen if it has not already been established.

# ./protect_lv 10.6.7.236 fslv05
The authenticity of host '10.6.7.236 (10.6.7.236)' can't be established.
RSA key fingerprint is 6b:a7:68:ef:01:9e:fd:31:ee:bc:32:3b:78:2c:27:f7.
Are you sure you want to continue connecting (yes/no)? yes
Generating public/private key pair...
Enter file in which to save the key (//.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Checking if Directory 10.6.7.236:/root/.ssh exists...
root@10.6.7.236's password: <Enter Password>
Copying keys file to Server...
root@10.6.7.236's password: <Enter Password>
Appending authorized_keys on Server...
root@10.6.7.236's password: <Enter Password>
Creating FC Client called etlaix3...
Creating Virtual Disk etlaix3-vg01-Protection size of 9992MB...
Assigning Virtual Disk etlaix3-vg01-Protection vid 457 to etlaix3...
Adding Disk hdisk2 to Volume Group vg01...
Creating Mirror for Logical Volume fslv05 on Disk hdisk2...
Creating Mirror for Logical Volume loglv02 on Disk hdisk2...
Enabling FalconStor Microscan Write for etlaix3-vg01-Protection vid 457...
Synchronizing Volume Group vg01...
Disabling FalconStor Microscan Write for etlaix3-vg01-Protection vid 457...
Creating Snapshot Resource Area for etlaix3-vg01-Protection vid 457...
Enabling TimeMark for etlaix3-vg01-Protection vid 457...

This creates a CDP Virtual Disk with a total size of 9992 MB. However, since it uses
thin provisioning, only 300 MB of space will be used.

3. Confirm that mirrors are created and in sync for all logical volumes on vg01.
lslv <logical volume name>
• The section "COPIES" should be incremented by one, which means an
additional physical volume has been added to the logical volume.
• The section "LV STATE" should also be "closed/syncd" or "opened/syncd",
which means the mirrors are synchronized; otherwise it will display
"closed/stale" or "opened/stale".

You are now ready to enable CDP Journaling, and/or Replication for virtual disk
etlaix3-vg01-Protection on Volume Group vg01 mirror.

Establish the AIX LVM mirror on an HACMP Volume Group

Let's assume we have a shared HACMP Volume Group named sharevg_01 that we
would like to protect and that is active on the remote HACMP node vaix3.
Note: Make sure that each HACMP node name resolves properly to an IP address;
entries can be added through /etc/hosts.
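As an illustration, name resolution for the nodes could be provided with /etc/hosts entries like the following (vaix2's address appears in the ssh_setup_ha example earlier; the vaix3 address is hypothetical):

```
192.168.15.140  vaix2
192.168.15.139  vaix3
```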

1. Display Volume Group information for sharevg_01 to confirm that no mirrors
already exist.
• Display Volume Group information and list the available logical volumes by
running lsvg -l sharevg_01.
• Display logical volume information by running lslv <logical volume
name> and confirm that the "COPIES" section is "1".

2. Protect the Volume Group by running the protect_vg_ha script.

protect_vg_ha <CDP IP Address> <Other Node> <Volume Group>

An example of the protect_vg_ha output is shown below:

# protect_vg_ha 192.168.15.96 vaix3 sharevg_01
Creating FC Client called vaix2...
Creating FC Client called vaix3...
Creating Virtual Disk Terayon-sharevg_01-Protection size of 2044MB...
Assigning Virtual Disk Terayon-sharevg_01-Protection vid 261 to vaix2...
Rescanning DynaPath Devices...
Assigning Virtual Disk Terayon-sharevg_01-Protection vid 261 to vaix3...
Rescanning DynaPath Devices on Node vaix3...
Adding Disk hdisk5 to Volume Group sharevg_01 on Node vaix3...
Creating Mirror for Volume Group sharevg_01 on Disk hdisk5 on Node vaix3...
Enabling FalconStor Microscan Write for Terayon-sharevg_01-Protection vid 261...
Synchronizing Volume Group sharevg_01 on Node vaix3...
Disabling FalconStor Microscan Write for Terayon-sharevg_01-Protection vid 261...
Creating Snapshot Resource Area for Terayon-sharevg_01-Protection vid 261...
Enabling TimeMark for Terayon-sharevg_01-Protection vid 261...

3. Confirm that mirrors are created and in sync for all logical volumes of
sharevg_01 on node vaix3.
lslv <logical volume name>

• The section "COPIES" should now be "2", which means there are 2 physical
volume that makes up that logical volume.
• The section "LV STATE" should also be "closed/syncd" or "opened/syncd",
which means mirrors are synchronized; otherwise it will display "closed/
stale" or "opened/stale".
You are now ready to enable TimeMark, CDP Journaling, and/or Replication for
virtual disk Terayon-sharevg_01-Protection on Volume Group vg01 mirror.
Refer to the CDP Reference Guide for information about configuring TimeMarks,
CDP Journaling, and Replication using the FalconStor Management Console. The
Administration guide also provides details on how to create a snapshot group and
how to enable the above protection services at the group level. Generally, for a
database or E-mail system, all of the data files and transaction logs for the same
application should be grouped into a snapshot group in order to achieve transaction-
level integrity for the application.



FalconStor Management
Console

The FalconStor Management Console is the administration tool for the storage
network. It is a Java application that can be used on a variety of platforms
and allows administrators to create, configure, manage, and monitor the storage
resources and services on the storage server network, as well as run/view reports,
enter licensing information, and add/delete administrators.
The FalconStor Management Console software can be installed on each machine
connected to a storage server. The console is also available for download from
your storage server appliance.

Start the console


On Windows, select Start --> Programs --> FalconStor IPStor --> IPStor Console.
On Linux and other UNIX environments, execute the following:

cd /usr/local/ipstorconsole
./ipstorconsole

Note:
• If your screen resolution is 640 x 480, the splash screen may be cut off
while the console loads.
• The console might not launch on certain systems with display settings
configured to use 16 colors.
• The console needs to be run from a directory with "write" access.
Otherwise, the host name information and message log file retrieved
from the storage server cannot be saved to the local directory. As a
result, the console will display event messages as numbers and
console options cannot be saved.
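The write-access requirement can be verified before launching the console. This is a minimal sketch; check_console_dir is a hypothetical helper name, and the path in the example call is simply the install directory mentioned in this guide:

```shell
# Sketch: warn if the directory the console will run from is not writable.
check_console_dir() {
  local dir="${1:-.}"
  if [ -w "$dir" ]; then
    echo "writable: $dir"
  else
    echo "no write access: $dir (host name info and message logs cannot be saved)"
  fi
}
# Example: check the console install directory before launching.
check_console_dir /usr/local/ipstorconsole
```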

Connect to your storage server


1. Discover all storage servers on your storage subnet by selecting Tools -->
Discover.

2. Connect to a storage server.


You can connect to an existing storage server by right-clicking on it and
selecting Connect. Enter a valid user name and password (both are case
sensitive).
If you want to connect to a server that is not listed, right-click on the storage
servers object, select Add, and enter the name of the server, a valid user name,
and password.
When you connect to a server for the first time, a configuration wizard is
launched.
You may see a dialog box notifying you of new devices attached to the server.
Here, you will see all devices that are either unassigned or reserved. At this
point, you can either prepare the device (reserve it for a virtual, direct, or
service-enabled device) and/or create a logical resource.
Once you are connected to a server, the server icon changes to show that you
are connected.
If you connect to a server that is part of a failover configuration, you will
automatically be connected to both servers.

Note: The FalconStor Management Console remembers the servers to which the
console has successfully connected. If you close and restart the console, the
servers will still be displayed in the tree but you will not be connected to them.

Configure your server using the configuration wizard


A wizard has been provided to lead you through entering license keycodes and
setting up your network configuration. It launches the first time you connect to
your CDP server.

Note: You will only see step 4 if IPStor detected IPMI when the server booted up.

Step 1: Enter license keys

Click the Add button and enter your keycodes.


Be sure to enter keycodes for any options you have purchased. Each FalconStor
option requires that a keycode be entered before the option can be configured and
used. Refer to ‘Licensing’ for more information.
Configuration note: After completing the configuration wizard, if you need to
add license keycodes, you can right-click on your CDP appliance and select
License.

Step 2: Setup network

Enter information about your network configuration.


If you need to change storage server IP addresses, you must make these changes
using System Maintenance --> Network Configuration in the console. Using yast or
other third-party utilities will not update the information correctly.
Refer to ‘Network configuration’ for more information.
Configuration note: After completing the configuration wizard, if you need to
change these settings, you can right-click on your CDP appliance and select
System Maintenance --> Network Configuration.

Step 3: Set hostname

Enter a valid name for your storage appliance.


Valid characters are letters, numbers, underscore, or dash.

You will need to restart the server if you change the hostname.

Note: Do not change the hostname if you are using block devices. If you do, all
block devices claimed by CDP will be marked offline and seen as foreign devices.

FalconStor Management Console user interface


The FalconStor Management Console displays the configuration for the storage
servers on your storage network. The information is organized in a familiar Explorer-
like tree view.
The tree allows you to navigate the various storage servers and their configuration
objects. You can expand or collapse the display to show only the information that
you wish to view. To expand a collapsed item, click the + symbol next to the item;
to collapse an item, click the - symbol next to it. Double-clicking on the item will
also toggle the expanded/collapsed view. You need to connect to a server before
you can expand it.
When you highlight an object in the tree, the right-hand pane contains detailed
information about the object. You can select one of the tabs for more information.
The console log located at the bottom of the window displays information about the
local version of the console. The log features a drop-down box that allows you to
see activity from this console session.

Search for objects in the tree: The console has a search feature that helps you find
any physical device, virtual device, or client on any storage server. To search:
1. Highlight a storage server in the tree.

2. Select Edit menu --> Find.

3. Select the type of object to search for and the search criteria.
Once you select an object type, a list of existing objects appears. If you highlight
one, you will be taken directly to that object in the tree.
Alternatively, you can type the full name, ID, ACSL (adapter, channel, SCSI,
LUN), or GUID (Global Unique Identifier). Once you click the Search button, you
will be taken directly to that object in the tree.

Storage server status and configuration: The console displays the configuration and
status of the storage server. Configuration information includes the version of the
CDP software and base operating system, the type and number of processors, the
amount of physical and swappable memory, supported protocols, and network
adapter information.
The Event Log tab displays system events and errors.

Alerts: The console displays all critical alerts upon login to the server. Select the
Display only the new alerts next time option if you only want to see new critical
alerts the next time you log in. Selecting this option acknowledges the alerts.


Discover storage servers

CDP can automatically discover all storage servers on your storage subnet.
Machines running CDP will be recognized as storage servers. To discover the
servers:

1. Select Tools --> Discover

2. Enter your network criteria.

Protect your storage server’s configuration

FalconStor provides several convenient ways to protect your CDP configuration.


This is useful for disaster recovery purposes, such as if a storage server is out of
commission but you have the storage disks and want to use them to build a new
storage server. You should create a configuration repository even on a standalone
server.

Continuously save configuration: You can create a configuration repository that
maintains a continuously updated version of your storage system configuration. The
status of the configuration repository is displayed on the console under the General
tab. In the case of a failure
of the configuration repository, the console displays the time of the failure along with
the last successful update. This feature works seamlessly with the FalconStor
Failover option to provide business continuity in the event that a storage server fails.
For additional redundancy, the configuration repository can be mirrored to another
disk.
To create a configuration repository:

1. Highlight a storage server in the tree.

2. Right-click on the server and select Options --> Enable Configuration


Repository.

3. Select the physical device(s) for the Configuration Repository resource.

4. Confirm all information and click Finish to create the repository.


You will now see a Configuration Repository object in the tree under Logical
Resources.
To mirror the repository, right-click on it and select Mirror --> Add.
If you are using the FalconStor Failover option and a storage server fails, the
secondary server will automatically use the configuration repository to maintain
business continuity and service clients.
If you are not using the Failover option, you will need to contact technical support
when preparing your new storage server.


Auto save configuration: You can set your system to automatically replicate your
system configuration to an FTP server on a regular basis. Auto Save takes a
point-in-time snapshot of the storage server configuration prior to replication. To use
Auto Save:

1. Right-click on the server and select Properties.

2. Select the Auto Save Config tab and enter information for automatically saving
your storage server system configuration.
For detailed information about this dialog, refer to the ’Auto Save Config’ section.

Manually save configuration as needed: You can manually save your system
configuration any time you make a change in the console, including any time you
add/change/delete a client or resource, assign a client, or make any changes to
your failover/mirroring/replication configuration. If you add a server to a client from
the Client Monitor (or via the command line for Unix clients), you should also
re-save your configuration.
To do this:

1. Highlight a storage server in the tree.

2. Select File menu --> Save Configuration.

3. Select a filename and location.


The folder-creation dialog supports only single-byte characters. For folder names
with double-byte characters, create the folder from Explorer before saving the
configuration.

Restore configuration: You can restore a storage server configuration from a file
that was created using the Save Configuration option. This is for disaster recovery
purposes and should not be
used in day-to-day operation of the server. Changes made since the configuration
was last saved will not be included in this restored configuration.
Warning: Restoring a configuration will overwrite existing virtual device and client
configurations for that server. Storage server partition information will not be
restored. This feature should only be used if your configuration is lost or corrupted,
as lost virtual devices can result in lost data for the clients using those virtual
devices.
To restore the configuration:

1. Import the disk(s) that were recovered from the damaged storage server to your
new storage server.
Refer to ‘Import a disk’ for more information.

2. Highlight the new storage server in the tree.


You should not make any changes to the server before restoring the
configuration.

3. Select File menu --> Restore Configuration.

4. Confirm and locate the file that was saved.


The storage server will be restarted.

5. If any physical or virtual devices were changed after the configuration was
saved, you must rescan (right-click on Physical Resources and select Rescan)
to update the newly restored configuration.

Licensing

To license CDP and its options:

1. Obtain your CDP keycode(s) from FalconStor or its representatives.

2. In the console, right-click on the server and select License.

The License Summary window is informational only and displays a list of the
options supported for this server. You can enter keycodes for your purchased
options on the Keycodes Detail window.

3. Press the Add button on the Keycodes Detail window to enter each keycode.

Note: If multiple administrators are logged into a storage server at the same time,
license changes made from one console will take effect in another console only
when that console's administrator disconnects and then reconnects to the server.

4. If your licenses have not been registered yet, click the Register button on the
Keycodes Detail window.
You can register online if you have an Internet connection.


To register offline, you must save the registration information to a file on your
hard drive and then E-mail it to FalconStor’s registration server. When you
receive a reply, save the attachment to your hard drive and send it to the
registration server to complete the registration.

Set Server properties

To set properties for a specific server:

1. Right-click on the server and select Properties.

The tabs you see will depend upon your storage server configuration.

2. If you have multiple NICs (network interface cards) in your server, enter the IP
addresses using the Server IP Addresses tab.


If the first IP address stops responding, the CDP clients will attempt to
communicate with the server using the other IP addresses you have entered in
the order they are listed.

Note:
• In order for the clients to successfully use an alternate IP address, your
subnet must be set properly so that the subnet itself can redirect traffic to
the proper alternate adapter.
• You cannot assign two or more NICs within the same subnet.
• The client becomes aware of the multiple IP addresses when it initially
connects to the server. Therefore, if you add additional IP addresses in
the console while the client is running, you must rescan devices
(Windows clients) or restart the client (Linux/Unix clients) to make the
client aware of these IP addresses.
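The fallback behavior described above, trying each listed IP address in order until one responds, can be sketched as follows. This is a hedged illustration of the client-side logic, not actual CDP client code; the function name, port handling, and timeout are assumptions.

```python
import socket

def connect_with_fallback(addresses, port, timeout=5.0):
    """Try each server IP address in the order listed; return the first
    socket that connects. Illustrates the fallback behavior described
    above; not actual CDP client code."""
    for ip in addresses:
        try:
            return socket.create_connection((ip, port), timeout=timeout)
        except OSError:
            continue  # this address is not responding; try the next one
    raise ConnectionError("no server IP address responded")
```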

3. On the Activity Database Maintenance tab, indicate how often the SAN data
should be purged.

The Activity Log is a database that tracks all system activity, including all data
read, data written, number of read commands, write commands, number of
errors etc. This information is used to generate SAN information for the CDP
reports.


4. On the SNMP Maintenance tab, indicate which types of messages should be
sent as traps to your SNMP manager.

Five levels are available:


• None – (Default) No messages will be sent.
• Critical - Only critical errors that stop the system from operating properly will
be sent.
• Error – Errors (failure such as a resource is not available or an operation
has failed) and critical errors will be sent.
• Warning – Warnings (something occurred that may require maintenance or
corrective action), errors, and critical errors will be sent.
• Informational – Informational messages, errors, warnings, and critical error
messages will be sent.
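The five levels form a simple severity threshold: each level forwards messages of its own severity plus anything more severe, and None forwards nothing. A minimal sketch of that filtering rule (the names mirror the console options; the function itself is illustrative):

```python
# Severity order: "none" is the special off level, then most severe first.
LEVELS = ["none", "critical", "error", "warning", "informational"]

def should_send(configured_level: str, message_severity: str) -> bool:
    """Return True if a message of the given severity would be sent as
    a trap under the configured level. Illustrative only."""
    threshold = LEVELS.index(configured_level.lower())
    severity = LEVELS.index(message_severity.lower())
    return threshold != 0 and severity <= threshold
```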


5. On the iSCSI tab, set the iSCSI portal that your system should use as default
when creating an iSCSI target.

If you have multiple NICs, when you create an iSCSI target, this IP address will
be selected by default for you.

6. If necessary, change settings for mirror resynchronization and replication on the
Performance tab.


The settings on this tab affect system performance. The defaults should be
optimal for most configurations. You should only need to change the settings for
special situations, such as if your mirror is remotely located.
Mirror Synchronization: Use [n] outstanding commands of [n] KB - The number
of commands being processed at one time and the I/O size. This must be a
multiple of the sector size.
Synchronize Out-of-Sync Mirrors - Determine how often the system should
check and attempt to resynchronize active out-of-sync mirrors, how often it
should retry synchronization if it fails to complete, and whether or not to include
replica mirrors. These settings will only be used for active mirrors. If a mirror is
suspended because the lag time exceeds the acceptable limit, that
resynchronization policy will apply instead.
Replication: TCP or RUDP - TCP is the default replication protocol for all new
installations of IPStor 6.0 or higher, unless the connecting server does not
support TCP. If you want to update the protocol for existing replication jobs or for
an entire server, click the Update Protocol button.
Timeout replication after [n] seconds - Timeout after inactivity. This must be the
same on both the primary and target replication servers.
Throttle - The maximum amount of bandwidth that will be used for replication.
This is a global server parameter and affects all resources using either remote or
local replication. Throttle does not affect manual replication scans; it only affects
actual replication. It also does not affect continuous replication, which uses all
available bandwidth. Leaving the Throttle field set to 0 (zero) means that the
maximum available bandwidth will be used. Besides 0, valid input is 10 to
1,000,000 KB/s (1 GB/s).
Enable Microscan - Microscan analyzes each replication block on-the-fly during
replication and transmits only the changed sections on the block. This is
beneficial if the network transport speed is slow and the client makes small
random updates to the disk. This global Microscan option overrides the
Microscan setting for each individual virtual device.
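The idea behind Microscan, comparing each replication block section by section and transmitting only the changed sections, can be sketched like this. The 512-byte section size and the function shape are assumptions for illustration; the actual on-the-fly analysis is internal to the product.

```python
def microscan(old_block: bytes, new_block: bytes, section_size: int = 512):
    """Yield (offset, data) for each section of the block that differs,
    so only changed sections need to be transmitted. A sketch of the
    Microscan idea; the section size is an assumption."""
    for off in range(0, len(new_block), section_size):
        if new_block[off:off + section_size] != old_block[off:off + section_size]:
            yield off, new_block[off:off + section_size]
```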


7. Select the Auto Save Config tab and enter information for automatically saving
your storage server system configuration.

You can set your system to automatically replicate your system configuration to
an FTP server on a regular basis. Auto Save takes a point-in-time snapshot of
the storage server configuration prior to replication.
The target server you specify in the Ftp Server Name field must have an FTP
server installed and enabled.
The Target Directory is the directory on the FTP server where the files will be
stored. The directory name you enter here (such as ipstorconfig) is a directory
on the FTP server (for example ftp\ipstorconfig). You should not enter an
absolute path like c:\ipstorconfig.
The Username is the user that the system will log in as. You must create this
user on the FTP site. This user must have read/write access to the directory
named here.
In the Interval field, determine how often to replicate the configuration.
Depending upon how frequently you make configuration changes to CDP, set
the interval accordingly. You can always save manually in between if needed. To
do this, highlight your storage server in the tree, select File menu --> Save
Configuration.
In the Number of Copies field, enter the maximum copies to keep. The oldest
copy will be deleted as each new copy is added.
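What the Auto Save schedule does on each interval, uploading a configuration copy to the FTP target directory and pruning the oldest copies beyond the limit, can be sketched as below. The file naming, the helper, and the function shape are assumptions for illustration; CDP's actual implementation is not documented here.

```python
import ftplib
from datetime import datetime

def copies_to_delete(names, max_copies):
    """Return the oldest timestamp-named copies beyond the retention limit."""
    return sorted(names)[:-max_copies] if max_copies > 0 else []

def auto_save_config(host, user, password, target_dir, local_path, max_copies):
    """Upload a timestamped configuration copy to the FTP server and
    prune old copies. A sketch only; names are illustrative."""
    stamp = datetime.now().strftime("%Y%m%d%H%M%S")
    with ftplib.FTP(host, user, password) as ftp:
        ftp.cwd(target_dir)  # a path relative to the FTP root, never absolute
        with open(local_path, "rb") as f:
            ftp.storbinary(f"STOR config-{stamp}.dat", f)
        copies = [n for n in ftp.nlst() if n.startswith("config-")]
        for old in copies_to_delete(copies, max_copies):
            ftp.delete(old)  # the oldest copies are deleted first
```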

8. On the Location tab, you can enter a specific physical location of the machine.
You can also select an image (smaller than 500 KB) to identify the server
location. Once the location information is saved, the new tab displays in the
FalconStor Management Console for that server.


Manage accounts

Only the root user can manage users and groups or reset passwords. You will need
to add an account for each person who will have administrative rights in CDP. You
will also need to add a user account for clients that will be accessing storage
resources from a host-based application (such as FalconStor DiskSafe or FileSafe).
To make account management easier, users can be grouped together and handled
simultaneously.
To manage users and groups:

1. Right-click on the server and select Accounts.

All existing users and administrators are listed on the Users tab, and all existing
groups are listed on the Groups tab.

2. Select the appropriate option.

Note: You cannot manage accounts or reset a password when a server is in
failover state.


Add a user To add a user:

1. Click the Add button.

2. Enter the name for this user.


The username must adhere to the naming convention of the operating system
running on your storage server. Refer to your operating system’s documentation
for naming restrictions.

3. Enter a password for this user and then re-enter it in the Confirm Password field.
For iSCSI clients and host-based applications, the password must be between
12 and 16 characters. The password is case sensitive.

4. Specify the type of account.


Users and administrators have different levels of permissions in CDP.
• IPStor Admins can perform any CDP operation other than managing
accounts. They are also authorized for CDP client authentication.
• IPStor Users can manage virtual devices assigned to them and can allocate
space from the storage pool(s) assigned to them. They can also create new
SAN resources, clients, and groups as well as assign resources to clients,
and join resources to groups, as long as they are authorized. IPStor Users
can only view resources to which they are assigned. IPStor Users are also
authorized for CDP client authentication. Any time an IPStor User creates a
new SAN resource, client, or group, access rights will automatically be
granted for the user to that object.

5. (IPStor Users only) If desired, specify a quota.


Quotas enable the administrator to place manageable restrictions on storage
usage as well as storage used by groups, users, and/or hosts.
A user quota limits how much space is allocated to this user for auto-expansion.
Resources managed by this user can only auto-expand if the user’s quota has
not been reached. The quota also limits how much space a host-based
application, such as DiskSafe, can allocate.

6. Click OK to save the information.
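The password rule in step 3, 12 to 16 characters for iSCSI clients and host-based applications, amounts to a length check. The function below is illustrative, not part of the product:

```python
def valid_iscsi_password(password: str) -> bool:
    """Check the 12-to-16 character rule for iSCSI clients and
    host-based applications. Passwords are case sensitive, which a
    plain string comparison already preserves. Illustrative only."""
    return 12 <= len(password) <= 16
```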


Add a group To add a group:

1. Select the Groups tab.

2. Click the Add button.

3. Enter a name for the group.


You cannot have any spaces or special characters in the name.

4. If desired, specify a quota.


The quota limits how much space is allocated to each user in this group. The
group quota overrides any individual user quota that may be set.

5. Click OK to save the information.

Add users to groups: Each user can belong to only one group.
You can add users to groups on both the Users and Groups tabs.
On the Users tab, you can highlight a single user and click the Membership button to
add the user to a group.
On the Groups tab, you can highlight a group and click the Membership button to
add multiple users to that group.

(The Membership dialog differs slightly depending on whether you open it from the
Users tab or the Groups tab.)


Set a quota: You can set a quota for a user on the Users tab and a quota for a
group on the Groups tab. The quota limits how much space is allocated to each
user. If a user is in a group, the group quota will override the user quota.
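The override rule, group quota wins over user quota, and its effect on auto-expansion can be sketched as follows. The quota units and the None-means-unlimited convention are assumptions for illustration.

```python
def effective_quota(user_quota, group_quota):
    """A group quota overrides any individual user quota; None means
    no quota was set at that level. Illustrative only."""
    return group_quota if group_quota is not None else user_quota

def can_auto_expand(used, requested, user_quota, group_quota):
    # Auto-expansion is allowed only while within the effective quota.
    quota = effective_quota(user_quota, group_quota)
    return quota is None or used + requested <= quota
```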

Reset a password: To change a password, select Reset Password. You will need to
enter a new password and then re-type the password to confirm. You cannot
change the root user’s password from this dialog; use the Change Password option
described below.

Change the root user’s password

This option lets you change the root user’s CDP password if you are currently
connected to a server.

1. Right-click on the server and select Change Password.

2. Enter your old password, the new one, and then re-enter it to confirm.

Check connectivity between the server and console

You can check if the console can successfully connect to the storage server. To
check connectivity, right-click on a server and select Connectivity Test.
Running this test verifies that network connectivity between the console and the
server is working. If the test fails at any point, check with your network
administrator to determine the cause.

Add an iSCSI User or Mutual CHAP User

As the root user, you can add or delete an iSCSI user or a mutual CHAP user, or
reset its CHAP secret. Other users (IPStor administrators or IPStor users) can also
change the CHAP secret of an iSCSI user if they know the original CHAP secret.
To add an iSCSI user or Mutual CHAP User from an iSCSI server:

1. Right-click on the server and select iSCSI Users from the menu.

2. Select Users.


The iSCSI User Management screen displays.

From this screen, you can select an existing user from the list to delete the user
or reset the CHAP secret.

3. Click the Add button to add a new iSCSI user.


The iSCSI User add dialog screen displays.

4. Enter a unique user name for the new iSCSI user.

5. Enter and confirm the password and click OK.


The Mutual CHAP level of security allows the target and the initiator to authenticate
each other. A separate secret is set for each target and for each initiator in the
storage area network (SAN). You can select Mutual CHAP Users (Right-click on the
iSCSI server --> iSCSI Users --> Mutual CHAP User) to manage iSCSI Mutual
CHAP Users.
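For background, standard CHAP (RFC 1994) authenticates each side by hashing a per-link secret with a challenge, which is why mutual CHAP needs a separate secret for each target and each initiator. A generic sketch of the response computation, not FalconStor-specific code:

```python
import hashlib

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """Compute a standard CHAP response (RFC 1994): MD5 over the
    identifier octet, the shared secret, and the challenge. Generic
    protocol illustration, not CDP code."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()
```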


The iSCSI Mutual CHAP User Management screen displays, allowing you to delete
users or reset the Mutual CHAP secret.

Apply software patch updates

You can apply patches to your storage server through the console.

Add patch To apply a patch:

1. Download the patch onto the computer where the console is installed.

2. Highlight a storage server in the tree.

3. Select Tools menu --> Add Patch.


The patch will be copied to the server and installed.

Rollback patch To remove (uninstall) a patch and restore the original files:

1. Highlight a storage server in the tree.

2. Select Tools menu --> Rollback Patch.


System maintenance

The FalconStor Management Console gives you a convenient way to perform
system maintenance for your storage server.

Note: The system maintenance options are hardware-dependent. Refer to your
hardware documentation for specific information.

Network configuration: If you need to change storage server IP addresses, you
must make these changes using Network Configuration. Using YaST or other
third-party utilities will not update
the information correctly.

1. Right-click on a server and select System Maintenance --> Network
Configuration.

Domain name - Internal domain name.


Append suffix to DNS lookup - If a domain name is entered, it will be appended
to the machine name for name resolution.
DNS - IP address of your DNS server.
Default gateway - IP address of your default gateway.
NIC - List of Ethernet cards in the server.
Enable Telnet - Enable/disable the ability to Telnet into the server.
Enable FTP - Enable/disable the ability to FTP into the server. The storage
server must have the "pure-ftpd" package installed in order to use FTP.
Allow root to log in to telnet session - Log in to your telnet session using root.


Network Time Protocol - Allows you to keep the date and time of your storage
server in sync with Internet NTP servers. Click Config NTP to enter the IP
addresses of up to five Internet NTP servers.

2. Click Config to configure each Ethernet card.

If you select Static, you must add addresses and net masks.
MTU - Set the maximum transfer unit of each IP packet. If your card supports it,
set this value to 9000 for jumbo frames.

Note: If the MTU is changed from 9000 to 1500, a performance drop will occur. If
you then change the MTU back to 9000, the performance will not increase until
the server is restarted.

Set hostname Right-click on a server and select System Maintenance --> Set Hostname to change
your hostname. You must restart the server if you change the hostname.

Note: Do not change the hostname if you are using block devices. If you do, all
block devices claimed by CDP will be marked offline and seen as foreign devices.

Restart IPStor Right-click on a server and select System Maintenance --> Restart IPStor to restart
the Server processes.

Restart network Right-click on a server and select System Maintenance --> Restart Network to
restart your local network configuration.

Reboot Right-click on a server and select System Maintenance --> Reboot to reboot your
server.


Halt Right-click on a server and select System Maintenance --> Halt to turn off the server
without restarting it.

IPMI Intelligent Platform Management Interface (IPMI) is a hardware level interface that
monitors various hardware functions on a server.
If CDP detects IPMI when the server boots up, you will see IPMI options on the
System Maintenance --> IPMI sub-menu: Monitor and Filter.
Monitor - Displays the hardware information that is presented to CDP. Information is
updated every five minutes but you can click the Refresh button to update more
frequently.

You will see a red warning icon in the first column if there is a problem with a
component.
In addition, you will see a red exclamation mark on the server in the tree, and an
Alert tab will appear with details about the error.


Filter - You can filter out components you do not want to monitor. This may be
useful for hardware you do not need to track or for spurious errors, such as when
the hardware being monitored is not actually present. You must enter the Name of
the component being monitored exactly as it appears in the hardware monitor
above.

Physical Resources

Physical resources are the actual devices attached to this storage server. The SCSI
adapters tab displays the adapters attached to this server and the SCSI Devices tab
displays the SCSI devices attached to this server. These devices can include hard
disks, tape libraries, and RAID cabinets. For each device, the tab displays the SCSI
address (comprised of adapter number, channel number, SCSI ID, LUN) of the
device, along with the disk size (used and available). If you are using FalconStor’s
Multipathing, you will see entries for the alternate paths as well.
The Storage Pools tab displays a list of storage pools that have been defined,
including the total size and number of devices in each storage pool.
The Persistent Binding tab displays the binding of each storage port to its unique
SCSI ID.
When you highlight a physical device, the Category field in the right-hand pane
describes how the device is being used. Possible values are:
• Reserved for virtual device - A hard disk that has not yet been assigned to a
SAN Resource or Snapshot area.
• Used by virtual device(s) - A hard disk that is being used by one or more
SAN Resources or Snapshot areas.
• Reserved for direct device - A SCSI device, such as a hard disk, tape drive
or library, that has not yet been assigned as a SAN Resource.
• Used in direct device - A directly mapped SCSI device, such as a hard disk,
tape drive or library, that is being used as a direct device SAN Resource.
• Reserved for service enabled device - A hard disk with existing data that
has not yet been assigned to a SAN Resource.
• Used by service enabled device - A hard disk with existing data that has
been assigned to a SAN Resource.
• Unassigned - A physical resource that has not been reserved yet.
• Not available for IPStor - A miscellaneous SCSI device that is not used by
the storage server (such as a scanner or CD-ROM).
• System - A hard disk where system partitions exist and are mounted (i.e.
swap file, file system installed, etc.).
• Reserved for Striped Set - Used in a disk striping configuration.

Physical resource icons

The following table describes the icons that are used to describe physical resources:

Icon Description

The T icon indicates that this is a target port.

The I icon indicates that this is an initiator port.

The V icon indicates that this disk has been virtualized or is reserved for
a virtual disk.

The D icon indicates that this is a direct device or is reserved for a direct
device.


The S icon indicates that this is a service enabled device or is reserved
for a service enabled device.

The a icon indicates that this device is used in the logical resource that is
currently being highlighted in the tree.

The red arrow indicates that this Fibre Channel HBA is down and cannot
access its storage.

Storage Pool Icons:

The V icon to the left of the device indicates that this storage pool is
comprised of virtual devices. An S icon would indicate that it is comprised
of service enabled devices and a D icon would indicate that it is
comprised of direct devices.
The C icon to the right of the device indicates that this storage pool is
designated for SafeCache resources.

The G icon to the right of the device indicates that this is a general
purpose storage pool which can be used for any type of resource.

The H icon to the right of the device indicates that this storage pool is
designated for HotZone resources.

The J icon to the right of the device indicates that this storage pool is
designated for CDP journal resources.

The N icon to the right of the device indicates that this storage pool is
designated for snapshot resources.

The R icon to the right of the device indicates that this storage pool is
designated for continuous replication resources.

The S icon to the right of the device indicates that this storage pool is
designated for SAN resources and their corresponding replicas.

Failover and Cross-mirror icons:

The physical disk appearing in color indicates that it is local to this server.
The V indicates that the disk is virtualized for this server. If there were a
Q on the icon, it would indicate that this disk is the quorum disk that
contains the configuration repository.

The physical disk appearing in black and white indicates that it is a remote
physical disk. The F indicates that the disk is a foreign disk.

Prepare devices to become logical resources

You can use one of FalconStor’s disk preparation options to change the category of
a physical device. This is important to do if you want to create a logical resource
using a device that is currently unassigned.


• The storage server detects new devices when you connect to it. When they
are detected you will see a dialog box notifying you of the new devices. At
this point you can highlight a device and press the Prepare Disk button to
prepare it.
The Physical Devices Preparation Wizard will help you to virtualize, service-
enable, unassign, or import physical devices.

• At any time, you can prepare a single unassigned device by doing the
following: Highlight the device, right-click, select Properties and select the
device category. (You can find all unassigned devices under the Physical
Resources/Adapters node of the tree view.)
• For multiple unassigned devices, highlight Physical Resources, right-click
and select Prepare Disks. This launches a wizard that allows you to
virtualize, unassign, or import multiple devices at the same time.

Rename a physical device

You can rename a physical device. When a device is renamed on a server in a
failover pair, the device gets renamed on the partner server also. However, it is not
possible to rename a device when the server has failed over to its partner.

1. To rename a device, right-click on the device and select Rename.

2. Type the new name and press Enter.


Use IDE drives with CDP

If you have an IDE drive that you want to virtualize and use as storage, you must
create a block device from it. To do this:

1. Right-click on Block Devices (under Physical Devices) and select Create Disk.

2. Select the device and specify a SCSI ID and LUN for it.
The defaults are the next available SCSI ID and LUN.

3. Click OK when done.


This virtualizes the device. When it is finished, you will see the device listed
under Block Devices. You can now create logical resources from this device.
Unlike a regular SCSI virtual device, block devices can be deleted.

Note: Do not change the hostname if you are using block devices. If you do, all
block devices claimed by CDP will be marked offline and seen as foreign devices.

Rescan adapters

1. To rescan all adapters, right-click on Physical Resources and select Rescan.


If you only want to scan a specific adapter, right-click on that adapter and select
Rescan.


If you want to discover new devices without scanning existing devices, click the
Discover New Devices radio button and then check the Discover new devices
only without scanning existing devices checkbox. You can then specify
additional scan details.

2. Determine what you want to rescan.


If you are discovering new devices, set the range of adapters, SCSI IDs, and
LUNs that you want to scan.
Use Report LUNs - The system sends a SCSI request to LUN 0 and asks for a
list of LUNs. Note that this SCSI command is not supported by all devices.
Stop scan when a LUN without a device is encountered - This option will scan
LUNs sequentially and then stop after the last LUN is found. Use this option only
if all of your LUNs are sequential.
Auto detect FC HBA SCSI ID - This option scans QLogic HBAs. It ignores the
range of SCSI IDs specified and automatically detects the SCSI IDs with
persistent binding.
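The effect of these scan options can be made concrete with a small sketch (illustrative only; the appliance performs the actual SCSI scan, and the helper name is hypothetical):

```python
def scan_luns(present_luns, max_lun=255, stop_at_gap=False):
    """Model which LUNs a rescan would discover.

    present_luns: LUNs that actually have a device behind them.
    stop_at_gap:  models 'Stop scan when a LUN without a device is
                  encountered': scanning halts at the first missing
                  LUN, so devices after a gap are never found.
    """
    found = []
    for lun in range(max_lun + 1):
        if lun in present_luns:
            found.append(lun)
        elif stop_at_gap:
            break  # first empty LUN ends the scan
    return found

# Devices behind LUNs 0-2 and 5 (note the gap at 3 and 4).
devices = {0, 1, 2, 5}
print(scan_luns(devices))                    # [0, 1, 2, 5]
print(scan_luns(devices, stop_at_gap=True))  # [0, 1, 2] -- LUN 5 is missed
```

This is why the stop-at-gap option is safe only when all of your LUNs are sequential.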

Import a disk

You can import a ‘foreign’ disk into a CDP appliance. A foreign disk is a virtualized
physical device containing FalconStor logical resources previously set up on a
different storage server. You might need to do this if a storage server is damaged
and you want to import the server’s disks to another storage server.
When you right-click on a disk that CDP recognizes as ‘foreign’ and select the
Import option, the disk’s partition table is scanned and an attempt is made to
reconstruct the virtual drive out of all of the segments.


If the virtual drive was constructed from multiple disks, you can highlight Physical
Resources, right-click and select Prepare Disks. This launches a wizard that allows
you to import multiple disks at the same time.
As each drive is imported, the drive is marked offline because it has not yet found all
of the segments. Once all of the disks that were part of the virtual drive have been
imported, the virtual drive is re-constructed and is marked online.
Importing a disk preserves the data that was on the disk but does not preserve the
client assignments. Therefore, after importing, you must either reassign clients to
the resource or use the File menu --> Restore Configuration option.

Notes:
• The GUID (Global Unique Identifier) is the permanent identifier for each
virtual device. When you import a disk, the virtual ID, such as SANDisk-
00002, may be different from the original server. Therefore, you should
use the GUID to identify the disk.
• If you are importing a disk that can be seen by other storage servers, you
should perform a rescan before importing. Otherwise, you may have to
rescan after performing the import.

Test physical device throughput

You can test the following for your physical devices:


• Sequential throughput
• Random throughput
• Sequential I/O rate
• Random I/O rate
• Latency
To check the throughput for a device:

1. Right-click on the device (under Physical Resources).

2. Select Test from the menu.


The system will test the device and then display the throughput results on a new
Throughput tab.
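If you want a rough external cross-check of these metrics, a simple timing loop can approximate sequential versus random read throughput and I/O rate. This sketch is not the console's own test; it is an illustrative approximation, and the device path in the commented example is a placeholder:

```python
import os
import random
import time

def read_test(path, block_size=64 * 1024, blocks=256, sequential=True):
    """Time `blocks` reads of `block_size` bytes from `path` and
    return (throughput in MB/s, I/O rate in IOPS)."""
    size = os.path.getsize(path)
    max_block = max(size // block_size - 1, 0)
    if sequential:
        offsets = [i % (max_block + 1) for i in range(blocks)]
    else:
        offsets = [random.randint(0, max_block) for _ in range(blocks)]
    start = time.perf_counter()
    with open(path, "rb") as f:
        for block in offsets:
            f.seek(block * block_size)
            f.read(block_size)
    elapsed = time.perf_counter() - start
    mb_read = blocks * block_size / (1024 * 1024)
    return mb_read / elapsed, blocks / elapsed

# Example (placeholder path; requires read access to the device):
# mb_s, iops = read_test("/dev/sdb", sequential=False)
```

Sequential and random results typically diverge sharply on rotating disks, which is what the console's separate sequential/random measurements capture.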

SCSI aliasing

SCSI aliasing works with FalconStor’s Multipathing option to eliminate a potential


point of failure in your storage network by providing multiple paths to your storage
devices using multiple Fibre Channel switches and/or multiple adapters and/or
storage devices with multiple controllers. In a multiple path configuration, CDP
automatically detects all paths to the storage devices. If one path fails, CDP
automatically switches to another.
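Conceptually, automatic path failover behaves like the toy model below (a sketch for illustration only; CDP implements this inside its storage stack, and the path names are made up):

```python
class MultipathDevice:
    """Toy model of automatic path failover."""

    def __init__(self, paths):
        self.paths = list(paths)   # all paths discovered to the device
        self.failed = set()        # paths currently marked offline

    def active_path(self):
        """Return the first healthy path, switching past failed ones."""
        for path in self.paths:
            if path not in self.failed:
                return path
        raise IOError("all paths to the device are down")

# Two paths through two Fibre Channel switches (hypothetical names).
dev = MultipathDevice(["via-fc-switch-A", "via-fc-switch-B"])
print(dev.active_path())            # via-fc-switch-A
dev.failed.add("via-fc-switch-A")   # simulate a path failure
print(dev.active_path())            # via-fc-switch-B
```

I/O only fails outright when every path to the device is down, which is the point of configuring multiple switches, adapters, or controllers.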


Repair paths to a device

Repair is the process of removing one or more physical device paths from the
system and then adding them back. Repair may be necessary when a device is not
responsive which can occur if a storage controller has been reconfigured or if a
standby alias path is offline/disconnected.
If a path is faulty, adding it back may not be possible.
To repair paths to a device:

1. Right-click on the device and select Repair.

If all paths are online, the following message will be displayed instead: “There
are no physical device paths that can be repaired.”

2. Select the path to the device that needs to be repaired.


If the path is still missing after the repair or the entire physical device is gone
from the console, the path could not be repaired. You should investigate the
cause, correct the problem, and then rescan adapters with the Discover New
Devices option.


Logical Resources

Logical resources are all of the resources defined on the storage server, including
SAN Resources and Groups.

SAN Resources
SAN logical resources consist of sets of storage blocks from one or more physical
hard disk drives. This allows the creation of logical resources that contain a portion
of a larger physical disk device or an aggregation of multiple physical disk devices.
Clients do not gain access to physical resources; they only have access to logical
resources. This means that an administrator must configure each physical resource
into one or more logical resources so that they can be assigned to the clients.
When you highlight a SAN Resource, you will see a small icon next to each device
that is being used by the resource.


In addition, when you highlight a SAN Resource, you will see a GUID field in the
right-hand pane.

The GUID (Global Unique Identifier) is the permanent identifier for this virtual device.
The virtual ID, SANDisk-00002, is not. You should make note of the GUID, because,
in the event of a disaster, this identifier will be important if you need to rebuild your
system and import this disk.

Groups
Groups are sets of drives (virtual drives and service-enabled drives) that are
grouped together for SafeCache or snapshot synchronization purposes. For
example, when one drive in the group is to be replicated or backed up, the entire
group will be snapped together to maintain a consistent image.

Logical resource icons

The following table describes the icons that are used to show the status of logical
resources:

Icon Description

This icon indicates a warning, such as:


• Virtual device offline (or has incomplete segments)
• Mirror is out of sync
• Mirror is suspended
• TimeMark rollback failed
• Replication failed
• One or more supporting resources is not accessible (SafeCache,
CDP, Snapshot resource, HotZone, etc.)
This icon indicates an alert, such as:
• Replica in disaster recovery state (after forcing a replication
reversal)
• Cross-mirror needs to be repaired on the virtual appliance
• Primary replica is no longer valid as a replica
• Invalid replica

Write caching

You can leverage a third party disk subsystem's built-in caching mechanism to
improve I/O performance. Write caching allows the third party disk subsystem to
utilize its internal cache to accelerate I/O.


To write cache a resource, right-click on it and select Write Cache --> Enable.

Replication

The Incoming and Outgoing objects under the Replication object display information
about each server that replicates to this server or receives replicated data from this
server. If the server’s icon is white, the partner server is "connected" or "logged in". If
the icon is yellow, the partner server is "not connected" or "not logged in".
When you highlight the Replication object, the right-hand pane displays a summary
of replication to/from each server.
For each replica disk, you can promote the replica or reverse the replication. Refer
to the Replication chapter in the CDP Reference Guide for more information about
using replication.


SAN Clients

SAN Clients are the actual file and application servers that utilize the storage
resources via the storage server.
These SAN Clients access their storage resources via iSCSI initiators (for iSCSI) or
HBAs (for Fibre Channel or iSCSI). The storage resources appear as locally
attached devices to the SAN Clients’ operating systems (Windows, Linux, Solaris,
etc.) even though the devices are actually located at the storage server site.
When you highlight a specific SAN client, the right-hand pane displays the Client ID,
type, and authentication status, as well as information about the client machine.
The Resources tab displays a list of SAN Resources that are allocated to this client.
The adapter, SCSI ID, and LUN are relative to this CDP SAN client only; other clients
that have access to the same SAN Resource may have different adapter, SCSI ID,
and LUN information.


Change the ACSL

You can change the ACSL (adapter, channel, SCSI, LUN) for a SAN Resource
assigned to a SAN client if the device is not currently attached to the client. To
change, right-click on the SAN Resource under the SAN Client object (you cannot
do this from the SAN Resources object) and select Properties. You can enter a new
adapter, SCSI ID, or LUN.

Note: For Windows clients:


• One SAN Resource for each client must have a LUN of 0. Otherwise, the
operating system will not see the devices assigned to the SAN client. In
addition, for the Linux OS, the rest of the LUNs must be sequential.
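This rule can be expressed as a small validation sketch (hypothetical helper; neither CDP nor DiskSafe ships such a tool):

```python
def validate_luns(luns, linux_client=False):
    """Check a client's assigned LUNs against the note above:
    one resource must be at LUN 0, and for Linux clients the
    LUNs must also be sequential (0, 1, 2, ...)."""
    luns = sorted(luns)
    if not luns or luns[0] != 0:
        return False  # the OS will not see the assigned devices
    if linux_client:
        return luns == list(range(len(luns)))
    return True

print(validate_luns([0, 1, 3]))                     # True (Windows client)
print(validate_luns([0, 1, 3], linux_client=True))  # False -- gap at LUN 2
print(validate_luns([5, 6]))                        # False -- no LUN 0
```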

Grant access to a SAN Client

By default, only the root user and IPStor admins can manage SAN resources,
groups, or clients. While IPStor users can create new SAN Clients, if you want an
IPStor user to manage an existing SAN Client, you must grant that user access. To
do this:

1. Right-click on a SAN Client and select Access Control.

2. Select which user can manage this SAN Client.


Each SAN Client can only be assigned to one IPStor user. This user will have
rights to perform any function on this SAN Client, including assigning, adding
protocols, and deletion.

Webstart for Java console


In cases where you need to access the storage server from a machine that does not
have the FalconStor Management Console installed, you can download the
FalconStor Management Console as follows:

1. Open a web browser on the machine you wish to install the console.

2. Type in the IP address of the storage server you wish to access.


If Java is not detected, you will be prompted to install the appropriate version of
JRE. Once detected, the FalconStor Management Console automatically
installs.

Notes:
• A security warning may display regarding the digital signature for the Java
applet. Click Run to accept the signature.
• If you are using Windows Server 2008 or Vista and plan to use the
FalconStor Management Console, use the Java console (with Java 1.6)
for best results.

If you plan to reuse the console on this machine, repeat steps 1 and 2 above or
create a shortcut from Java when prompted.
To make sure you are prompted to create a shortcut to Java Web Start, follow the
steps below:

1. Start --> Control Panel --> Java


The Java control panel launches.

2. Click on the Advanced tab and select the Prompt user or Always allow radio
button under Shortcut Creation.

Console options
To set options for the console:

1. Select Tools --> Console Options.

2. Make any necessary changes.


Remember password for session - If the console is already connected to a
server, when you attempt to open a second, third, or subsequent server, the
console will use the credentials that were used for the last successful
connection. If this option is unchecked, you will be prompted to enter a password
for every server you try to open.
Automatically time out servers after nn minute(s) - The console will collapse a
server that has been idle for the number of minutes you specify. If you need to
access the server again, you will have to reconnect to it. The default is 10
minutes. Enter 00 minutes to disable the timeout.
Update statistics every nn second(s) - The console will update statistics by the
frequency you specify.
Automatically refresh the event log every nn second(s) - The console will update
the event log by the frequency you specify, only when you are viewing it.
Console Log Options - The console log (ipstorconsole.log) is kept on the local
machine and stores information about the local version of the console. The
console log is displayed at the very bottom of the console screen. The options
affect how information for each console session will be maintained:
Overwrite log file - Overwrite the information from the last console session when
you start a new session.
Append to log file - Keep all session information.
Do not write to log file - Do not maintain a console log.

Create custom menu

You can create a menu in the FalconStor Management Console from which you can
launch external applications. This can add to the convenience of FalconStor’s
centralized management paradigm by allowing your administrators to start all of their
applications from a single place. The Custom menu will appear in your console
along with the normal menu (between Tools and Help).
To create a custom menu:

1. Select Tools --> Set up Custom Menu.


2. Click Add and enter the information needed to launch this application.

Menu Label - The application title that will be displayed in the Custom menu.
Command - The file (usually an .exe) that launches this application.
Command Argument - An argument that will be passed to the application. If you
are launching an Internet browser, this could be a URL.
Menu Icon - The graphics file that contains the icon for this application. This will
be displayed in the Custom menu.


Data Management

Verify snapshot creation and status in DiskSafe


Browse the snapshot list

When you have finished taking snapshots, you can expand Snapshots in the left
pane to see the following two additional nodes:
• Disks - Expand this node to view a list of all protected disks and partitions. If
a disk or partition is part of a group, the name of that group displays in
brackets after the disk or partition name.
• Groups - Expand this node to view a list of all groups. Snapshots taken of a
group display for all members of the group.
If you take a snapshot of an individual disk or partition, and click that disk or partition
name within the Snapshots --> Disks node, the right panel displays the following
information about the snapshot:
• The snapshot number
• The date the snapshot was taken
• The time the snapshot was taken
• The mounted status of the snapshot - Yes if it has been mounted or No if
it has not
• The name of the group (if the snapshot was taken of a group rather than of
an individual disk or partition)
If you take a snapshot of a group and click that group’s name within the
Snapshots --> Groups node, the right panel displays the following information
about the snapshot:
• The snapshot number
• The date the snapshot was taken
• The time the snapshot was taken
Depending on the amount of changed data, it might take several minutes for the
snapshot to appear. If the snapshot does not appear automatically, right-click the
group node and then click Refresh to update the screen.
For group snapshots: If the group uses periodic mode and is configured to take a
snapshot automatically after each synchronization, taking a snapshot manually
actually generates two snapshots. The first is the result of the disks being
synchronized; the second is the result of the snapshot being taken manually.


Check DiskSafe Events and the Windows event log

Events
You can also get the status of snapshots by highlighting Events in the DiskSafe
console. You'll be able to browse all events, including the scheduled snapshot
creation status.

If the DiskSafe events list is too long, you can right-click Events and select Set Filter.
There, you can set a Time Range, display the events by Category, Type or Owner,
or use the Description search to find specific information.

Windows event log
You can also view snapshot status from the Windows event log.

You will see the following in the Windows event log to detail the snapshot process:

1. DiskSafe starts the snapshot and passes the command to the FalconStor
Intelligent Management Agent (IMA). IMA gets the drive information of the
protected disk or partition.


2. SDM/IMA sends the drive information to the proper snapshot agent, after which
time you will see the logs describing the application process.

3. The snapshot agent successfully puts the database into backup mode and then
tells the CDP appliance to take the snapshot on the DiskSafe mirror disk.

4. DiskSafe confirms the snapshot creation and then reports the snapshot
information.


Check Microsoft Exchange snapshot status

To confirm the Exchange snapshot process, you can check the Windows Event log.
The Snapshot Agent will send the backup command to each storage group on the
protected disk. You can find these logs after the appctrl event:

1. The snapshot agent sends the full backup command to the storage group and
then Exchange Extensible Storage Engine (ESE) starts the full backup process.

2. Exchange ESE checks the log files and the checkpoint.

3. Exchange ESE processes the log files. The snapshot agent will not request log
truncation, so other Exchange backup processes are not affected.

4. Exchange ESE completes the backup process on a storage group. You may see
the same process on another storage group.


Check Microsoft SQL Server snapshot status

You can see numerous events from the SQL Server in the Windows Event log. The
SQL snapshot agent sends the checkpoint command to each of the SQL databases
on the protected disk; you can check whether the agent sent the checkpoint
commands to all databases successfully.

You can also get detailed information from the agent trace log. You can find
agttrace.log under the IMA installation folder. It is usually installed in C:\Program
Files\FalconStor\IMA. There, you can see the connection to the SQL instance and
the list of SQL databases for which the checkpoint was successfully created.
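If you want to scan agttrace.log from a monitoring script rather than by eye, a minimal sketch follows. The log line content shown is an assumption; adjust the keyword to whatever your agent version actually writes:

```python
from pathlib import Path

def checkpoint_lines(log_text, keyword="checkpoint"):
    """Return the log lines that mention the checkpoint command."""
    return [line for line in log_text.splitlines()
            if keyword.lower() in line.lower()]

# Default IMA location mentioned above; adjust for your install.
log_path = Path(r"C:\Program Files\FalconStor\IMA\agttrace.log")
if log_path.exists():
    for line in checkpoint_lines(log_path.read_text(errors="replace")):
        print(line)
```

The same pattern works for the Oracle and Lotus Notes/Domino checks below, with a different keyword.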


Check Oracle snapshot status

You can get detailed information from the agent trace log. You can find agttrace.log
under the IMA installation folder. It is usually installed in C:\Program
Files\FalconStor\IMA. There, you can see the connection to the Oracle system and
the ALTER TABLESPACE ... BEGIN BACKUP and ALTER TABLESPACE ... END
BACKUP commands issued to all tablespaces on the protected disk.


Check Lotus Notes/Domino snapshot status

You can get the detailed information from the agent trace log. You can find
agttrace.log under the IMA installation folder. It is usually installed in C:\Program
Files\FalconStor\IMA. There, you can see the connection to the Domino system and
the backup command to all NSF databases on the protected disks. You can also
check the snapshot agent communication from the log of the Domino server.

Reports
The CDP appliance retains information about the health and behavior of the physical
and virtual storage resources on the Server. CDP provides an event log and reports
that offer a wide variety of information.

CCM Reports

CCM provides a variety of reports related to clients and managed applications.


• DiskSafe Current Status Report - displays the current status of DiskSafe
clients.
• DiskSafe Synchronization Report - displays the status of clients and how
many have synchronized their disks today, since yesterday, this week, or for
a specific period.
• DiskSafe Snapshot Report - displays the status of clients and the number
and size of snapshots made today, since yesterday, this week, or for a
specific period.
• FileSafe Backup Report - displays the status of FileSafe backup jobs: the
latest backup, today's backup, backups since yesterday, backups this week,
or backups during a specific period.


CDP Reports

CCM provides a variety of reports related to the server, including:


• Performance and throughput - By SAN Client, SAN Resource, SCSI
channel, and SCSI device.
• Usage/allocation - By SAN Client, SAN Resource, Physical Resource, and
SCSI adapter.
• System configuration - Physical Resources.
• Replication reports - You can run an individual report for a single server or
you can run a global report for multiple servers.
Individual reports are viewed from the Reports object in the console. Global
replication reports are created from the Servers object. Reports can be scheduled
or run on an as-needed basis. To create a report, right-click on the Reports object
and select New.
Once the report is generated, you can save report data from the server and device
throughput and usage reports in a comma delimited (.csv) or tab delimited (.txt) text
file and export the information. To export report data, right-click on a report that is
generated and select Export.
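The exported comma-delimited file can then be post-processed with standard tools. A sketch, using hypothetical column headers (use the headers from your actual export):

```python
import csv
import io

def total_column(report_text, column):
    """Sum a numeric column from an exported (.csv) report."""
    reader = csv.DictReader(io.StringIO(report_text))
    return sum(float(row[column]) for row in reader)

# Hypothetical export; real column names come from the report itself.
export = """Resource,Read MB,Write MB
SANDisk-00001,120.5,80.0
SANDisk-00002,64.0,32.5
"""
print(total_column(export, "Read MB"))   # 184.5
print(total_column(export, "Write MB"))  # 112.5
```

A tab-delimited (.txt) export can be read the same way by passing delimiter="\t" to DictReader.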

CDP Event Log

An Event Log is maintained to record system events and errors. In addition, the
appliance maintains performance data on the individual physical storage devices
and SAN Resources. This data can be filtered to produce various reports through
the FalconStor Management Console.
The Event Log details significant occurrences during the operation of the storage
server. The Event Log can be viewed in the FalconStor Management Console when
you highlight a Server in the tree and select the Event Log tab in the right pane.
The columns displayed are:

Type - I: This is an informational message. No action is required.
W: This is a warning message that states that something occurred that
may require maintenance or corrective action. However, the storage
server system is still operational.
E: This is an error indicating that a failure has occurred: a resource is
not available, an operation has failed, or there is a licensing violation.
Corrective action should be taken to resolve the cause of the error.
C: This is a critical error that stops the system from operating properly.
You will be alerted to all critical errors when you log into the server from
the console.

Date - The date on which the event occurred.

Time - The time at which the event occurred.

ID - This is the message number.


Event Message - This is a text description of the event describing what has
occurred.

When you initially view the Event Log, all information is displayed in chronological
order (most recent at the top). If you want to reverse the order (oldest at top) or
change the way the information is displayed, you can click on a column heading to
re-sort the information. For example, if you click on the ID heading, you can sort the
events numerically. This can help you identify how often a particular event occurs.
By default, all informational system messages, warnings, and errors are displayed.
To filter the information that is displayed, right-click on a Server and select Event Log
--> Filter.

In the filter dialog, you can:
• Select which message types you want to include.
• Select a category of messages to display.
• Search for records that contain/do not contain specific text.
• Specify the maximum number of lines to display.
• Select a time or date range for messages.

You can refresh the current Event Log display by right-clicking on the Server and
selecting Event Log --> Refresh.
You can print the Event Log to a printer or save it as a text file. These options are
available (once you have displayed the Event Log) when you right-click on the
Server and select the Event Log options.
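Because the saved Event Log is plain text and each record carries its type code (I, W, E, or C), it is also easy to filter outside the console. A sketch, assuming each saved line begins with the type code (verify against your own export):

```python
def filter_events(lines, types=("E", "C")):
    """Keep only events whose leading type code is in `types`."""
    keep = []
    for line in lines:
        fields = line.split(None, 1)
        if fields and fields[0] in types:
            keep.append(line)
    return keep

saved_log = [
    "I 2010-09-10 10:01:02 1000 Server started",
    "W 2010-09-10 10:05:00 2001 Mirror is out of sync",
    "E 2010-09-10 10:06:30 3002 Replication failed",
]
print(filter_events(saved_log))  # only the 'E' record survives
```

A filter like this could feed an alerting script that watches only for E and C records.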


CDP Reports
The FalconStor reporting feature includes many useful reports including allocation,
usage, configuration, and throughput reports. A description of each report follows.
• Client Throughput Report - displays the amount of data read/written
between this Client and SAN Resource. To see information for a different
SAN Resource, select a different Resource Name from the drop-down box
in the lower right hand corner.
• Delta Replication Status Report - displays information about replication
activity, including compression, encryption, microscan and protocol. It
provides a centralized view for displaying real-time replication status for all
disks enabled for replication. It can be generated for an individual disk,
multiple disks, source server or target server, for any range of dates. This
report is useful for administrators managing multiple servers that either
replicate data or are the recipients of replicated data.
• Disk Space Usage Report - displays the amount of disk space being used
by each SCSI adapter.
• Disk Usage History Report - allows you to create a custom report from the
statistical history information collected. You must have “statistic log” enabled
to generate this report. The data is logged once a day at a specified time.
The data collected is a representative sample of the day. In addition, if
servers are set up as a failover pair, the “Disk usage history” log must be
enabled on both servers in order for data to be logged during failover. In
a failover state, the data logging time set on the secondary server is
followed.
• Fibre Channel Configuration Report - displays information about each
Fibre Channel adapter, including type, WWPN, mode (initiator vs. target),
and a list of all WWPNs with client information.
• Physical Resources Configuration Report - lists all of the physical
resources on this Server, including each physical adapter and physical
device. To make this report more meaningful, you can rename the physical
adapter. For example, instead of using the default name, you can use a
name such as “Target Port A”.
• Physical Resources Allocation Report - displays the disk space usage
and layout for each physical device.
• Physical Resource Allocation Report - displays the disk space usage and
layout for a specific physical device.
• Resource IO Activity Report - displays the input and output activity of
selected resources. The report options and filters allow you to select the
SAN Resource and Client to report on within a particular date/time range.
• SCSI Channel Throughput Report - displays the data going through each
SCSI channel on the Server. This report can be used to determine which
SCSI bus is heavily utilized and/or which bus is under utilized. If a particular
bus is too heavily utilized, it may be possible to move one or more devices
to a different or new SCSI adapter. If a SCSI adapter has multiple channels,
each channel is measured independently.


• SCSI Device Throughput Report - displays the utilization of the physical
SCSI storage device on the Server. This report can show if a particular
device is heavily utilized or under utilized.
• SAN Client Usage Distribution Report - displays the amount of data read by
Clients of the current Server.
• SAN Client/Resources Allocation Report - displays information about the
resources assigned to each Client selected, including disk space assigned,
type of access, and breakdown of physical resources.
• SAN Resources Allocation Report - displays information about the
resources assigned to each Client, including disk space assigned, type of
access, and breakdown of physical resources.
• SAN Resource Usage Distribution Report - displays the amount of read/
write data for each SAN Resource.
• Server Throughput and Filtered Server Throughput Report - displays the
overall throughput of the Server. The Filtered Server Throughput Report
takes a subset of Clients and/or SAN Resources and displays the
throughput of that subset.
• Storage Pool Configuration Report - displays detailed Storage Pool
information.
• User Quota Usage Report - displays a detailed description of the amount of
space used by each of the resources from the selected users on the current
server.
• Global replication reports - while you can run a replication report for a
single server from the Reports object, you can also run a global report for
multiple servers from the Servers object.
From the Servers object, you can also create a report for a single server, consolidate
existing reports from multiple servers, and create a template for future reports.

Global replication reports

You can run a global replication report in CDP by highlighting the Servers object and
selecting Replication Status Reports --> New. Then follow the instructions in the
Report wizard. For additional information, refer to the Reports chapter in the CDP
Reference Guide.

Data Recovery

Once you have protected a disk or partition, you can restore data either to your
original disk or to another disk. The best method to use depends on your restore
objectives.
This chapter discusses data recovery using DiskSafe for Windows and DiskSafe for
Linux. For additional details regarding DiskSafe, refer to the DiskSafe User Guide.
Available recovery methods include the following:
• Restore selected folders or files
If you are using snapshots and accidentally deleted a folder or file that you
need, or if you want to retrieve some older information from a file that has
changed, you can access the snapshot that contains the desired data and
copy it to your local disk.
This procedure can also be used to try different “what if” scenarios—for
example, changing the format of the data in a file—without adversely
affecting the data on your local disk.
• Restore an entire local data disk or partition
If you protected a data disk or partition—that is, a disk or partition that is not
being used to boot the host and has no page file on it—and your system hasn’t
failed, you can restore that disk or partition using DiskSafe. You might need
to do this if the disk has become corrupted or the data has been extensively
damaged. The entire disk or partition will be restored from the mirror or a
snapshot, and can be restored to either your original disk or another disk.
This technique can also be used to copy a system disk or partition to
another disk as long as it is not a disk from which you are currently booting.
You can continue to use your computer while the data is being restored,
although you cannot use any applications or files located on the disk or
partition being restored.
Keep in mind that when you restore a local disk or partition to a new disk,
the protection policy refers to the new disk instead of the original local disk.
• Restore an entire local system disk or partition
If you need to restore your system disk or partition—that is, the disk you
typically boot from—you can do so using the Recovery CD. This is
particularly useful if the hard disk or operating system has failed and been
repaired or replaced. The entire disk or partition will be restored from either
the mirror or from a selected snapshot, and can be restored to either your
original disk or another disk. However, you won’t be able to use your
computer until all the data is restored.


In addition to allowing you to restore data, DiskSafe also enables you to boot from a
remote mirror or snapshot and continue working while your failed hard disk is being
repaired or replaced. Once the hard disk is available again, you can restore your
data using either DiskSafe or the Recovery CD. For more information, refer to
‘Accessing data after system failure’ on page 142.

Note: If you are using a remote mirror with the Fibre Channel protocol, and the
hard disk or operating system fails, you must remotely boot the host using your
Fibre Channel HBA and then restore the data using DiskSafe. The Recovery CD
does not currently support the Fibre Channel protocol. For more information, refer
to ‘Accessing data after system failure’ on page 142.

Caution: Do not restore a protected remote virtual disk. This can adversely
affect the storage server’s performance.

Restore data using DiskSafe for Windows


Using the DiskSafe application, you can restore either a local mirror or a remote
mirror of a protected data disk to the original hard disk or another disk, and you can
restore a system disk to another disk.
The only limitations of restoring data using the DiskSafe application are that you
cannot restore data when the system is not operating properly (that is, when the
hard disk or operating system has failed), and you cannot restore a system disk to
the original hard disk. In addition, you can restore a disk or partition only when
no other DiskSafe operation (such as synchronization, data analysis, or a
snapshot) is in progress.

Note: If you restore the data partition before the system partition has completed
initial synchronization, a warning message displays after restarting to alert you
that the disk needs to be checked. This warning appears every time a disk is not
consistent with the file system. You can click Ignore to bypass the system check.

Restore a file

1. Expand DiskSafe --> Protected Storage and then click Disks.

2. In the right pane, right-click the disk or partition that you want to restore, and
then click Restore.
The DiskSafe Restore Wizard launches to guide you through the restore
process.

3. Click Next to begin restoration.

4. Select File to restore a file from a backup on your storage server and click Next.
5. Select the snapshot from which you want to restore your file and click Next and
then Finish.
The snapshot is mounted to the local file system and assigned a new drive letter,
allowing you to select the file to restore.

Restore a disk or partition

To restore a disk or partition:

1. Expand DiskSafe --> Protected Storage and then click Disks or Groups.
If the disk or partition whose data you want to restore is part of a group, expand
Groups and click the group that contains the disk or partition that you want to
restore.

2. In the right pane, right-click the disk or partition that you want to restore, and
then click Restore.
The DiskSafe Restore Wizard launches to guide you through the restore
process.

3. Click Next to begin restoration.

4. Select Disk or Partition to restore a disk or partition from a backup on your
storage server and click Next.

5. Select the Mirror image or snapshot from which you want to restore and click
Next.

6. Select the destination disk.


You can click the Refresh button to update the disk list or the Advanced button
for the following advanced restore options:
• Restore disk signature/GUID
Check this option to restore the disk signature/GUID of the original
primary disk. This identifies the new disk as the original primary disk to
the operating system. It is necessary to check this option if you are
replacing the original disk with a new disk. This option is disabled when the
primary disk is online, and is enabled when the primary disk has been
removed or disabled.
• Force full restore
Check this option to force a full disk restoration sector by sector instead of
using the compare feature to restore changed data. This option is used to
save time by eliminating the data comparison feature when restoring to an
empty disk. Conversely, you can uncheck this option to compare the two
disks and restore only the data that differs. This method reduces recovery
time when restoring to the original disk or partition.

7. Click Next and Finish.
A progress window displays as data from the mirror or snapshot is copied to the
specified location.
You can cancel this operation by clicking Cancel. However, this will leave the
disk or partition in an incomplete state, and you will have to restore it again
before you can use it.
Once complete, a screen displays indicating a successful or failed restore.

8. If you are restoring a dynamic volume that spans multiple disks, repeat the
above steps for each affected disk.
Make sure that no data is written to the dynamic volume while you are restoring
each disk.

9. If you have finished restoring, click OK.


Data from the mirror or snapshot is copied to the specified location.

10. Restart the host.


• If a drive letter is not automatically assigned to the restored disk or partition,
use Disk Management to assign one. (The restored disk or partition might
have a different number than it had previously.)
• If you restored a dynamic disk, use Disk Management to make each disk
active again.
• If you restored a dynamic disk from a remote mirror, use your storage server
software to re-assign the mirror to the host.
• If you restored a disk or partition on Windows 2000, it is strongly
recommended that you reboot the system.
DiskSafe protection will continue automatically.

Restoring group members

You cannot restore an entire group; however, you can restore each group member
individually. If the disk or partition whose data you want to restore is part of a group,
expand Groups and click the group that contains the disk or partition that you want
to restore. Only snapshots from the selected group display on the snapshot list.
When you restore any member of a group, protection for the group continues
automatically. The group member being restored automatically leaves the group.
You will need to make sure the mirror disks from the storage server are consistent
with the client.
Restore data using DiskSafe for Linux


You can restore a file by mounting a snapshot. When you mount a snapshot, a
separate, virtual disk is created. The mounted snapshot is an exact image of the
mirror as it existed when the snapshot was taken. Since a mounted snapshot is
simply a representation of the current mirror plus the changed data in the snapshot
area, it does not require any additional disk space.
A mounted snapshot is not intended to be a working disk. Any changes made to a
mounted snapshot are lost as soon as the snapshot is dismounted. However, you
can use a mounted snapshot to restore individual files that have been damaged or
deleted, perform “what if” scenarios or other operations without affecting your
production data, or review the mounted snapshot to see if it’s an image that you
want to restore by rolling back a snapshot.

Mount a snapshot

A snapshot can be mounted using its timestamp, creating a TimeView of the
snapshot and assigning the TimeView to the SAN client. If you do not see the
device name in the disk list, refresh the disk list to display the mounted
snapshot.
Syntax:
dscli snapshot mount <DiskID> timestamp=<#>
Example:
# dscli snapshot mount sdc timestamp=1219159421
Unmounting a snapshot

You can unmount the snapshot using its timestamp. The corresponding TimeView is
unassigned and deleted.
Syntax:
dscli snapshot unmount <DiskID> timestamp=<#>
Example:
# dscli snapshot unmount sdc timestamp=1219159421
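The timestamps passed to the dscli snapshot commands are Unix epoch values. A quick way to confirm which point in time a given timestamp represents is to convert it before mounting (a sketch assuming GNU date is available; the timestamp is the one from the examples above):

```shell
# Convert a dscli epoch timestamp to a readable UTC date (requires GNU date).
ts=1219159421
date -u -d "@${ts}" '+%Y-%m-%d %H:%M:%S'
# prints 2008-08-19 15:23:41
```

Checking the date first helps ensure you mount the snapshot you intend to restore from.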

Restore a disk

This command is used to restore from a mirror disk to a primary disk or new target
disk. You can perform a complete restore or a restore to a particular snapshot using
a timestamp. The target disk list can be found using the “restoreeligible” option of
the disk list command.
Syntax:
dscli disk restore <MirrorDiskID> <TargetDiskID>
[timestamp=<#>][-force]
Example:
# dscli disk restore sdb sdd

Note: The force option is used when restoring partition protection to a new disk
that has not been previously partitioned and which has a file system on it.
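Because a disk restore overwrites the target, one cautious pattern is to assemble the command, review it, and only then execute it. A minimal sketch (the disk IDs and timestamp are illustrative; the commented-out dscli call is the one documented above):

```shell
# Assemble the restore command, review it, then run it deliberately.
MIRROR=sdb          # mirror disk ID (illustrative)
TARGET=sdd          # target found via the "restoreeligible" option of the disk list command
TS=1219159421       # optional snapshot timestamp
CMD="dscli disk restore ${MIRROR} ${TARGET} timestamp=${TS}"
echo "About to run: ${CMD}"
# eval "${CMD}"     # uncomment only after verifying the disk IDs
```

Reviewing the echoed command before executing it protects against restoring over the wrong target disk.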

Group Restore

Refer to the “Restore a disk” command. The members of a group must leave the
group and be restored individually.

Group Rollback

The group rollback command is used to roll back the primary disks in the group to a
selected snapshot. A rollback to the selected snapshot is done on each mirror disk
and subsequently a full restore is performed from the mirror disk to the primary disk.
Protections are resumed automatically after a successful rollback.
Syntax:
dscli group rollback <groupname> timestamp=<#>

Example:
# dscli group rollback dsgroup1 timestamp=1224479576
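Since a group rollback is irreversible, a small wrapper that demands explicit confirmation before invoking the documented command can guard against accidents. A sketch (the group name and timestamp in the usage line are illustrative; dscli must be on the PATH when the rollback actually runs):

```shell
# Prompt before running the destructive group rollback; abort unless "yes" is typed.
confirm_group_rollback() {
  group=$1
  ts=$2
  printf 'Roll back group %s to timestamp %s? Type yes to continue: ' "$group" "$ts"
  read -r answer
  if [ "$answer" != "yes" ]; then
    echo "aborted"
    return 1
  fi
  dscli group rollback "$group" timestamp="$ts"
}
```

For example, `echo no | confirm_group_rollback dsgroup1 1224479576` prints "aborted" and leaves the group untouched.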
Stop Group Rollback

The stop group rollback command is used to stop an active rollback. This command
must be used with caution, as the primary disk will be left in an inconsistent state.
Syntax:
dscli group stoprollback <groupname>
Example:
# dscli group stoprollback dsgroup1

Recovery CD
When you cannot start your Windows computer, another option is to use the
FalconStor DiskSafe Recovery CD to restore a disk or partition. You can obtain the
Recovery CD from Technical Support in the event of a computer disaster.
Using the Recovery CD, you can restore data using a recovery point from within the
FalconStor DSRecPE Recovery Wizard. You can restore both your system disk and
data disks, and you can restore them to the original hard disk or another disk. You
can restore either the mirror itself or a snapshot (i.e. a point-in-time image) of the
data.
You can also perform device management, network configuration, or access the
command console. The Device Management option allows you to load any device
driver (with an .inf extension). The Network Configuration option allows you to set up
your Ethernet adapter configuration.
The only limitations of the Recovery CD are that you cannot use it to restore data
from a local disk or with Fibre Channel connections.

Set the Recovery CD password

It is recommended that you set a Recovery CD password before taking any
snapshots in case you need to use this feature. To set the recovery password:

1. Launch DiskSafe

2. Right-click on the DiskSafe root node

3. Set the Recovery CD password.

Restore a disk or partition

Make sure your computer is configured to boot from the CD-ROM drive. If you are
restoring the system partition, it is recommended that you restore to the original
partition ID. If the disk is protected with encryption, un-mount any snapshots before
using the Recovery CD. It is also recommended that you restore to similar media
(i.e., IDE to IDE or SATA to SATA). If you need to start your computer using this
recovery tool, follow the instructions below:
1. Turn on your computer

2. Insert the Recovery CD into the CD-ROM drive.

3. Restart the computer.

4. While the computer is starting, watch the bottom of the screen for a prompt that
tells you how to access the BIOS. Generally, you will need to press Del, F1, F2,
or F10.

5. From the BIOS screen, choose the Boot menu.

Note: The term boot refers to the location where software required to start the
computer is stored. The Recovery CD contains a simple version of the
Windows operating system. By changing the boot sequence of your
computer to your CD drive, the computer can then load this version of
Windows. Boot is also used synonymously with start.

6. Change the CD or DVD drive to be the first bootable device on the list.

7. Save the changes and exit the BIOS setup.

8. As soon as you see the prompt Press any key to boot from CD appear, press a
key to start the Recovery CD.
Once you successfully restart your system, an End User License Agreement
displays.

9. Accept the end user license agreement to launch the Recovery CD.

Note: If you do not accept the license agreement, the system reboots.

To restore a disk or partition using the Recovery CD, select the Recovery Wizard
option. The Recovery Wizard guides you through recovering your data from a
remote storage server. You will be asked to select the remote storage server on
which your disk image is located along with any snapshots. You will then be able to
select the local disk or partition to use as your recovery destination.
1. Connect to your storage server.

Enter your storage server IP address, client name (i.e., computer name), and
recovery password, and then click Connect.
After you have successfully connected to the storage server, the selection
screen displays with the available source and destination disks.

2. Select the source and destination for data recovery.

If not all disks are displayed, click the Rescan Disk button to refresh the list. You
can also click the Create Partition button to manage your partition layout.

3. Once you have selected the source and destination pair, click Restore.

Note: For Windows Vista and 2008: If you have previously flipped the disk
signature for the current mirror, you may need to insert the Windows Vista or
2008 operating system CD after restoring the system via the DiskSafe Recovery
CD to repair the system before boot up.
All selected pairs will be restored in the sequence selected via the Clone Agent.
Accessing data after system failure


DiskSafe is specifically designed as a Disaster Recovery (DR) solution, allowing you
to easily restore your protected data after a disaster. However, if the hard disk fails,
you might need to access the data on the mirror or TimeView while you are waiting
for the hard disk to be repaired or replaced. You can also remotely boot the mirror or
TimeView image without an HBA through BootIP.
If you are using a remote mirror or TimeView and simply need to access files, you
can assign the mirror to another host that has the appropriate applications installed.
For more information, refer to the documentation for your storage server.
Alternatively, you can remotely boot the failed host from either the mirror itself or a
snapshot of that mirror on the storage server. (If you’re using PXE, you can remotely
boot only from a snapshot, not the mirror itself.)

Notes:
• Certain combinations of HBAs and controllers do not support booting
remotely. For more information, refer to the DiskSafe support page on
www.falconstor.com.
• If Windows wasn’t installed on the first partition of the first disk in the
system, you can remotely boot only if you protected the entire first disk in
the system. Although Windows might reside on other disks or partitions,
certain files required for booting reside only on the first partition of the
first disk.
• Booting from a snapshot rather than the mirror itself is recommended
when booting using a Fibre Channel HBA, as the image will be complete
and intact. If the system failure occurred during synchronization, the
mirror might not be a complete, stable image of the disk.

When the failed hard disk is repaired or replaced, you can either restore all the data
to it using the Recovery CD (as described in ‘Recovery CD’ on page 137), or you
can run DiskSafe while remotely booting and restore all the data using that
application.

Caution: When you boot remotely, do not use DiskSafe for any operation other
than restoring.
Booting remotely using PXE


To remotely boot using PXE, the host must support the PXE 2.0 protocol. To
determine if the host supports this protocol, check the system BIOS. The boot
device is typically listed as Intel Boot Agent..., IBA... or PXE Agent....
To remotely boot using PXE:

1. Start the host.

2. Use the appropriate procedure for your system to remotely boot using PXE.
For example, you might press F12 when the boot menu appears.

3. When prompted, press F8 and select the remote disk option.


The host automatically boots from the snapshot specified on the storage server.
For more information, refer to the documentation for your storage server.

4. If any error messages appear, click OK.

5. Log in as you normally would.


The message Network Boot Mode appears on the screen to confirm that you
are working from the storage server.
You can now continue normal operations.
Booting remotely using an HBA


To remotely boot using an iSCSI or Fibre Channel HBA:

1. If you plan to boot from a snapshot, mount the snapshot and assign it to the host
using the storage server software.

Note: To boot remotely from Windows Vista or 2008, you must switch the disk
signature by running the following CLI command on the storage server for the
mirror or TimeView disk prior to boot (and when the host is powered off):
iscli setvdevsignature -s <server-name> -v <vdevid> -F

2. At the host, physically disconnect the failed hard disk from the system.
For more information, refer to the documentation for your system.

3. Boot the host using the HBA, and then use the appropriate procedure for your
HBA to connect to the mounted snapshot or mirror on the storage server.
For more information, refer to the documentation for your HBA.

4. Restart the host and remotely boot again.


This ensures that the operating system is stable and you can work with or
restore the data on the mounted snapshot or mirror.

5. If you protected other disks or partitions in addition to the system disk, assign
drive letters to those disks or partitions.

Notes:
• If you boot from a mounted snapshot, do not dismount that snapshot
either via the storage server or by removing protection for the disk via
DiskSafe and then clicking Yes when prompted to dismount any
mounted snapshots. If you do, your system will no longer function,
and you will have to repeat this procedure in order to boot from the
storage server once more.
• In Windows Vista and 2008, do not keep the local system disk and the
mounted snapshot attached together during boot up. Otherwise, you
may not be able to remotely boot again.
Restoring a disk or partition while booting remotely


Once your failed disk has been repaired or replaced, follow these steps to restore it:

1. Shut down the host and install the repaired or replaced hard disk.

Notes:
• If you replaced the original hard disk, the new disk must be the same
size as or larger than the mirror.
• If you are restoring a system disk, the system to which you are
restoring the data must be identical to the original system. For
example, if the original system had a particular type of network
adapter, the system to which you are restoring the data must have the
exact same type of network adapter. Otherwise, the restored files will
not operate properly.
• In Windows Vista and 2008, format the hard disk before installing it.

2. Boot remotely from the mirror or a mounted snapshot.

3. Run DiskSafe and restore the protected data (as described in ‘Restore data
using DiskSafe for Windows’ on page 132).
If you need to restore the whole system to the point-in-time snapshot, run
DiskSafe and restore the data. If you need to restore the whole system that is
currently running in remote boot, remove the existing system protection, and
then create a new protection. The primary will be the disk that is currently
booting up and the mirror is the local hard disk.

4. After the recovery is complete, shut down the host and then use your storage
server software to unassign the mirror from the host.
For more information, refer to the CDP Reference Guide. If you don’t have
access to the storage server, contact your system administrator.

Note: If you are using Vista or Windows 2008, you will need to remotely boot
from a TimeView, and then restore from a snapshot to the new or original disk
with the Restore disk signature/GUID option checked. Otherwise, after
restoration, the system will not boot.

5. Start the host, go to the BIOS and disable boot from HBA.
For more information, refer to the documentation for your HBA.

6. Start the host, start DiskSafe, remove protection for the disk or partition that you
just restored, and then shut down the host again.

Note: After starting the host, if you are prompted to restart it, do so before
starting DiskSafe.

7. At the storage server, assign the mirror to the host again.
8. Start the host, start DiskSafe, and protect the disk or partition once more (as
described in ‘Protect a disk or partition with DiskSafe’ on page 29), using the
existing mirror on the storage server as the mirror once again.
Recover AIX Logical Volumes


This section describes how you can use the FalconStor CDP solution to recover AIX
logical volumes in the case of primary disk errors. There are two methods to recover
your data: TimeMark Rollback and TimeViews.
Scenario 1: In the event of a catastrophic logical volume error due to primary disk
outage (physical errors), the AIX LVM logic will automatically switch off the primary
disk, and the FalconStor CDP mirror LUNs will stand-in for the failed disks without
any application downtime. When the failed primary disk is repaired, simply re-
activate the primary disks. This triggers a reverse synchronization of the mirrors
back to the primary.
Use the following commands to restore after a physical disk failure:

1. Deactivate the failed primary disk.


pvchange -a n /dev/dsk/<raw device path>

2. Replace the failed primary disk.

3. Restore the LVM Volume Group information from the backup.


vgcfgrestore -n <volume group name> <raw device path>

4. Reactivate the Volume Group to resync data.


vgchange -a y <volume group name>

Scenario 2: In the event of a catastrophic logical volume error that results in total
volume data loss, the TimeMark Rollback method should be used. Generally, this
scenario is used when it is decided that the current primary logical volume is
useless, and a full "restore" is necessary to reset it back to a known good point-in-
time.
Scenario 3: In the event of a minor data loss, such as inadvertent deletion of a file
or directory, it is NOT desirable to use TimeMark Rollback because all of the "good
changes" are also rolled back. Therefore, for this case, it is more desirable to create
a virtual view of the protected Volume Group as of a known good point-in-time and
then mount the logical volume in this view in order to copy back the deleted file(s).
This virtual view is called a TimeView, which is created using a specified TimeMark.
This TimeView is an exact representation of the entire Volume Group, and contains
every logical volume inside the group. The Volume Group name of the TimeView is
identical to the primary Volume Group except with a "_tv" appended to the name.
Once the data has been copied back, the TimeView is discarded. Because the
TimeView is virtual, there is no massive copying of data (no extra storage is
required) and the time to mount the TimeView Volume Group is fast.
Recover a Volume Group using TimeMark Rollback for LVM

1. Run the recover_vg script to recover a volume group using the FalconStor CDP
rollback feature. Ex. "recover_vg <IP address> -rb vg01"

2. Type y to confirm that you want to continue.


Caution: This is an irreversible process that will destroy any existing data on the
LUN.

3. When prompted, select the TimeMark timestamp from the list.


The script will automatically break the mirror(s), roll back the FalconStor CDP
LUN, and then recreate/sync the mirror for one or more logical volumes.
An example for AIX LVM Volume Group Rollback is shown below:
• # recover_vg <IP address> -rb vg01
• This is a destructive process. Do you want to continue? (y)es or (n)o. y
• ### TIMEMARK TIMESTAMP FOR VOLUME GROUP vg01 ###
• 20080703125230 (07/03/2008 12:52:30) 64.00 KB valid
• 20080703125238 (07/03/2008 12:52:38) 68.00 KB valid
• 20080703125921 (07/03/2008 12:59:21) 64.00 KB valid
• Enter TimeMark timestamp to rollback etlaix2-vg01-Protection
• 20080703125921
• Unmounting /dev/fslv00...
• Unmounting /dev/fslv01...
• Removing Mirror for Volume Group vg01...
• Unassigning Virtual Disk etlaix2-vg01-Protection from etlaix2...
• Rolling back etlaix2-vg01-Protection to timestamp (07/03/2008
12:59:21)...
• Assigning Virtual Disk etlaix2-vg01-Protection to etlaix2...
• Re-creating Mirror for Volume Group vg01...
• Running fsck on /dev/fslv00...
• Running fsck on /dev/fslv01...
• Mounting /dev/fslv00 to /mnt/0...
• Mounting /dev/fslv01 to /mnt/1...
• Synchronizing Volume Group vg01.....:
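The TimeMark timestamps that the recovery scripts prompt for are encoded as YYYYMMDDhhmmss. A small sketch to render one readably before entering it at the prompt (the timestamp is taken from the transcript above):

```shell
# Decode a YYYYMMDDhhmmss TimeMark timestamp into a readable date and time.
ts=20080703125921
echo "$ts" | sed -E 's/^(....)(..)(..)(..)(..)(..)$/\1-\2-\3 \4:\5:\6/'
# prints 2008-07-03 12:59:21
```

Decoding the value first makes it easier to verify you are rolling back to the intended point in time.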
Recover a Logical Volume using TimeMark Rollback for AIX LVM

1. Run the recover_lv script to recover a logical volume using the FalconStor CDP
rollback feature. Ex. "recover_lv <IP address> -rb fslv05"

2. Type y to confirm that you want to continue.


This is a destructive process that will destroy any existing data on the LUN.

3. When prompted, select the TimeMark timestamp from the list.


The script will automatically break the mirror(s), roll back the FalconStor CDP
LUN, and then recreate/sync the mirror for one or more logical volumes.
An example for AIX LVM Logical Volume Rollback is shown below:
• # recover_lv <IP address> -rb fslv05
• This is a destructive process. Do you want to continue? (y)es or (n)o. y
• ### TIMEMARK TIMESTAMP FOR LOGICAL VOLUME fslv05 ###
• 20081215191203 (12/15/2008 19:12:03) 64.00 KB valid
• 20081215191705 (12/15/2008 19:17:05) 64.00 KB valid
• 20081215192000 (12/15/2008 19:20:00) 64.00 KB valid
• Enter TimeMark timestamp to rollback etlaix3-vg01-Protection
• 20081215192000
• Unmounting Logical Volume /dev/fslv05...
• Removing Primary Mirror for Logical Volume fslv05...
• Removing Primary Mirror for Logical Volume loglv02...
• Unassigning Virtual Disk etlaix3-vg01-Protection vid 457 from etlaix3...
• Rolling back etlaix3-vg01-Protection to timestamp (12/15/2008
19:20:00)...
• Assigning Virtual Disk etlaix3-vg01-Protection to etlaix3...
• Creating Mirror for Logical Volume fslv05 on Primary Disk hdisk0...
• Creating Mirror for Logical Volume loglv02 on Primary Disk hdisk1...
• Running fsck on Logical Volume /dev/fslv05...
• Mounting Logical Volume /dev/fslv05 to /mnt/0...
• Synchronizing Volume Group vg01...
Recover a Volume Group using TimeMark Rollback for AIX HACMP LVM

1. Run the recover_vg_ha script to recover a volume group using the FalconStor
CDP rollback feature. Ex. "recover_vg_ha <IP address> <Other Node> -rb
sharevg_01"

2. Type y to confirm that you want to continue.


This is a destructive process that will destroy any existing data on the LUN.

3. When prompted, select the TimeMark timestamp from the list.


The script will automatically break the mirror(s), roll back the FalconStor CDP
LUN, and then recreate/sync the mirror for one or more logical volumes.
An example for AIX HACMP Shared Volume Group recovery is shown below:

# recover_vg_ha 192.168.15.96 vaix3 -rb sharevg_01


• This is a destructive process. Do you want to continue? (y)es or (n)o. y
• ### TIMEMARK TIMESTAMP FOR VOLUME GROUP sharevg_01 ###
• 20081016220547 (10/16/2008 22:05:47) 64.00 KB valid test 1
• 20081016220555 (10/16/2008 22:05:55) 64.00 KB valid test 2
• 20081016220601 (10/16/2008 22:06:01) 64.00 KB valid test 3
• Enter TimeMark timestamp to rollback Terayon-sharevg_01-Protection
• 20081016220601
• Unmounting /dev/lv00 on Node vaix3...
• Removing Mirror for Volume Group sharevg_01 on Node vaix3...
• Unassigning Virtual Disk Terayon-sharevg_01-Protection vid 261 from
vaix2...
• Unassigning Virtual Disk Terayon-sharevg_01-Protection vid 261 from
vaix3...
• Rolling back Terayon-sharevg_01-Protection to timestamp (10/16/2008
22:06:01)...
• Assigning Virtual Disk Terayon-sharevg_01-Protection to vaix2...
• Assigning Virtual Disk Terayon-sharevg_01-Protection to vaix3...
• Re-creating Mirror for Volume Group sharevg_01 on Node vaix3...
• Running fsck on /dev/lv00 on Node vaix3...
• Mounting /dev/lv00 to /mnt/0 on Node vaix3...
• Synchronizing Volume Group sharevg_01 on Node vaix3....
Recover a Volume Group using a TimeView (recover_vg) for AIX LVM

1. Run the recover_vg script to recover a volume group using the FalconStor CDP
TimeMark feature. Ex. "recover_vg <IP address> -tv vg01"

2. Select the TimeMark timestamp for the TimeView to be created. (See the usage
example below.)
Usage example of the recover_vg script using a TimeView for AIX LVM:

# recover_vg 10.6.7.240 -tv vg01


• Cleaning Up Offline Disk...
• ### AVAILABLE TIMEVIEW TIMESTAMP FOR VOLUME GROUP
vg01 ###
• 20081224124544 (12/24/2008 12:45:44) 64.00 KB valid
• 20081224124549 (12/24/2008 12:45:49) 64.00 KB valid
• Enter TimeMark timestamp for TimeView on etlaix2-vg01-Protection
20081224150401
• Creating TimeView etlaix2-vg01-Protection with timestamp (12/24/2008
15:04:01)...
• Assigning Virtual Disk etlaix2-vg01-Protection vid 451 TO etlaix2...
• Creating Volume Group for TimeView etlaix2-vg01 tv1-Protection on
hdisk11...
• Running fsck on /dev/tv1/lvol1
• Running fsck on /dev/tv1/lvol2
• Running fsck on /dev/tv1/lvol3

3. Mount the Volume Group vg01 TimeView to a mount point.


mount /dev/tv1_lv01 <mount point>
mount /dev/tv1_lv02 <mount point>
mount /dev/tv1_lv03 <mount point>

You may now verify the contents of each mount point to confirm the data is valid.
Recover a Logical Volume using a TimeView (recover_lv) for AIX LVM

1. Run the recover_lv script to recover a logical volume using the FalconStor CDP
TimeMark feature. Ex. "recover_lv <IP address> -tv fslv05"

2. Select the TimeMark timestamp for the TimeView to be created.


The script will create a TimeView from the selected TimeMark and assign it to
the host. (See the usage example below.)
An example of the recover_lv script to create a TimeView for the AIX Logical
Volume:
• # recover_lv <IP address> -tv fslv05
• ### AVAILABLE TIMEMARK TIMESTAMP FOR LOGICAL VOLUME
fslv05 ###
• 20081215191203 (12/15/2008 19:12:03) 64.00 KB valid
• 20081215191705 (12/15/2008 19:17:05) 64.00 KB valid
• 20081215192000 (12/15/2008 19:20:00) 2.14 MB valid
• 20081215193005 (12/15/2008 19:30:05) 64.00 KB valid
• 20081215194000 (12/15/2008 19:40:00) 64.00 KB valid
• Enter TimeMark timestamp for TimeView on etlaix3-vg01-Protection
• 20081215194000
• Creating TimeView etlaix3-vg01 tv1-Protection with timestamp (12/15/
2008 19:40:00)...
• Assigning Virtual Disk etlaix3-vg01 tv1-Protection vid to etlaix3...
• Creating Volume Group for TimeView etlaix3 -vg01 tv1-Protection on
hdisk3...
• Running fsck on Logical Volume /dev/tv1 fslv05...

3. Mount the logical volume TimeView to a mount point.


mount /dev/tv1_fslv05 <mount point>

You may now verify the contents of each mount point to confirm the data is valid.
Recover a Volume Group using a TimeView (recover_vg_ha) for HACMP LVM

1. Run the recover_vg_ha script to recover a volume group using the FalconStor
CDP TimeMark feature. Ex. "recover_vg_ha <IP address> vaix3 -tv sharevg_01"

2. Select the TimeMark timestamp for the TimeView to be created. (See the usage
example below.)
An example of the recover_vg_ha script using a TimeView for the AIX HACMP
Shared Volume Group:
# recover_vg_ha 192.168.15.96 vaix3 -tv sharevg_01
• ### AVAILABLE TIMEVIEW TIMESTAMP FOR VOLUME GROUP
sharevg_01 ###
• 20081016220547 (10/16/2008 22:05:47) 64.00 KB valid test 1
• 20081016220555 (10/16/2008 22:05:55) 64.00 KB valid test 2
• 20081016220601 (10/16/2008 22:06:01) 2.30 MB valid test 3
• 20081016221000 (10/16/2008 22:10:00) 64.00 KB valid
• Enter TimeMark timestamp for TimeView on Terayon-sharevg_01-Protection
• 20081016221000
• Creating TimeView Terayon-sharevg_01_tv1-Protection with timestamp (10/
16/2008 22:10:00)...
• Assigning Virtual Disk Terayon-sharevg_01_tv1-Protection vid 262 to vaix3...
• Rescanning DynaPath Devices on Node vaix3...
• Creating Volume Group for TimeView Terayon-sharevg_01_tv1-Protection on
hdisk7...
• Running fsck on /dev/tv1_lv00 on Node vaix3...

3. Mount the Volume Group sharevg_01 TimeView to a mount point.


mount /dev/tv1_lv00 <mount point>
You may now verify the contents of each mount point to confirm the data is valid.
Recover a Volume Group using a TimeView (mount_vg)

1. List the available TimeMark timestamps for the TimeView to be created.


mount_vg <IP Address> -ls vg01

Note: The script will only list the TimeMark timestamps that do not yet have a
TimeView.

Usage example of the mount_vg script to list timestamps:

# mount_vg <IP Address> -ls vg01


• ### AVAILABLE TIMEVIEW TIMESTAMP FOR VOLUME GROUP
vg01 ###
• 20080427210912 (04/27/2008 21:09:12) 64.00 KB valid
• 20080427210917 (04/27/2008 21:09:17) 64.00 KB valid
• 20080427210920 (04/27/2008 21:09:20) 64.00 KB valid
• 20080427210929 (04/27/2008 21:09:29) 111.00 KB valid
• 20080427234000 (04/27/2008 23:40:00) 64.00 KB valid
• 20080427235000 (04/27/2008 23:50:00) 1.45 GB valid
• 20080428000000 (04/28/2008 00:00:00) 123.00 KB valid
• 20080428002000 (04/28/2008 00:20:00) 64.00 KB valid

2. Create and mount a TimeView based on the timestamp provided.


mount_vg <IP Address> -mt vg01 20080427235000
Usage example of the mount_vg script to mount a TimeView:
# mount_vg <IP Address> -mt vg01 20080427210912
• Creating TimeView etlaix2-vg01_tv2-Protection with timestamp (04/
27/2008 21:09:12)...
• Assigning Virtual Disk etlaix2-vg01_tv2-Protection to etlaix2...
• Scanning for new disk which could take time depending on your
system...
• Importing vg01_tv2 with /dev/dsk/c7t0d3...

• Running Command: fsck -y /dev/vg01_tv2/lvol1...
• Running Command: fsck -y /dev/vg01_tv2/lvol2...
• Running Command: fsck -y /dev/vg01_tv2/lvol3...

• Mounting /dev/vg01_tv2/lvol1 to /mnt/vg01_tv2_lvol1...
• Mounting /dev/vg01_tv2/lvol2 to /mnt/vg01_tv2_lvol2...
• Mounting /dev/vg01_tv2/lvol3 to /mnt/vg01_tv2_lvol3...

You may now verify the contents of each mount point to confirm the data is valid.
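Because the timestamp tokens in these listings are plain YYYYMMDDhhmmss numbers, they sort chronologically as integers. A helper along the following lines can pick the newest valid timestamp out of a -ls listing for use with -mt. This is a sketch; latest_valid_timestamp is not a FalconStor script.

```shell
# Print the most recent timestamp flagged "valid" from a mount_vg -ls
# style listing read on stdin. Timestamps sort chronologically as plain
# numbers because they are formatted YYYYMMDDhhmmss.
latest_valid_timestamp() {
    awk 'BEGIN { ts = "" }
         /valid/ {
             for (i = 1; i <= NF; i++)
                 if ($i ~ /^[0-9]+$/ && length($i) == 14 && $i + 0 > ts + 0)
                     ts = $i
         }
         END { if (ts != "") print ts }'
}
```

For example: ts=$(mount_vg <IP Address> -ls vg01 | latest_valid_timestamp)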

Recover a Logical Volume using a TimeView (mount_lv) for AIX LVM

1. List the available TimeMark timestamps for the TimeView to be created.


mount_lv <IP Address> -ls fslv05

Note: The script will only list the TimeMark timestamps that do not yet have a
TimeView.

Usage example of the mount_lv script to list timestamps:


# mount_lv 10.6.7.236 -ls fslv05
• ### AVAILABLE TIMEVIEW TIMESTAMP FOR LOGICAL VOLUME
fslv05 ###
• 20081227210912 (12/27/2008 21:09:12) 64.00 KB valid
• 20081227210917 (12/27/2008 21:09:17) 64.00 KB valid
• 20081227210920 (12/27/2008 21:09:20) 64.00 KB valid
• 20081227210929 (12/27/2008 21:09:29) 111.00 KB valid
• 20081227234000 (12/27/2008 23:40:00) 64.00 KB valid
• 20081227235000 (12/27/2008 23:50:00) 1.45 GB valid
• 20081228000000 (12/28/2008 00:00:00) 123.00 KB valid
• 20081228002000 (12/28/2008 00:20:00) 64.00 KB valid

2. Create and mount a TimeView based on the timestamp provided.


mount_lv <IP Address> -mt fslv05 20081227235000
Usage example of the mount_lv script to mount a TimeView:

# mount_lv 10.6.7.236 -mt fslv05 20081227210912


• Creating TimeView etlaix3-vg01_tv2-Protection with timestamp (12/
27/2008 21:09:12)...
• Assigning Virtual Disk etlaix3-vg01_tv2-Protection vid 459 to
etlaix3...
• Creating Volume Group for TimeView etlaix3-vg01_tv2-Protection
on hdisk5...
• Running fsck on Logical Volume /dev/tv2_fslv05...
• Mounting Logical Volume /dev/tv2_fslv05 to /mnt/vg01_tv2_fslv05...

You may now verify the contents of each mount point to confirm the data is valid.

Recover an HACMP Shared Volume Group using a TimeView (mount_vg_ha)

1. List the available TimeMark timestamps for the TimeView to be created.


mount_vg_ha <IP Address> <other node> -ls sharevg_01

Note: The script will only list the TimeMark timestamps that do not yet have a
TimeView.

Usage example of the mount_vg_ha script to list timestamps:


# mount_vg_ha 192.168.15.96 vaix3 -ls sharevg_01
• ### AVAILABLE TIMEVIEW TIMESTAMP FOR VOLUME GROUP
sharevg_01 ###
• 20081016220547 (10/16/2008 22:05:47) 64.00 KB valid test 1
• 20081016220555 (10/16/2008 22:05:55) 64.00 KB valid test 2
• 20081016220601 (10/16/2008 22:06:01) 2.30 MB valid test 3
• 20081016222000 (10/16/2008 22:20:00) 64.00 KB valid

2. Create and mount a TimeView based on the timestamp provided.


mount_vg_ha <IP Address> vaix3 -mt sharevg_01 20081016222000
Usage example of the mount_vg_ha script to mount a TimeView for an AIX
HACMP Shared Volume Group:
# mount_vg_ha 192.168.15.96 vaix3 -mt sharevg_01 20081016222000
• Creating TimeView Terayon-sharevg_01_tv1-Protection with
timestamp (10/16/2008 22:20:00)...
• Assigning Virtual Disk Terayon-sharevg_01_tv1-Protection vid 262
to vaix3...
• Rescanning DynaPath Devices on Node vaix3...
• Creating Volume Group for TimeView Terayon-sharevg_01_tv1-
Protection on hdisk7...
• Running fsck on /dev/tv1_lv00 on Node vaix3...

You may now verify the contents of each mount point to confirm the data is valid.

Remove a TimeView Volume Group (umount_vg)

Let's assume we would like to clean up TimeView volume group vg00_tv2:


Unmount and remove TimeView volume group vg00_tv2.
umount_vg vg00_tv2
Usage example of the umount_vg script to clean up a TimeView:

# umount_vg vg00_tv2


Unmounting /mnt/vg00_tv2_lvol1...
Unmounting /mnt/vg00_tv2_lvol2...
Unmounting /mnt/vg00_tv2_lvol3...
Deactivating Volume Group vg00_tv2...
Exporting Volume Group vg00_tv2...
Unassigning TimeView Resource etlaix2-vg00_tv2-Protection from etlaix2...
Deleting TimeView Resource etlaix2-vg00_tv2-Protection...
Cleaning Up Offline Devices...

Remove a TimeView Logical Volume (umount_lv)

Let's assume we would like to clean up TimeView logical volume tv2_fslv05:


Unmount and remove TimeView logical volume tv2_fslv05.
umount_lv <IP Address> tv2_fslv05
Usage example of the umount_lv script to clean up a TimeView:

# umount_lv 10.6.7.236 tv2_fslv05


Unmounting /dev/tv2_fslv05...
Varying Off Volume Group vg01_tv2
Exporting Volume Group vg01_tv2...
Removing TimeView Disk hdisk5...
Unassigning TimeView Resource etlaix3-vg01_tv2-Protection vid 459 from
etlaix3...
Deleting TimeView Resource etlaix3-vg01_tv2-Protection...

Remove a TimeView HACMP Volume Group (umount_vg_ha)

Let's assume we would like to clean up TimeView volume group sharevg_01_tv2.


Unmount and remove TimeView volume group sharevg_01_tv2.
umount_vg_ha <IP Address> sharevg_01_tv2
Usage example of the umount_vg_ha script to clean up a TimeView AIX HACMP
Volume Group:

# umount_vg_ha 192.168.15.96 sharevg_01_tv2


• Unmounting /dev/tv2_lv00 on Node vaix3...
• Varying Off Volume Group sharevg_01_tv2 on Node vaix3
• Exporting Volume Group sharevg_01_tv2 on Node vaix3...
• Removing TimeView Disk hdisk8 on Node vaix3...
• Unassigning Virtual Disk Terayon-sharevg_01_tv2-Protection vid
263 from vaix3...
• Deleting TimeView Resource Terayon-sharevg_01_tv2-Protection...

Pause and resume mirroring for an AIX Volume Group

Let's assume we want to pause the CDP mirror on volume group vg01.
Pause the CDP mirror for volume group vg01
set_mirror_vg <IP address> -p vg01
Usage example of set_mirror_vg to pause the CDP mirror feature:

# set_mirror_vg 10.6.7.236 -p vg01


• This process will break redundancy on Volume Group vg01. Do you want
to continue? (y)es or (n)o. y
• Pausing Mirror for Volume Group vg01...

Now let’s assume we want to resume the mirror on volume group vg01.
Resume the CDP mirror for volume group vg01
set_mirror_vg <IP address> -r vg01
Usage example of set_mirror_vg to resume the CDP mirror:

# set_mirror_vg 10.6.7.236 -r vg01


• Resuming Mirror for Volume Group vg01...
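The pause/resume pair above lends itself to a wrapper that guarantees the mirror is resumed even if the intervening maintenance step fails. This is a sketch; with_mirror_paused is not a FalconStor script, and note that set_mirror_vg -p prompts for confirmation as shown above.

```shell
# Pause the CDP mirror for a volume group, run one command, then resume
# the mirror even if that command fails. Assumes set_mirror_vg is on the
# PATH, as in the examples above.
with_mirror_paused() {
    server="$1"; vg="$2"; shift 2
    set_mirror_vg "$server" -p "$vg" || return 1
    "$@"
    rc=$?
    set_mirror_vg "$server" -r "$vg"
    return $rc
}
```

For example: with_mirror_paused 10.6.7.236 vg01 some_maintenance_command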

Recover a Volume Group on a disaster recovery host

1. Promote the replica resource to be a primary disk.


Refer to your CDP Reference Guide for more details regarding promoting replica
resources.

2. Assign the promoted replica resource to the AIX DR host.


Refer to your CDP Reference Guide for more details regarding assigning a
promoted replica.

3. On the AIX DR host, scan for the assigned IPStor disk.


cfgmgr

4. Run /usr/local/ipstorclient/bin/vg_drtv to import any exported Volume
Group on the host.
The script will create a Volume Group named falcvg[1-9]. It then runs fsck on each
of the logical volumes. You may mount the logical volumes for verification purposes
after fsck completes.
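The scan-and-import sequence above can be collected into one helper. This is a sketch; recover_on_dr_host is not a FalconStor script, and the VG_DRTV override exists only to make the sketch testable.

```shell
# Sketch of the AIX disaster-recovery sequence above. VG_DRTV defaults
# to the documented script path.
recover_on_dr_host() {
    cfgmgr || return 1                                  # scan for the assigned IPStor disk
    "${VG_DRTV:-/usr/local/ipstorclient/bin/vg_drtv}"   # import exported VGs as falcvg[1-9]
}
```

After it finishes, mount the falcvg logical volumes for verification.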

Recover HP-UX Logical Volumes


This section describes how you can use the FalconStor CDP solution to recover
HP-UX logical volumes in the case of primary disk errors. There are two methods to
recover your data: TimeMark Rollback and TimeViews.
Scenario 1: If there is a catastrophic logical volume error due to a primary disk outage
(physical errors), the HP-UX LVM logic will automatically switch off the primary disk,
and the FalconStor CDP mirror LUNs will stand in for the failed disks without any
application downtime. When the failed primary disk is repaired, simply re-activate
the primary disks. This triggers a reverse synchronization of the mirrors back to the
primary.
Use the following commands to restore after a physical disk failure:

1. Deactivate the failed primary disk.


pvchange -a n /dev/dsk/<raw device path>

2. Replace the failed primary disk.

3. Restore the LVM Volume Group information from the backup.


vgcfgrestore -n <volume group name> <raw device path>

4. Reactivate the Volume Group to resync data.


vgchange -a y <volume group name>
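Steps 3 and 4 can be combined into a helper that is run after the failed disk has been physically replaced. This is a sketch built from the commands shown above; restore_primary_disk and its arguments are illustrative.

```shell
# Sketch of steps 3 and 4 above, run after the failed primary disk has
# been replaced. Arguments are illustrative.
restore_primary_disk() {
    vg="$1"; rawdev="$2"
    vgcfgrestore -n "$vg" "$rawdev" || return 1   # restore VG metadata from backup
    vgchange -a y "$vg"                           # reactivate; resyncs from the CDP mirror
}
```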

Scenario 2: If there is a catastrophic logical volume error that results in total volume
data loss, the TimeMark Rollback method should be used. Generally, this scenario is
used whenever it is decided that the current primary logical volume is useless, and a
full "restore" is necessary to reset it back to a known good point-in-time.
Scenario 3: For minor data loss, such as inadvertent deletion of a file or directory, it
is NOT desirable to use TimeMark Rollback because all of the "good changes" are
also rolled back. Therefore, for this case, it is more desirable to create a virtual view
of the protected Volume Group as of a known good point-in-time and then mount the
logical volume in this view in order to copy back the deleted file(s). This virtual view
is called a TimeView, which is created using a specified TimeMark. This TimeView is
an exact representation of the entire Volume Group, and contains every logical
volume inside the group. The Volume Group name of the TimeView is identical to
the primary Volume Group except with a "_tv" appended to the name. After copying
back the data, the TimeView is then discarded. Because the TimeView is totally
virtual, there is no massive copying of data (no extra storage is required) and the
time to mount the TimeView Volume Group is fast.
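The copy-back workflow just described can be sketched with the mount_tv and umount_tv scripts covered later in this chapter. The _tv1 suffix and the file paths are illustrative; restore_file_from_timeview is not a FalconStor script.

```shell
# Sketch of the TimeView flow described above: mount a view of the
# volume group at a TimeMark, copy the deleted file back, then discard
# the view. The _tv1 suffix and paths are illustrative.
restore_file_from_timeview() {
    vg="$1"; stamp="$2"; src="$3"; dest="$4"
    mount_tv -mt "$vg" "$stamp" || return 1   # create and mount the TimeView
    cp "$src" "$dest"                         # copy the file back from the view
    umount_tv "${vg}_tv1"                     # discard the TimeView
}
```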

Recover a Volume Group using TimeMark Rollback for HP-UX LVM

1. Run the recover_vg script to recover a volume group using the FalconStor CDP
rollback feature. Ex. "recover_vg -rb vg01"

2. Type y to confirm that you want to continue.


Caution: This is an irreversible process that will destroy any existing data on the
LUN.

3. When prompted, select the TimeMark timestamp from the list.


The script will automatically break the mirror(s), roll back the FalconStor CDP
LUN, and then recreate/sync the mirror for one or more logical volumes.
An example of the recover_vg output is as follows:

# recover_vg -rb vg01


This is a destructive process. Do you want to continue? (y)es or (n)o. y
### TIMEMARK TIMESTAMP FOR VOLUME GROUP vg01 ###
20080423194453 (04/23/2008 19:44:53) 322.00 KB valid deletion of all files
20080423200000 (04/23/2008 20:00:00) 64.00 KB valid
20080423201000 (04/23/2008 20:10:00) 600.22 MB valid
20080423212000 (04/23/2008 21:20:00) 434.00 KB valid
20080423213000 (04/23/2008 21:30:00) 128.00 KB valid

Enter TimeMark timestamp to rollback etlhp2-vg01-Protection
20080423213000

Unmounting /mnt/vg01_lvol1...
Unmounting /mnt/vg01_lvol2...
Unmounting /mnt/vg01_lvol3...

Deactivating Volume Group vg01...


Unassigning Virtual Disk etlhp2-vg01-Protection from etlhp2...
Rolling back etlhp2-vg01-Protection to timestamp (04/23/2008 21:30:00)...
Assigning Virtual Disk etlhp2-vg01-Protection to etlhp2...
Activating Volume Group vg01...

Running Command: fsck -y /dev/vg01/lvol1...


Running Command: fsck -y /dev/vg01/lvol2...
Running Command: fsck -y /dev/vg01/lvol3...

Mounting /dev/vg01/lvol1 to /mnt/vg01_lvol1…


Mounting /dev/vg01/lvol2 to /mnt/vg01_lvol2...
Mounting /dev/vg01/lvol3 to /mnt/vg01_lvol3...
Resynchronizing Volume Group vg01...

Recover a Volume Group using TimeMark Rollback for HP-UX VxVM

1. Run the recover_dg script to recover a disk group using the FalconStor CDP
rollback feature. Ex. "recover_dg <IP Address> -rb dg01"

2. Type y to confirm that you want to continue.


Caution: This is an irreversible process that will destroy any existing data on the
LUN.

3. When prompted, select the TimeMark timestamp from the list.


The script will automatically break the mirror(s), roll back the FalconStor CDP
LUN, and then recreate/sync the mirror for one or more logical volumes.
An example of the recover_dg output is as follows:

# recover_dg <IP Address> -rb dg01


This is a destructive process. Do you want to continue? (y)es or (n)o. y
### TIMEMARK TIMESTAMP FOR DISK GROUP dg01 ###
20081223194453 (12/23/2008 19:44:53) 64.00 KB valid
20081223200000 (12/23/2008 20:00:00) 64.00 KB valid delete all files

Enter TimeMark timestamp to rollback etlhp4-dg01-Protection


20081223213308

Stopping Volume vol01 on Disk Group dg01...


Stopping Volume vol02 on Disk Group dg01...
Stopping Volume vol03 on Disk Group dg01...

Unmounting /mnt/dg01_lvol2...
Unmounting /mnt/dg01_lvol3...

Unassigning Virtual Disk etlhp4-dg01-Protection vid 448 from etlhp4...


Rolling back etlhp4-dg01-Protection to timestamp (12/23/2008 21:33:08)...
Assigning Virtual Disk etlhp4-dg01-Protection vid 448 to etlhp4...
Starting All Volumes on Disk Group dg01...

Running Command: fsck -F vxfs -y /dev/vx/dsk/dg01/lvol1...


Running Command: fsck -F vxfs -y /dev/vx/dsk/dg01/lvol2...
Running Command: fsck -F vxfs -y /dev/vx/dsk/dg01/lvol3...
Resynchronizing Disk Group dg01...

Recover a Volume Group using a TimeView (recover_vg) for HP-UX LVM

1. Run the recover_vg script to recover a volume group using the FalconStor CDP
TimeMark feature. Ex. "recover_vg <IP address> -tv vg01"

2. Select the TimeMark timestamp for the TimeView to be created.


An example of the recover_vg script using a TimeView for HP-UX LVM is as follows:

# recover_vg <IP address> -tv vg01


• Cleaning Up Offline Disk...
• ### TIMEMARK TIMESTAMP FOR VOLUME GROUP vg01 ###
• 20080424124544 (04/24/2008 12:45:44) 64.00 KB valid
• 20080424124549 (04/24/2008 12:45:49) 64.00 KB valid
• 20080424124555 (04/24/2008 12:45:55) 64.00 KB valid
• 20080424124601 (04/24/2008 12:46:01) 64.00 KB valid
• 20080424140817 (04/24/2008 14:08:17) 1.45 GB valid
it is a mirror
• 20080424150401 (04/24/2008 15:04:01) 64.00 KB valid
mirror completion
• Enter TimeMark timestamp for TimeView on etlhp2-vg01-
Protection
• 20080424150401
• Creating TimeView for etlhp2-vg01-Protection with timestamp
(04/24/2008 15:04:01)...
• Assigning Virtual Disk etlhp2-vg01-Protection TimeView to
etlhp2...
• Scanning for new disk which could take time depending on
your system...
• Importing vg01_tv1 with /dev/dsk/c7t0d1
• Running Command: fsck -y /dev/vg01_tv1/lvol1
• Running Command: fsck -y /dev/vg01_tv1/lvol2
• Running Command: fsck -y /dev/vg01_tv1/lvol3

3. Mount the vg01 TimeView to a mount point.


mount /dev/vg01_tv1/lvol1 <mount point>
mount /dev/vg01_tv1/lvol2 <mount point>
mount /dev/vg01_tv1/lvol3 <mount point>

You may now verify the contents of each mount point to confirm the data is valid.

Recover a Volume Group using a TimeView (recover_dg) for HP-UX VxVM

1. Run the recover_dg script to recover a disk group using the FalconStor CDP
TimeMark feature. Ex. "recover_dg -tv dg01"

2. Select the TimeMark timestamp for the TimeView to be created. (See the usage
example below.)
Usage example of the recover_dg script using a TimeView for HP-UX
VxVM:

# recover_dg -tv dg01


• ### AVAILABLE TIMEVIEW TIMESTAMP FOR VOLUME GROUP dg01 ###
• 20081225223301 (12/25/2008 22:33:01) 64.00 KB valid
• 20081224124549 (12/24/2008 12:45:49) 1.69 MB valid
delete all files
• 20081224124555 (12/24/2008 12:45:55) 64.00 KB valid
another test
• Enter TimeMark timestamp for TimeView on etlhp4-dg01-
Protection 20081224150401
• Creating TimeView for etlhp4-dg01-Protection with timestamp
(12/24/2008 15:04:01)...
• Assigning Virtual Disk etlhp4-dg01-Protection vid 449 to
etlhp4...
• Scanning system for new disk ..
• Initializing c7t0d2 for use with VxVM...
• Creating Disk Group dg01_tv1 with c7t0d2...
• Initializing Volume vol01 for Disk Group dg01_tv1...
• Initializing Volume vol02 for Disk Group dg01_tv1...
• Initializing Volume vol03 for Disk Group dg01_tv1...
• Running Command: fsck -F vxfs -y /dev/vx/dsk/dg01_tv1/lvol1
• Running Command: fsck -F vxfs -y /dev/vx/dsk/dg01_tv1/lvol2
• Running Command: fsck -F vxfs -y /dev/vx/dsk/dg01_tv1/lvol3

3. Mount the dg01 TimeView to a mount point.


mount /dev/vx/dsk/dg01_tv1/lvol1 <mount point>
mount /dev/vx/dsk/dg01_tv1/lvol2 <mount point>
mount /dev/vx/dsk/dg01_tv1/lvol3 <mount point>

You may now verify the contents of each mount point to confirm the data is valid.

Recover a Volume Group using a TimeView (mount_tv)

1. List the available TimeMark timestamps for the TimeView to be created.


mount_tv -ls vg01

Note: The script will only list the TimeMark timestamps that do not yet have a
TimeView.

Usage example of the mount_tv script to list timestamps:

# mount_tv -ls vg01


• ### AVAILABLE TIMEVIEW TIMESTAMP FOR VOLUME GROUP vg01 ###
• 20080427210912 (04/27/2008 21:09:12) 64.00 KB valid
• 20080427210917 (04/27/2008 21:09:17) 64.00 KB valid
• 20080427210920 (04/27/2008 21:09:20) 64.00 KB valid
• 20080427210929 (04/27/2008 21:09:29) 111.00 KB valid
• 20080427234000 (04/27/2008 23:40:00) 64.00 KB valid
• 20080427235000 (04/27/2008 23:50:00) 1.45 GB valid
• 20080428000000 (04/28/2008 00:00:00) 123.00 KB valid
• 20080428002000 (04/28/2008 00:20:00) 64.00 KB valid

2. Create and mount a TimeView based on the timestamp provided.


mount_tv -mt vg01 20080427235000
Usage example of the mount_tv script to mount a TimeView:

# mount_tv -mt vg01 20080427210912


• Creating TimeView etlhp2-vg01_tv2-Protection with timestamp
(04/27/2008 21:09:12)...
• Assigning Virtual Disk etlhp2-vg01_tv2-Protection to
etlhp2...
• Scanning for new disk which could take time depending on
your system...
• Importing vg01_tv2 with /dev/dsk/c7t0d3...

• Running Command: fsck -y /dev/vg01_tv2/lvol1...
• Running Command: fsck -y /dev/vg01_tv2/lvol2...
• Running Command: fsck -y /dev/vg01_tv2/lvol3...

• Mounting /dev/vg01_tv2/lvol1 to /mnt/vg01_tv2_lvol1...
• Mounting /dev/vg01_tv2/lvol2 to /mnt/vg01_tv2_lvol2...
• Mounting /dev/vg01_tv2/lvol3 to /mnt/vg01_tv2_lvol3...

You may now verify the contents of each mount point to confirm the data is valid.

Remove TimeView Volume Group (umount_tv)

Let's assume we would like to clean up TimeView volume group vg01_tv2.


Unmount and remove TimeView volume group vg01_tv2.
umount_tv vg01_tv2
Usage example of the umount_tv script to clean up a TimeView:

# umount_tv vg01_tv2


• Unmounting /mnt/vg01_tv2_lvol1...
• Unmounting /mnt/vg01_tv2_lvol2...
• Unmounting /mnt/vg01_tv2_lvol3...
• Deactivating Volume Group vg01_tv2...
• Exporting Volume Group vg01_tv2...
• Unassigning TimeView Resource etlhp2-vg01_tv2-
Protection from etlhp2...
• Deleting TimeView Resource etlhp2-vg01_tv2-
Protection...
• Cleaning Up Offline Devices...

Recover a Volume Group on a disaster recovery host

1. Promote the replica resource to be a primary disk.


Refer to your CDP Reference Guide for more details regarding promoting replica
resources.

2. Assign the promoted replica resource to the HP-UX DR host.


Refer to your CDP Reference Guide for more details regarding assigning a
promoted replica.

3. On the HP-UX DR host, scan for the assigned IPStor disk.


ioscan -fnC disk

4. Install the special device file for the new IPStor disk.
insf -eC disk

5. Run /usr/local/ipstorclient/bin/vg_drtv to import any exported Volume
Group on the host.


The script will create a Volume Group named falcvg[1-9]. It then runs fsck on each
of the logical volumes. You may mount the logical volumes for verification purposes
after fsck completes.

Disaster Recovery in a Solaris environment


This section explains how to protect your data in a Solaris environment using the
Solaris Volume Manager (SVM) to mirror a Solaris disk.

Prepare the Solaris machine

Recovery requires that a mirror has already been set up for the Solaris machine.

1. Use the FalconStor Management Console to create a SAN Resource that is
large enough to be used as the mirror disk.

2. Add the Solaris machine as a client and assign the client to the SAN Resource.

3. Use the devfsadm command to perform a device scan on Solaris and then use
the format command to verify that the client claimed the device.

4. Create a metadb of your primary disk.


#metadb -a -f -c 1 /dev/dsk/c2t8d0s2
The metadb is a repository that tracks the state of each logical device.

5. Create two stripes for the two sub-mirrors as d21 and d22:
#metainit d21 1 1 c2t6d0s2
#metainit d22 1 1 c2t7d0s2

6. Specify the primary disk that is to be mirrored by creating a mirror device (d20)
using one of the sub-mirrors (d21):
#metainit d20 -m d21

7. Attach the sub-mirror on the mirror disk (d22) to the mirror device (d20):


#metattach d20 d22

8. Set a TimeMark policy on the mirror disk.


Refer to this CDP Administration Guide for more information about configuring
TimeMark.
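Steps 4 through 7 above can be collected into one helper using the example device names. This is a sketch; setup_svm_mirror is not a FalconStor script, and the slices shown must be replaced with your own.

```shell
# Sketch of steps 4-7 above: d21 is the sub-mirror on the primary disk,
# d22 the sub-mirror on the CDP virtual device.
setup_svm_mirror() {
    metadb -a -f -c 1 /dev/dsk/c2t8d0s2 || return 1   # state-database replica
    metainit d21 1 1 c2t6d0s2                         # sub-mirror: primary disk
    metainit d22 1 1 c2t7d0s2                         # sub-mirror: CDP mirror disk
    metainit d20 -m d21                               # one-way mirror from d21
    metattach d20 d22                                 # attach d22; starts the resync
}
```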

Break the mirror for rollback

When you want to perform a rollback, the primary disk and the mirror disk (the CDP
virtual device) will be out of sync and the mirror will need to be broken. In Solaris
SVM this can be achieved by placing the primary and mirror device into a logging
mode.

1. Disable the remote mirror software and discard the remote mirror:
rmshost1# sndradm -dn -f /etc/opt/SUNWrdc/rdc.cf

2. Edit the rdc.cf file to swap the primary disk information and the secondary
disk information. Unmount the remote mirror volumes:
rmshost1# umount mount-point

3. When the data is de-staged, mount the secondary volume in read-write mode so
your application can write to it.

4. Configure your application to read and write to the secondary volume.


The secondary bitmap volume tracks the volume changes.

5. Fix the "failure" at the primary volume by disabling logging mode using the
resynchronization command.

6. Quiesce your application and unmount the secondary volume.


You can now resynchronize your volumes.

7. Roll back the secondary volume to its original pre-disaster state to match the
primary volume by using the sndradm -m copy or sndradm -u update
commands.
Keep the changes from the updated secondary volume and resynchronize so
that both volumes match using the sndradm -m r reverse copy or the
sndradm -u r reverse update command.

Additional FalconStor disaster recovery tools


Recovery Agents

FalconStor Recovery Agents offer recovery solutions for your database and
messaging systems.
FalconStor® Message Recovery for Microsoft® Exchange (MRE) and Message
Recovery for Lotus Notes/Domino (MRN) expedite mailbox/message recovery by
enabling IT administrators to quickly recover individual mailboxes from point-in-time
snapshot images of their messaging server.
FalconStor® Database Recovery for Microsoft® SQL Server expedites database
recovery by enabling IT administrators to quickly recover a database from
point-in-time snapshot images of their SQL database.
IntegrityTrac is a validation tool that allows you to check the application data
consistency of snapshots taken from Microsoft Exchange servers before using them
for backup and recovery.
FalconStor® Recovery Agent for Microsoft® Volume Shadow-Copy Service (VSS)
enables IT administrators to restore volumes and volume groups from point-in-time
snapshots created by the FalconStor® Snapshot Agent for Microsoft® VSS.
Refer to the Recovery Agents User Guide for more information regarding how to
recover your data using the following products:
• Message Recovery for Microsoft Exchange
• Message Recovery for Lotus Notes/Domino
• Database Recovery for Microsoft SQL Server
• IntegrityTrac
• Recovery Agent for VSS

RecoverTrac

RecoverTrac allows you to create scripts that manage the recovery process for
multiple host machines in a group or "farm". In the event of an emergency,
RecoverTrac can quickly recover the hosts and help you bring them back online in
the required sequence, simultaneously or sequentially, to the best recovery point.
Refer to the FalconStor RecoverTrac User Guide for more information regarding the
FalconStor RecoverTrac disaster recovery tool.
