
EMC² RecoverPoint™ - Service & Maintenance v4

Contents
Overview
Diagram:
Consistency groups
Recovering data from a point in time:
Prerequisites:
Procedure:
Host readiness:
Recover production:

Overview
EMC RecoverPoint/SE is a comprehensive data protection solution. It protects against data
loss caused by server failures, data corruption, software errors, viruses, and user errors,
as well as against catastrophic events.
Unlike other replication products, RecoverPoint/SE is appliance-based, which enables it to
protect large amounts of information without impacting performance.
RecoverPoint/SE uses a lightweight splitting technology residing on either the host or the
CLARiiON CX3/CX4 to send copies of writes to a RecoverPoint Appliance (RPA) that
resides outside of the primary data path. This out-of-band approach enables
RecoverPoint/SE to deliver continuous data protection and/or replication without
impacting an application’s I/O operations. Because of its network-based approach to data
protection, RecoverPoint/SE can instantly recover data to any point in time by leveraging
a history journal that tracks all data changes and bookmarks identifying application-
specific events.

Diagram:

As seen in the diagram, there are two sites: Main and DR.


Each site contains two RPAs (RecoverPoint Appliances) that communicate with each other.
Each RPA has a LAN IP, a WAN IP, and a management IP.
For the list of IP addresses, please check the site book.
Consistency groups
- A consistency group is a logical unit that contains the LUNs to be replicated, either locally or remotely, together with a set of
definitions such as protection window, compression, journal, and more.
- Consistency groups are used to ensure application consistency across all the LUNs of the system.
- At ADB, only the bank’s database is protected using the RecoverPoint technology.
- Each LUN of the database has its own consistency group.
- Each local replica has a policy of a 1-hour protection window.
- Each remote replica has a policy of a 2-hour protection window (these settings can be verified from the RecoverPoint CLI, as sketched below).
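
The group definitions and protection windows above can be spot-checked from the RecoverPoint CLI. The lines below are only a sketch and are not part of the original procedure: they assume an SSH session to an RPA management IP with an admin-level user, and the exact command names and output (for example get_groups and get_group_settings) should be confirmed with the CLI’s built-in help for the installed RecoverPoint release.

# Sketch only - assumes the standard RecoverPoint CLI over SSH; verify the commands with "help" before use.
ssh admin@<RPA management IP>        # management IPs are listed in the site book
# At the RecoverPoint CLI prompt:
get_groups                           # list all consistency groups
get_group_settings                   # show per-group policies (protection window, journal, compression); some releases require group=<CG name>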

Recovering data from a point in time:


The following procedure describes how to recover data from a point in time to the database server located in
the DR site.

Prerequisites:
1. JRE 6 must be installed on the management server (a quick check is sketched below).
2. Log in to http://10.10.1.20.
3. Use an authorized user name and password.
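
To confirm prerequisite 1, the installed Java runtime can be checked from a command prompt on the management server (assuming java is on the PATH; this check is not part of the original procedure):

java -version        # prints the installed Java runtime version; expect a 1.6.x (JRE 6) release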

Procedure:
1. Make sure all consistency groups are in the “Active” state:
2. Select the consistency group in the left pane. Click the down arrow (marked with a red arrow in the
image below) and select the “Enable Image Access” option:

3. For the “Archive” consistency group only, we will use the latest point in time:

4. Select “Logged Access (Physical)”:


5. Press “Finish”:

6. For the remaining consistency groups, the process is the same except for the point-in-time selection:

7. Select “Select an image from the list”:


8. Next, select the bookmark from the list. The bookmark guarantees that all the selected LUNs were
captured at the exact same second and are therefore consistent. Make sure that you select the
EXACT same bookmark for all the other consistency groups. You might need to scroll down the list to find
it:
9. Repeat steps 7 and 8 for all the other consistency groups.
10. Once completed, the host has access to the LUNs, which are an exact and consistent copy of the
production LUNs.
11. The next step is preparing the host. (For reference, the same image-access operation can also be performed from the RecoverPoint CLI, as sketched below.)
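
The GUI steps above can also be reproduced from the RecoverPoint CLI over SSH to an RPA. The lines below are only an illustration and are not part of the original procedure: enable_image_access exists in the RecoverPoint CLI, but the argument names shown here (group, copy, image) are assumptions that depend on the RecoverPoint release, so confirm them with the built-in help before use.

# Illustrative sketch only - verify arguments with "help enable_image_access" on your release.
ssh admin@<RPA management IP>
get_groups                                        # confirm the consistency groups are active
enable_image_access group=<CG name> copy=<DR copy> image=latest
# For the remaining groups, select the same bookmark instead of "latest",
# so that all LUNs come from the exact same second.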

Host readiness:
1. Log in to the adb-oramain-db1 database host as root, using an SSH client such as PuTTY:
2. Identify the connected HBA port and scan it for the new LUNs available to it:
- root@adb-oramain-db1-dr # luxadm -e port
o /devices/pci@3,700000/SUNW,qlc@0/fp@0,0:devctl NOT CONNECTED
o /devices/pci@3,700000/SUNW,qlc@0,1/fp@0,0:devctl CONNECTED
3. root@adb-oramain-db1-dr # luxadm -e forcelip /devices/pci@3,700000/SUNW,qlc@0,1/fp@0,0
4. Let the OS become aware of the changes:
- root@adb-oramain-db1-dr # devfsadm -Cv
5. Configure the PowerPath software:
- root@adb-oramain-db1-dr # powermt config
- root@adb-oramain-db1-dr # powermt check force
- root@adb-oramain-db1-dr # powermt save

6. Repeat steps 1 to 5 for the adb-oramain-db2 server (the combined sequence is sketched below).
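
For convenience, steps 2 to 5 can be wrapped in a small shell script and run as root on each of the two DR database hosts. This is only a sketch built from the commands above: the script name is hypothetical, and the HBA device path is taken from the example output in step 2, so confirm the CONNECTED port with luxadm -e port on each host before running it. (The appendix script below invokes a similar /rescan_LUNs.sh on each database host.)

#!/bin/bash
# rescan_new_luns.sh - hypothetical wrapper for steps 2 to 5 above; run as root on each DR DB host.
# The HBA path below comes from the example in step 2; adjust it to the port that
# "luxadm -e port" reports as CONNECTED on the host you are working on.

HBA_PORT="/devices/pci@3,700000/SUNW,qlc@0,1/fp@0,0"

# Steps 2-3: list the HBA ports and force a LIP on the connected port to discover the new LUNs
luxadm -e port
luxadm -e forcelip "${HBA_PORT}"

# Step 4: let Solaris rebuild its device tree
devfsadm -Cv

# Step 5: have PowerPath pick up and save the new paths
powermt config
powermt check force
powermt save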

Starting Database
1. Log in to the DR Legato server (172.27.222.22) with user name root and password root.
2. [root@adb-legato-dr ~]# cd /root
3. [root@adb-legato-dr ~]# ./start_stop_db_dr.sh begin

Let the script run until you see the following output, which shows that the database is up:

NOTE:
The production database and the DR database are now both open.
Replication between the two sites is still running, and all the changed production blocks are being saved in the
journal area. When the journal area fills up, the DR LUNs will no longer be available and you will get an
error message on the DR server!
To prevent this, there is an option to allow direct access to the LUNs at the DR site:
This stops the replication between the sites, and a full synchronization will have to take place when
returning to normal mode.

Recover production:
To recover production from the DR site, select the consistency group and recover from it (a CLI sketch follows the note below):

This will copy all the changes made at the DR site to the production site.
Note that this action will overwrite the production data!
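
For reference, production recovery can also be initiated from the RecoverPoint CLI. This is only a sketch and is not part of the original document: recover_production is a RecoverPoint CLI command, but its arguments and confirmation prompts vary by release, so verify them with the built-in help, and remember that this operation overwrites production data.

# Illustrative sketch only - verify with "help recover_production"; this OVERWRITES production data.
ssh admin@<RPA management IP>
recover_production group=<CG name>       # repeat for every consistency group being failed back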
ROLLBACK – Discard DR changes
Provided you did not select ‘Enable Direct Access’, all changes made to the DR database can be discarded.

To discard the changes:
1. Log in to the DR Legato server (172.27.222.22) with user name root and password root.
2. [root@adb-legato-dr ~]# cd /root
3. [root@adb-legato-dr ~]# ./start_stop_db_dr.sh end

Let the script run until you get the following output:

4. Disable image access for all the consistency groups in the RPA appliance (a CLI sketch for this step follows below).

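Disabling image access can also be done from the RecoverPoint CLI. As with the earlier CLI sketches, this is only an illustration: disable_image_access is a RecoverPoint CLI command, but its exact arguments depend on the release, so check the built-in help before use.

# Illustrative sketch only - verify arguments with "help disable_image_access".
ssh admin@<RPA management IP>
disable_image_access group=<CG name> copy=<DR copy>      # repeat for every consistency group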
APPENDIX

start_stop_db_dr.sh
#!/bin/bash
# Starts ("begin") or stops ("end") the DR copy of the database on the two DB hosts.

if [ $# -ne 1 ] ; then
        echo "Usage: $0 {begin|end}"
        exit 1
fi

export PATH=${PATH}:/opt/Navisphere/bin

SRCDB1=172.27.222.24
SRCDB2=172.27.222.26

case "$1" in

begin)
        # Stop the DB1 and DB2 databases, restart ASM, rescan the LUNs, then start the databases
        ssh oracle@${SRCDB1} ". .bash_profile ; /export/home/oracle/start_stop.sh stop"
        ssh oracle@${SRCDB2} ". .bash_profile ; /export/home/oracle/start_stop.sh stop"

        # Stop and start the ASM disk groups
        ssh grid@${SRCDB1} ". .bash_profile ; /export/home/grid/start_stop_asm_dg.sh stop"
        sleep 30
        ssh grid@${SRCDB1} ". .bash_profile ; /export/home/grid/start_stop_asm_dg.sh start"

        # Rescan the LUNs on both hosts
        ssh ${SRCDB1} /rescan_LUNs.sh
        ssh ${SRCDB2} /rescan_LUNs.sh

        # Start the databases
        ssh oracle@${SRCDB1} ". .bash_profile ; /export/home/oracle/start_stop.sh start"
        ssh oracle@${SRCDB2} ". .bash_profile ; /export/home/oracle/start_stop.sh start"
        ;;

end)
        # Stop the databases
        ssh oracle@${SRCDB1} ". .bash_profile ; /export/home/oracle/start_stop.sh stop"
        ssh oracle@${SRCDB2} ". .bash_profile ; /export/home/oracle/start_stop.sh stop"

        # Stop the ASM disk groups
        ssh grid@${SRCDB1} ". .bash_profile ; /export/home/grid/start_stop_asm_dg.sh stop"
        ;;

*)
        echo "Usage: $0 {begin|end}"
        exit 1
        ;;
esac
