Redp 5730
Redpaper
Draft Document for Review January 15, 2025 11:53 pm 5730edno.fm
IBM Redbooks
January 2025
REDP-5730-00
Note: Before using this information and the product it supports, read the information in “Notices” on page v.
This edition applies to IBM Storage Defender Data Protect Version 7.1.1 and 7.1.2.
Contents
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .v
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Chapter 1. Introduction to IBM Storage Defender . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Overview of IBM Storage Defender . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Overview of IBM Defender Data Protect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Overview of IBM Defender Data Management Service. . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.4 IBM Defender Data Protect and Database workloads . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4.1 DB integration and agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4.2 Remote Adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.4.3 Universal Adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.4.4 SmartTarget . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.4.5 Logs backed up as part of the policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.4.6 Megafile and Minion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4.7 Full to tape via Storage Protect S3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4.8 Archive to S3 in the cloud . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Notices
This information was developed for products and services offered in the US. This material might be available
from IBM in other languages. However, you may be required to own a copy of the product or product version in
that language in order to access it.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user’s responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, MD-NC119, Armonk, NY 10504-1785, US
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you provide in any way it believes appropriate without
incurring any obligation to you.
The performance data and client examples cited are presented for illustrative purposes only. Actual
performance results may vary depending on specific configurations and operating conditions.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
Statements regarding IBM’s future direction or intent are subject to change or withdrawal without notice, and
represent goals and objectives only.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to actual people or business enterprises is entirely
coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are
provided “AS IS”, without warranty of any kind. IBM shall not be liable for any damages arising out of your use
of the sample programs.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines
Corporation, registered in many jurisdictions worldwide. Other product and service names might be
trademarks of IBM or other companies. A current list of IBM trademarks is available on the web at “Copyright
and trademark information” at https://www.ibm.com/legal/copytrade.shtml
The following terms are trademarks or registered trademarks of International Business Machines Corporation,
and might also be trademarks or registered trademarks in other countries.
AIX®, DB2®, Enterprise Design Thinking®, Guardium®, IBM®, IBM Cloud®, IBM FlashSystem®, IBM Security®, IBM Spectrum®, PowerPC®, QRadar®, Redbooks®, Redbooks (logo)®, Storwize®, XIV®
The registered trademark Linux® is used pursuant to a sublicense from the Linux Foundation, the exclusive
licensee of Linus Torvalds, owner of the mark on a worldwide basis.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.
Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its
affiliates.
Ceph and Red Hat are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the United States and other countries.
VMware, and the VMware logo are registered trademarks or trademarks of VMware, Inc. or its subsidiaries in
the United States and/or other jurisdictions.
Other company, product, or service names may be trademarks or service marks of others.
Preface
This IBM Redpaper publication introduces the new IBM Storage Defender offering for
enterprise data management and protection. This IBM Redpaper publication will help you install, tailor, and configure this solution for the protection and rapid recovery of databases such as Oracle, Oracle (OVM), Oracle RAC, SAP HANA, SAP Oracle, SAP DB2®, SAP MS SQL, SAP Sybase ASE, Sybase IQ & ASE, IBM DB2, MS SQL, Hadoop, and IRIS and Cache for EPIC applications.
Authors
This paper was produced by a team of specialists from around the world working at IBM
Redbooks, Poughkeepsie Center.
Christian Burns is a Principal Worldwide Storage Data Resiliency Architect and IBM
Redbooks Platinum Author based in New Jersey. As a member of the Worldwide Storage
Technical Sales Team at IBM, he works with clients, IBM Business Partners, and IBMers
around the globe, designing and implementing solutions that address the rapidly evolving
cyber and data resiliency challenges facing enterprises today. He has decades of industry
experience in the areas of sales engineering, solution design, and software development.
Christian holds a BA degree in Physics and Computer Science from Rutgers College.
Phillip Gerrard is a Project Leader for the International Technical Support Organization
working out of Beaverton, Oregon. As part of IBM for over 15 years he has authored and
contributed to hundreds of technical documents published to IBM.com and worked directly
with IBM's largest customers to resolve critical situations. As a team lead and Subject Matter
Expert for the IBM Spectrum® Protect support team, he is experienced in leading and
growing international teams of talented IBMers, developing and implementing team
processes, creating and delivering education. Phillip holds a degree in computer science and
business administration from Oregon State University.
Juan Carlos Jimenez is IBM’s world-wide Data Resiliency Product Manager. He is focused
on defining roadmaps, initiatives, and strategy within the various data resiliency software
products that he manages alongside his team. Juan Carlos brings an end-to-end view to
cyber resilience leveraging his expertise in both storage and security. Juan Carlos developed
our Cyber Resiliency Assessment Tool which has been helping numerous enterprises identify
and close gaps in their IT environments.
Julien Sauvanet has been working in IT for 15+ years, covering many different areas
including networking, systems, storage, and for the past 10 years data protection. He
continues to share his knowledge and expertise as a contributing author to IBM Redbooks
since 2013. Being involved with the ever evolving data protection world, he continues to
expand his knowledge beyond the usual focus on data backup. As an SME focused on overall
Data Resilience, keeping up to date with various techniques helps him with designing
solutions which create additional value beyond just protecting backup data (data reuse,
automation and orchestration of recoveries, infrastructure resiliency).
Kenneth Salerno is an Open Group Certified Distinguished Technical Specialist working for
IBM in the USA. He has 27 years of experience in Information Technology. Prior to joining
IBM, he worked 7 years as a Senior Infrastructure Engineer and Architect on Wall Street
managing and supporting multiple data centers for mission-critical online financial services.
He holds a degree in Computer Science from CUNY Queens College in New York City. His
areas of expertise include operating systems, storage, security, networking, middleware,
databases, containers and enterprise data center operations. He has contributed code to
various open source projects, currently he is one of the maintainers of the Jenkins CI Docker
images, and is also Linux and Cisco certified.
Jack Tedjai is an IBM Certified Expert IT Specialist and IBM Systems subject matter expert,
working in the Northern Europe Infrastructure Lab Expert Services organization. He joined
IBM in 1998, and has more than 25 years of experience in the delivery of Storage, Storage
Virtualize, Backup and Cyber Resilience services for Open Systems. He is mostly involved in
architecture and deployments world-wide for IBM Lab Expert Services, with a focus on IBM
Storage Protect, IBM Storage Protect Plus and IBM Cloud® Object Storage.
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our papers to be as helpful as possible. Send us your comments about this paper or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
[email protected]
Mail your comments to:
IBM Corporation, IBM Redbooks
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Chapter 1. Introduction to IBM Storage Defender
Just a few decades ago, considerations for data resilience were much simpler. If a company lost or damaged an important file or folder, they would simply load the previous day's backup tape, retrieve a copy of the missing data, and return to operating normally from there.
Those days are long gone. Today, the volume of data and diverse range of workloads have
made backup and restore operations much more complex. Regardless of their size, industry,
or location, every organization must have an active security perimeter to keep out bad actors,
plus effective recovery mechanisms to get back up and running quickly when an attack gets
through.
Although the current world of IT may seem like a dangerous place with new and creative
attempts to exploit vulnerabilities, careful planning and execution of appropriate data security
and data resilience processes can enable organizations to gracefully recover from otherwise
dire situations. This Redbooks publication provides guidance on one of IBM's solutions
dedicated to these use cases, enabling customers to recover rapidly, and at scale.
In this chapter:
1.1, “Overview of IBM Storage Defender” on page 2
1.2, “Overview of IBM Defender Data Protect” on page 2
1.3, “Overview of IBM Defender Data Management Service” on page 4
1.4, “IBM Defender Data Protect and Database workloads” on page 5
Flexible licensing
Licensing is based on resource units (RUs), providing a cloud-like, utility-based
consumption model for organizations to consume any service within IBM Storage
Defender.
IBM Storage Defender is designed to integrate with other IBM Storage and IBM Security®
solutions, including IBM QRadar®, IBM Guardium®, FlashSystem, IBM Storage Scale, IBM
Storage Ceph, and IBM Storage Fusion. It also includes copy data management tools to manage and orchestrate application-integrated hardware snapshots by making copies available when and where users need them for instant data recovery or data reuse, automatically cataloging and managing copy data across hybrid cloud infrastructures.
In this Redpaper, we take a deep dive into how this solution protects the most critical workloads of modern enterprises.
IBM Storage Defender Data Protect has a scale-out architecture composed of clusters. These clusters can be deployed virtually, in the cloud, or on premises through physical nodes. These physical nodes include the CPU, memory, storage, network, operating system, file system, and the backup software. An example of these nodes is the IBM Defender Ready Node. By leveraging this cluster and node architecture, Defender Data Protect can execute data management operations like backups, cloning, and restores rapidly and at scale. This is possible by spreading the workload or action equally among all nodes in a cluster. Lastly, upgrades and expansions can be done easily and non-disruptively by simply adding more nodes to a cluster.
Some of the key capabilities of IBM Storage Defender Data Protect that will be covered in this
document are:
Integrated Cybersecurity:
The solution has been designed on zero-trust principles to prevent internal attacks and threats. It has ransomware, virus, and vulnerability detection built in. It can protect data through its immutable architecture as well as protect data on immutable targets. Encryption is available both at rest and in flight, as well as integration with SIEM solutions like QRadar, Splunk, and others.
Fast Cloning:
Extremely fast cloning of large databases for DevOps, testing, and other development use cases. For example, cloning a 2 TB SAP HANA database takes under a minute.
Users can connect Defender Data Protect clusters, IBM Storage Protect servers, IBM FlashSystems, and other assets into the service to drive end-to-end data resiliency operations from a single pane of glass.
Security Advisor:
The security advisor enables users to view the security posture of their implementation and provides actionable insights so that they can modify the security settings based on best practices and business needs.
Simulations:
This functionality offers predictive planning models that can make projections about
utilization and storage consumption. This capability is based on historical usage,
workloads, and user-defined what-if scenarios. This empowers users to proactively plan
for various situations, such as acquiring new nodes, integrating new workloads, optimizing
current workloads, and more. Simulations can be created with scenarios using specific
clusters and time periods to help better understand and plan environment changes.
Reports:
This function allows users to create and view an overall summary of the data protection
jobs and storage systems. Additionally, users can analyze data at a granular level using powerful filtering options. Filter, schedule, email, and download reports to ensure that users who need detailed information on the environment and its status get what they need when they need it.
There is a variety of options for protecting a database workload. IBM Storage Defender native DB integration currently includes the following:
Amazon RDS
Amazon Aurora
Cassandra
MongoDB
Microsoft SQL Server
Oracle Database
This integration provides the ability to write directly to the IBM Storage Defender Cluster as a
target.
There are additional options for protecting database workloads that use an agent-based approach, leveraging either the Remote Adapter or the Universal Adapter.
1.4.4 SmartTarget
IBM Storage Defender Data Protect also includes the ability to back up to a file system target, such as an NFS or a CIFS/SMB target. This provides the ability to maintain the existing scripts and practices in an enterprise while updating the backup target to the cluster. This can be leveraged for the DB workloads with native integration as well as those with agent-based integration, using the available agents in both scenarios. The advantages of this approach over traditional backups to standard file system mounts, which then need to be backed up, include faster ingest into the backup system and a simplified process, among other benefits.
Minion or MinionBlob provides quick backup for small files. Minion consolidates file metadata for groups of small files (8 MB or smaller) into single logical metadata objects.
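As an illustration of that consolidation, the following sketch groups small files into shared metadata objects while large files keep their own. This is a conceptual model only, with a hypothetical per-group size budget; it is not the actual Minion implementation, and only the 8 MB threshold comes from the text.

```python
# Conceptual sketch: consolidate metadata for small files (<= 8 MB)
# into single logical metadata objects, as Minion does. Hypothetical
# names and structures; not the actual IBM Data Protect implementation.

SMALL_FILE_LIMIT = 8 * 1024 * 1024   # 8 MB threshold from the text
GROUP_BUDGET = 64 * 1024 * 1024      # assumed size of one metadata group

def consolidate(files):
    """files: list of (name, size) tuples. Returns a list of metadata groups."""
    groups, current, used = [], [], 0
    for name, size in files:
        if size > SMALL_FILE_LIMIT:
            # Large files keep their own metadata object.
            groups.append([(name, size)])
            continue
        if used + size > GROUP_BUDGET and current:
            groups.append(current)   # close the full group
            current, used = [], 0
        current.append((name, size))
        used += size
    if current:
        groups.append(current)
    return groups
```

With this model, many tiny files collapse into one logical metadata object while a 9 MB file stands alone, which is the effect that speeds up small-file backup.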
An IBM Data Protect cluster can protect MS SQL databases on physical servers by using the following IBM Defender Data Protect agent-based methods:
Volume-based backup uses VSS to back up all MS SQL databases running on a volume (or all volumes) on the MS SQL server.
File-based backup uses VSS to back up only the specified MS SQL databases running on an MS SQL server.
VDI-based backup allows IBM Defender Data Protect to execute SQL Server native backup and restore commands via native VDI API calls.
There are three accounts you must consider when installing the IBM Data Protect Agent:
Installation Account: The account you use to log in to the host and run the installer.
Service Account: The account under which the IBM Data Protect Agent service runs on
the SQL Server host.
SQL Server login account: The SQL Server account by which the IBM Data Protect Agent
has access to the databases. (Configured after installation.)
You can use either the host LOCAL SYSTEM account or an account that meets the
requirements to install the IBM Data Protect agent. It is recommended to run the IBM Data
Protect Agent service with an Active Directory domain user account that is a member of the
local administrator of the SQL Server host. The AD domain user account must be a member
of the SQL sysadmin server role. The user account must have log-on rights to the SQL Server
host in the local security policy of the SQL Server host.
If you do not use the LOCAL SYSTEM account, ensure the following for the chosen account:
The account must be a member of the local Windows Administrators group on the SQL server.
The account must have Log on as a service in the User Rights Assignment on the MS
SQL server to install the IBM Data Protect agent.
The account must have the sysadmin role in the MS SQL Server instance.
2. Ports Used for Communication:
On physical servers or VMs with an ephemeral or installed agent, open the ports 445,
11113, 11117, and 50051.
3. If the Windows Firewall is used:
Inbound rules:
Add a rule to accept SQL Server traffic and TCP connections on local port 1433.
Set Remote Port to All Ports.
4. Outbound rules (for MS SQL Server 2016 running on Windows 2016):
Update the "Block network access for R local user accounts in SQL server instance
MSSQLSERVER" rule by going to General tab > Action window > select "Allow the
connection”.
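Before installing the agent, TCP reachability of the ports listed in step 2 can be probed from another host as a quick sanity check. A minimal sketch, assuming a plain TCP connect is an adequate test; the port list (445, 11113, 11117, and 50051) comes from the text above, and everything else is illustrative. A successful connect only proves reachability, not that the correct service is listening.

```python
# Sketch: probe the TCP ports the Data Protect agent requires.
# Illustrative only; a connect succeeding does not confirm which
# service answered, just that the port is reachable.
import socket

AGENT_PORTS = (445, 11113, 11117, 50051)   # from the list above

def check_ports(host, ports=AGENT_PORTS, timeout=2.0):
    """Return {port: True/False} for TCP reachability of each port."""
    results = {}
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                results[port] = True
        except OSError:
            results[port] = False
    return results
```

Running `check_ports("sql-host.example.com")` (a hypothetical host name) before installation can surface firewall misconfigurations early.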
If the IBM Data Protect (Cohesity) agent is not yet installed, click ‘Download Cohesity Protection Service’:
Ensure that the agent has been copied over to the appropriate server.
As an AD Domain Admin, run the executable and complete the installation wizard.
Volume CBT (Changed Block Tracker): Install this component for the best incremental backup performance. Installing this component requires a one-time reboot to load the IBM Data Protect Volume CBT driver.
File System CBT (Changed Block Tracker): Installing this component does not require a reboot, but it is not the recommended option for best performance.
Service Account Credentials: The service can run as the “Local System” account with
Exchange admin credentials.
If the SQL requirements are not met, you might receive the following message:
Note: When setting the Protection Policy for a DB backup, such as MS SQL, log backups can be included. This provides the option to back up the logs at a granularity of hours or minutes to provide additional protection and recovery points. An example of this configuration is shown in Figure 2-5 on page 14.
Best Practice: For VDI backups, the backup and restore retention requirements are identical to SQL native dumps. This means that you will need to perform full backups, incremental backups (equivalent to SQL Server differential backups), and T-log backups for point-in-time (PIT) recoveries.
For example, if you have a retention requirement of 7 days for a SQL Server DB backup, your VDI protection job policy should ensure that the retention period encompassing the full and incremental backups is greater than 7 days. Similar to SQL native dumps, a full and an incremental backup are required for a restore. Setting the retention period longer than 7 days, so that it encompasses the full, incremental, and even T-log backups, ensures that there is no hole in the recovery when using VDI.
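The retention arithmetic in this best practice can be sanity-checked with a small helper. This is a sketch under the assumptions stated above (a restore needs the most recent full plus the later incrementals and T-logs); the function name and parameters are illustrative, not part of the product.

```python
# Sketch: check that a VDI protection policy's retention covers the
# whole backup chain (full + incrementals + T-logs) needed for a
# point-in-time restore. Illustrative only; not product logic.

def retention_covers_chain(requirement_days, full_interval_days,
                           retention_days):
    """A restore at the end of the requirement window may depend on a
    full backup taken up to full_interval_days earlier, so the chain
    spans requirement_days + full_interval_days in the worst case."""
    worst_case_chain = requirement_days + full_interval_days
    return retention_days >= worst_case_chain
```

For the 7-day example above with weekly fulls, `retention_covers_chain(7, 7, 14)` returns True, while keeping only 7 days of retention returns False, which is exactly the "hole in the recovery" the best practice warns about.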
Finish configuring the protection group by clicking the Protect button. Once this is complete, monitor the running task to confirm that the selected items are successfully protected (Figure 2-6 on page 14).
Figure 2-7 Recover Microsoft SQL Database step, server options selection panel
5. Select the Microsoft SQL database that you would like to recover [5].
6. Click Next for more recovery options [6].
7. Select whether to recover as a new database or overwrite the original database [7].
8. Select the Microsoft SQL instance [8].
Figure 2-8 Recover Microsoft SQL Database, recovery options selection panel
When log backups are enabled as part of the Protection Policy, recoveries can be adjusted in time based on the desired recovery point. This provides a recovery pre-set to the desired time stamp.
After the recovery task finishes, open Microsoft SQL Server Management Studio and confirm that the database is restored.
Figure 2-11 Confirming SQL DB restore with the Server Management Studio
Select the three dots on the desired database and choose the “Clone” option:
Figure 2-13 Clone a database instance option from the Microsoft SQL specialty page
Select ‘Clone’ and ‘Database’ from the top right corner drop down menu:
From this page, use the search field to find the desired DB you wish to clone:
Note: For use in a later step, the name of the IBM Storage Defender Data Protect cluster in this example is ‘STS-POK-DP-2’.
Clone Point option: One of the selections when creating the database clone is the point in time to which the clone is restored. As mentioned in the recovery section of this chapter, a further example is included below on selecting the Clone Point, where you select the backup and the time stamp to which the logs are automatically recovered.
Figure 2-18 Clone Point options panel and point in time selection
Once the Clone Point is selected, and other restore options are set, select “Clone” to
generate the clone of the database.
Next, the clone process begins and the progress and task details page appears. You can also select “Show Subtasks” to check the status of the clone process:
Once the clone process is completed, the task history updates with the current DB information and the status updates to Success.
Figure 2-22
Once a status of Success is reported, the DB clone is accessible via the MS SQL server.
Figure 2-23 Cloned db access via the MS SQL Server Management Studio
The clone of the database is available for testing or other activities. The database Properties also provide additional information highlighting where the data resides. They also highlight the mount path, which includes the name of the IBM Storage Defender Data Protect cluster, ‘STS-POK-DP-2’.
Once the tear down clone button is selected, the confirmation panel is shown:
Once the tear down action is confirmed, IBM Storage Defender DMS confirms that the activity is being performed by showing a “Destroying” status:
When the tear down task is completed, the Defender DMS GUI updates to show a status of “Destroyed”, along with details about the task execution time and duration:
The clone history for DB actions is also updated and can be reviewed on the Test & Dev page. This shows a history of cloning actions by date, as well as the status of the cloned DB:
Figure 2-31 Test and Dev activities history and cloned DB status
Selecting the Oracle Adapter allows for a simplified backup and recovery process through the Data Protect GUI. It also enables powerful restore capabilities such as Instant Recovery, which automates the instantiation and recovery of an Oracle database.
The Remote Adapter is available as an alternative to the Oracle Adapter for DBAs who wish
to have full control of the backup and recovery of their database environment. By writing or
reusing their own RMAN scripts, DBAs can set the Data Protect cluster as the target, which
allows Data Protect to not only catalog backups but take advantage of IBM Storage Defender
features like immutability and anomaly detection.
For recovery, a restore of the immutable snapshot is presented to RMAN over NFS as the repository for the backup sets. The advantage of this approach is that only a single full backup image is ever taken, eliminating the need for periodic full backups.
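The incremental-forever behavior described above can be modeled in a few lines: after the initial full image, each run merges only changed blocks into the level-0 copy, and an immutable snapshot preserves each merged state. This is a purely conceptual sketch with hypothetical names and structures, not the Data Protect implementation.

```python
# Conceptual model of the incremental-forever approach: one initial
# full image, then each run merges changed blocks into the level-0
# copy, and an immutable snapshot captures that state. Hypothetical
# structures; not the actual IBM Storage Defender implementation.

class Level0Copy:
    def __init__(self):
        self.blocks = {}          # block_id -> data
        self.full_backups = 0     # how many full images were ever taken
        self.snapshots = []       # immutable snapshots of merged state

    def run_backup(self, source_blocks):
        if not self.blocks:
            self.blocks = dict(source_blocks)   # one-time full image
            self.full_backups += 1
        else:
            changed = {b: d for b, d in source_blocks.items()
                       if self.blocks.get(b) != d}
            self.blocks.update(changed)         # merge the incremental
        self.snapshots.append(dict(self.blocks))  # immutable snapshot
```

However many backup runs occur, `full_backups` stays at 1, while each snapshot remains a complete, restorable image of the database at that run.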
Using the Oracle Adapter for backup and recovery requires an agent to be installed on each database host you intend to back up and on each host you intend to restore to.
To register an Oracle source host with IBM Storage Defender Data Protect, select the following options in the web GUI:
Data Protection
Sources
Register
Then select Databases
Oracle Source
Figure 3-1 on page 29 shows the Register Oracle Source dialog panel to specify the Oracle
host address and authentication type.
Data Protect detects and supports Oracle Block Change Tracking (BCT) to improve backup performance and reduce the backup window for incremental backups. The use of BCT avoids scanning the datafiles for changed blocks by collecting a record of changed blocks from Oracle via a log file.
Note: The Oracle Adapter agent mounts an NFS share from each Data Protect cluster node, then instructs RMAN to allocate channels to each share. For this reason, by default, the number of RMAN channels is set to be equal to the number of Data Protect nodes.
Once registered, you can customize the number of RMAN channels from the Protection
Group in the source options (Figure 3-2 on page 30).
Figure 3-2 Customize number of RMAN channels from Protection Group settings panel
Note: The agent sets the RMAN section size for datafiles to 200G to divide large Oracle
datafiles for parallel transfer to the Data Protect nodes.
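The effect of the 200G section size on parallelism comes down to simple arithmetic: each datafile is divided into ceil(size / section size) sections that can stream to different nodes concurrently. A quick sketch, assuming "200G" means 200 GiB; the function name is illustrative and the actual scheduling is internal to the agent.

```python
# Sketch: how a 200G RMAN section size divides large datafiles into
# independently transferable pieces. Assumes "G" means GiB here;
# illustrative arithmetic only, not the agent's internal logic.
import math

SECTION_SIZE = 200 * 1024**3   # 200 GiB, per the note above

def sections_for(datafile_bytes, section_size=SECTION_SIZE):
    """Number of RMAN sections a single datafile is divided into."""
    return max(1, math.ceil(datafile_bytes / section_size))
```

For example, a 1 TiB datafile splits into 6 sections, so it can be transferred by up to 6 channels in parallel rather than by a single stream.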
At the start of each backup, the Oracle Adapter runs an RMAN crosscheck and deletes
expired backups of the following items:
Controlfile
SPFile
Database
Next, the Oracle Adapter creates an incremental copy of the database files:
It allocates a channel for each Data Protect cluster node serving the NFS mounts for parallelism.
It creates an incremental datafile copy with a section size of 200G to parallelize large individual datafiles.
It flushes the current redo log to the archived redo logs.
The Oracle Adapter updates the level 0 copy of the database files with the incremental
updates:
recover copy of database with tag ‘cohesity_nnnnn’;
Finally, an immutable snapshot of the NFS share is taken by Data Protect. An example database backup command run by the agent can be found both in the agent logs, located under /var/log/cohesity/oracle_rman_logs/, and in the GUI under the Protection screen:
The separate Oracle Adapter log backup procedure runs on an independent schedule that you select to create archived redo log backup sets:
SPFile
Controlfile
backup force tag ‘cohesity_nnnnn’ archivelog from time ‘DD:MM:YYYY-HH:MM:SS’;
backup force tag ‘cohesity_nnnnn’ archivelog until time ‘DD:MM:YYYY-HH:MM:SS’ not
backed up 1 times;
Controlfile
1. Create an Oracle source with Data Protect, then create a Policy for daily incremental and periodic archived log backups:
Note: When configuring this, remember that a periodic full backup is not needed with the Oracle Adapter.
2. Next, create a policy to determine the schedule, which type of backup to perform (Full (not required for the Oracle Adapter), Incremental, or Log), and whether additional copies should be replicated and where.
Create a Policy for a daily incremental backup by selecting Data Protection / Policies /
Create Policy. Build a backup policy as shown in Figure 3-4 by providing a Policy name and
selecting the desired backup frequency and options:
Note: Periodic full backups are not required when performing backups using the Oracle
Adapter.
3. Once the policy is created, assign a Protection Group to the new Policy. Protection Groups determine the start time to execute the policy on the selected sources. Create the Protection Group by selecting the following:
Data Protection
Protection
Protect
Databases
Oracle Databases
Figure 3-6 on page 34 shows an example of setting up the Source, Policy and start time for the
backup:
4. Click Save. You can either wait for the scheduled run or select Run now from the Protection screen. Otherwise, the backup will run at the scheduled time.
To view details related to a backup run (Figure 3-7) select the following in the web GUI:
Data Protection
Protection
{Desired Policy Name}
From here you can review the run details of the backup policy including success or failure, run
times and size of the backups.
From the Oracle Server panel, select the object you wish to recover and continue to recovery
options.
Once the object is selected, select the desired recovery point (Figure 3-9 on page 36).
Recovery points can be viewed in either a list or a timeline view:
Next, choose the location where the data will be restored. The options are to restore to an
alternative DB or PDB, or to overwrite the original DB or PDB. It is also possible to perform
a rapid recovery that creates an NFS view with the DB files, or, with Instant Recovery, a
rapid recovery that instantiates the Oracle database in addition to creating the NFS view
with the DB files. With Instant Recovery, the background migration of datafiles can either
start immediately or be triggered manually later.
Note: A rapid recovery with an NFS view will instantly mount a snapshot of the DB files
and start the instance with the option to copy the DB files to the host in the background.
Finally, customize the parameters for the recovery DB (Figure 3-10 on page 37):
The Recovery wizard automatically generates a PFILE based on the source database’s
PFILE.
When the target host for the restore has different resource characteristics, there are some
important parameters to customize for the target host. These can be found in the generated
PFILE. The following settings should be reviewed and adjusted as needed on the target:
SGA_TARGET
DB_RECOVERY_FILE_DEST_SIZE
DB_CREATE_ONLINE_LOG_DEST_1
DB_RECOVERY_FILE_DEST
DB_CREATE_FILE_DEST
CONTROL_FILES
DB_WRITER_PROCESSES
MAX_DUMP_FILE_SIZE
PGA_AGGREGATE_TARGET
Below (Example 3-2 on page 38) is a sample of a customized PFILE generated for an Instant
Recovery of a large, 10 TiB Oracle database. The database comes from a host with many
processors and a large amount of RAM, and is being restored to a smaller host with modest
resources:
Figure 3-11 shows a GUI panel with options for instant recovery of a DB using the Oracle
Adapter and the ability to edit the generated PFILE:
You can restore to a different host than the source by selecting it from the drop-down list of
Oracle hosts. When performing a Disaster Recovery, it might be desirable to recover to the source
host, but in most cases you will want to recover an entire database to a different target host,
whether for testing or data reuse purposes, in addition to surgical restores.
Note: For a host to appear in the drop-down list, it must be registered with the Oracle
Adapter.
Further down the form you can add Shell Environment variables to pass to the recovery
process. One useful variable is SKIP_NID_STEP: when set to 1 (TRUE), the Oracle new ID
utility (NID) is not run. The NID utility assigns a new DBID to the recovered instance, which
is useful if you intend to keep the instance permanently and need RMAN to catalog both the
new instance and the source instance it was recovered from, because each must have a
unique DBID.
For temporary or isolated recovered database instances, however, this NID step is unnecessary and
can cost a lot of time for recoveries of large databases with a large number (>1,000) of datafiles.
Figure 3-12 Shell Environment variables in the Instant Recovery panel with Oracle Adapter
Note: For large databases with many datafiles, you can save recovery time by skipping the
NID utility step; the new DBID it would assign may be unnecessary, depending on your
intentions for the recovered instance.
Once the instant recovery is complete, the results of the Instant Recovery job for this 10 TiB
Oracle database to a new host can be reviewed in the job log. Figure 3-13 below shows an
example log file:
Figure 3-13 Instant Recovery job log using Oracle Adapter example
Example 3-3 shows the df output listing the NFS mounts for the snapshot DB files. These
mounts are automatically created on the target host as part of the recovery process when
running an Instant Recovery job:
4.7T 51%
/opt/cohesity/mount_paths/nfs_oracle_mounts/oracle_437150_23_path7
Example 3-4 shows that the online redo logs, change-tracking file, temp tablespace datafiles,
and FRA are written to the local storage locations specified in the PFILE:
Example 3-4 File locations for the redo logs and other files written to local storage during recovery
[oracle@oracle2 ~]$ find /pocdb -type f
/pocdb/orafra/fast_recovery_area/KEN1/KEN1/autobackup/2024_05_11/o1_mf_s_116869853
2_m3zg747o_.bkp
/pocdb/orafra/fast_recovery_area/KEN1/KEN1/autobackup/2024_05_11/o1_mf_s_116869978
2_m3zhg6o5_.bkp
/pocdb/orafra/fast_recovery_area/KEN1/KEN1/autobackup/2024_05_11/o1_mf_s_116870033
8_m3zhzm45_.bkp
/pocdb/orafra/KEN1/onlinelog/o1_mf_1_m3zgjpod_.log
/pocdb/orafra/KEN1/onlinelog/o1_mf_2_m3zgovrf_.log
/pocdb/orafra/KEN1/onlinelog/o1_mf_3_m3zgv1z2_.log
/pocdb/orafra/KEN1/onlinelog/o1_mf_4_m3zh0mp5_.log
/pocdb/orafra/KEN1/onlinelog/o1_mf_5_m3zh6hx9_.log
/pocdb/oradata/KEN1/changetracking/o1_mf_m3zg75df_.chg
/pocdb/oradata/KEN1/datafile/o1_mf_temp_m3zhk0mg_.tmp
/pocdb/oradata/KEN1/datafile/o1_mf_temp_m3zhk2hg_.tmp
/pocdb/oradata/KEN1/datafile/o1_mf_temp_m3zhk2n6_.tmp
Confirm that the ORACLE_SID specified in the PFILE is running and open (Example 3-5):
Example 3-5 Oracle commands to confirm instance creation and running status
[oracle@oracle2 ~]$ lsnrctl status
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=oracle2)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 19.0.0.0.0 - Production
Start Date 09-FEB-2024 11:57:24
Uptime 92 days 2 hr. 14 min. 5 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File
/u01/app/oracle/product/19.0.0/dbhome_1/network/admin/listener.ora
Listener Log File
/u01/app/oracle/diag/tnslsnr/oracle2/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=oracle2)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle2)(PORT=5500))(Security=(my_walle
t_directory=/u01/app/oracle/admin/KEN1/xdb_wallet))(Presentation=HTTP)(Session=RAW
))
Services Summary...
Service "KEN1" has 1 instance(s).
Instance "KEN1", status READY, has 1 handler(s) for this service...
Service "ff0b60e27b816bb9e05358672881609d" has 1 instance(s).
Instance "KEN1", status READY, has 1 handler(s) for this service...
Service "ibmpdb" has 1 instance(s).
Instance "KEN1", status READY, has 1 handler(s) for this service...
The command completed successfully
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
Session altered.
FILE_NAME
--------------------------------------------------------------------------------
/opt/cohesity/mount_paths/nfs_oracle_mounts/oracle_437150_23_path4/BKP_6_436557_
data_D-IBMDB_I-2755005093_TS-SYSTEM_FNO-9_ee2q58kh
/opt/cohesity/mount_paths/nfs_oracle_mounts/oracle_437150_23_path6/BKP_1_436557_
data_D-IBMDB_I-2755005093_TS-SYSAUX_FNO-10_eb2q58iu
/opt/cohesity/mount_paths/nfs_oracle_mounts/oracle_437150_23_path6/BKP_7_436557_
data_D-IBMDB_I-2755005093_TS-UNDOTBS1_FNO-11_e82q5890
/opt/cohesity/mount_paths/nfs_oracle_mounts/oracle_437150_23_path1/BKP_2_436557_
data_D-IBMDB_I-2755005093_TS-USERS_FNO-12_ej2q58l2
FILE_NAME
--------------------------------------------------------------------------------
/opt/cohesity/mount_paths/nfs_oracle_mounts/oracle_437150_23_path3/BKP_7_436557_
data_D-IBMDB_I-2755005093_TS-IBMPOCTAB01_FNO-333_al2q4mrk
/opt/cohesity/mount_paths/nfs_oracle_mounts/oracle_437150_23_path3/BKP_0_436557_
data_D-IBMDB_I-2755005093_TS-IBMPOCTAB02_FNO-334_am2q4mrk
/opt/cohesity/mount_paths/nfs_oracle_mounts/oracle_437150_23_path6/BKP_1_436557_
data_D-IBMDB_I-2755005093_TS-IBMPOCTAB03_FNO-335_an2q4mrk
/opt/cohesity/mount_paths/nfs_oracle_mounts/oracle_437150_23_path0/BKP_2_436557_
…….
MIB
----------
9975036.75
SQL> Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 -
Production
Version 19.3.0.0.0
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
COUNT(1)
----------
90
COUNT(1)
----------
13469880
SQL> Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 -
Production
Version 19.3.0.0.0
Once the instant restore is initiated, the datafiles continue to reside on the NFS mounts until
“migrate” is selected, or unless you selected Instant Migration. Figure 3-14 shows an example
of the migration options available for mounted Recoveries.
When finished with the Instant Recovery database, first select the Teardown option on the
Recovery, then clean up the admin/diag and fast_recovery_area directories of your target host.
Once Teardown is selected, the recoveries page, as shown in Figure 3-15, updates to show
that the NFS paths have been unmounted and the database is destroyed.
Example 3-6 df output showing removal of temporary mounts completed after teardown
[oracle@oracle2 ~]$ df -Th
Filesystem Type Size Used Avail Use% Mounted on
devtmpfs devtmpfs 32G 0 32G 0% /dev
tmpfs tmpfs 32G 0 32G 0% /dev/shm
tmpfs tmpfs 32G 474M 31G 2% /run
tmpfs tmpfs 32G 0 32G 0% /sys/fs/cgroup
/dev/mapper/rhel-root xfs 36G 16G 20G 44% /
/dev/sda1 xfs 1014M 183M 832M 19% /boot
/dev/mapper/orafra-lv1 xfs 8.0T 36M 8.0T 1% /pocdb/orafra
Verify the database instance is no longer running and the local datafiles are gone:
Figure 3-16 shows an example of the PDB recovery options in Database recovery panel.
Figure 3-16 Recovery of pluggable database (PDB) options with Oracle Adapter
When choosing to restore data to an alternate DB or PDB, select the target server from the
drop-down menu:
To restore only the database archive logs, select log sequence to restore by selecting:
Data Protection
Recoveries
Recover
Databases
Oracle
Archive Logs
As demonstrated above, the Oracle Adapter is a great choice for automating the backup and
recovery of Oracle databases without the need to maintain custom RMAN scripts or run
manual Oracle commands to restore.
When choosing to perform restores via the Remote Adapter, first create a Policy that matches
your RMAN script requirements. In this example, we want to retain one week of backups.
When running differential or cumulative incremental backups that require a periodic full backup,
be sure to keep your full backup an extra week so that the prior week’s incrementals can be
restored, and also keep an extra week of differential incrementals if you are not using
cumulative incrementals, because differential RMAN restores must be applied in sequence
from the last full backup that preceded them. Also select archive log backups in your Policy to
create the script input field in the Protection Group.
Next, create a Protection group for the Oracle Remote Adapter based backups (Figure 3-20
on page 50) by selecting:
Data Protection
Protection
Protect
then select Remote Adapter
Figure 3-20 Protection group creation example for Remote Adapter backup
Enter the IP Address or Hostname of the Oracle host to generate an SSH Public Key. This key
must be copied to your host to allow the connection between Data Protect and the
Oracle host.
Once generated, copy the Cluster SSH Public Key to the file
/home/oracle/.ssh/authorized_keys on the Oracle host and make the file readable only by its
owner:
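The key installation above can be sketched as follows. The function is parameterized so it can be exercised against a test directory; on the real host the first argument would be /home/oracle, and the key file name is a placeholder.

```shell
# Sketch: install the copied cluster public key with owner-only permissions.
add_cluster_key() {
  # $1 = oracle user's home directory, $2 = file containing the public key
  install -d -m 700 "$1/.ssh"                 # create ~/.ssh, owner-only
  cat "$2" >> "$1/.ssh/authorized_keys"       # append the cluster key
  chmod 600 "$1/.ssh/authorized_keys"         # readable only by the owner
}

# On the real Oracle host, for example:
# add_cluster_key /home/oracle /tmp/cluster_ssh_key.pub
```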
In the Protection Group settings for the Remote Adapter (Figure 3-22 on page 52), fill in the
fields for the location and parameters of the incremental, full, and archive log RMAN backup
scripts that you wrote and placed on your database host.
Figure 3-22 Protection group creation example for Remote Adapter backup
It is likely that the View permissions (Figure 3-21 on page 40) will need to be customized to
allow an NFS mount to be created on the host. If you are using an older version of Oracle that
requires an older Linux version, you may also need to set the protocol to NFS version
3 rather than 4.1. To edit the View that was created for the Remote Adapter, navigate
to your Defender URL and append /platform/views, for example:
https://usea-prod.storage-defender.ibm.com/platform/views
Figure 3-23 Editing the View settings for Oracle backup target settings
To use your View, you must mount the NFS View on the backup source:
Example 3-10 creating mount locations for data protect view on host
mkdir /mnt/ora-1-1 /mnt/ora-1-2 /mnt/ora-1-3 /mnt/ora-1-4 /mnt/ora-1-5
/mnt/ora-1-6 /mnt/ora-1-7 /mnt/ora-1-8
Next, you will need to know the Virtual IP Addresses of your Data Protect cluster nodes for
the NFS mounts on your host:
Settings
Networking
VIPs
Figure 3-24 on page 54 shows an example of the networking page to collect this information.
Figure 3-24 Data Protect cluster node VIPs for NFS mounts
Add the following to /etc/fstab to mount automatically on reboot. You must specify the NFS
option _netdev in fstab to avoid a panic on boot if the NFS server is not available:
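A minimal sketch of such fstab entries follows. The VIP, view name, and NFS mount options shown are illustrative only; substitute the values for your own cluster and host. The _netdev option defers mounting until networking is up, so an unreachable NFS server cannot hang the boot.

```shell
# Sketch: append one /etc/fstab line per Data Protect VIP and mount point.
add_fstab_entry() {
  # $1 = cluster VIP, $2 = view export path, $3 = mount point, $4 = fstab file
  echo "$1:$2 $3 nfs _netdev,hard,proto=tcp,vers=3 0 0" >> "$4"
}

# Hypothetical VIP and view name:
# add_fstab_entry 10.0.0.11 /ora-rman-view /mnt/ora-1-1 /etc/fstab
```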
Mount the NFS View using the ‘mount -a’ command, then verify that the NFS View is mounted correctly:
Create a subfolder on the NFS View with appropriate permissions so that the oracle user can
write backup files to it:
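The subfolder creation can be sketched as below. The owner, subfolder name, and mode are example choices, not requirements of the product.

```shell
# Sketch: create a backup subfolder on the mounted View and give the
# oracle user write access.
make_backup_dir() {
  # $1 = mounted view path, $2 = subfolder name, $3 = owner (user:group)
  mkdir -p "$1/$2"
  chown "$3" "$1/$2"     # hand the folder to the oracle user
  chmod 770 "$1/$2"      # writable by owner and group only
}

# On the real host, for example:
# make_backup_dir /mnt/ora-1-1 rman_backups oracle:oinstall
```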
Now you are ready to start writing backups to your target View location.
For reference, the following are the RMAN full, incremental and archived redo log backup
scripts used in this Remote Adapter example:
ORACLE_SID=DEMODB
ORACLE_HOME=/u01/app/oracle/product/19.3.0.0.0/dbhome_1
PATH=$PATH:$ORACLE_HOME/bin
export ORACLE_HOME ORACLE_SID
c=1
while [ ! -z "$4" ]; do
CHANNELS="$CHANNELS allocate channel c$c device type disk
format '$4/%U';
"
shift
c=$(($c + 1))
done
ORACLE_SID=DEMODB
ORACLE_HOME=/u01/app/oracle/product/19.3.0.0.0/dbhome_1
PATH=$PATH:$ORACLE_HOME/bin
export ORACLE_HOME ORACLE_SID
c=1
while [ ! -z "$4" ]; do
CHANNELS="$CHANNELS allocate channel c$c device type disk
format '$4/%U';
"
shift
c=$(($c + 1))
done
ORACLE_SID=DEMODB
ORACLE_HOME=/u01/app/oracle/product/19.3.0.0.0/dbhome_1
PATH=$PATH:$ORACLE_HOME/bin
export ORACLE_HOME ORACLE_SID
c=1
while [ ! -z "$4" ]; do
CHANNELS="$CHANNELS allocate channel c$c device type disk
format '$4/%U';
"
shift
c=$(($c + 1))
done
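The channel-allocation loop that appears in each of the three scripts above can be understood as follows: positional parameters from the fourth one onward are treated as mount directories, and one RMAN disk channel is allocated per directory so that backup pieces are spread across all the NFS mounts. The sketch below restates that loop as a testable function; the assumption that the first three arguments carry other script parameters is ours, inferred from the use of "$4".

```shell
# Sketch: build the CHANNELS fragment from args 4..N (one per NFS mount).
build_channels() {
  shift 3                      # skip the first three script arguments
  c=1
  CHANNELS=""
  while [ ! -z "$1" ]; do
    # one disk channel per mount directory; %U makes piece names unique
    CHANNELS="$CHANNELS allocate channel c$c device type disk format '$1/%U';"
    c=$(($c + 1))
    shift
  done
  printf '%s' "$CHANNELS"
}
```

The resulting string is then spliced into the RMAN run block so that a backup across, say, eight mounts writes through eight parallel channels.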
You can also create a Clone View from the Test & Dev screen (Figure 3-25).
To create the clone view from the Recovery menu screen, select the following options in the
Data Protect GUI:
1. Select Data Protection
2. Select Recoveries
3. Select Cohesity view
4. Select Clone View
5. Finally, search for the name of the View you want to clone (Figure 3-26)
Once the specific View to clone is selected, the Clone View panel opens (Figure 3-27),
allowing options for the Clone View to be customized:
As shown in Figure 3-28 on page 59, select from the options presented to customize the
desired point in time from which to create the Clone View:
After selecting the desired point in time, click Save and the Clone View is created. Next,
manually mount the NFS view on the target host and proceed with the DB recovery using
RMAN or any custom scripts you have written.
Example 3-17 shows the manual creation of mount points for the Clone View on the target host:
mount -a
Confirm that the NFS mount points of the Clone View are attached (Example 3-18):
Step 1: Capture the DBID of the original database that we are restoring to the new host.
This can be done either by finding the DBID in the job log, in the messages for each backup
of the source database, as shown in Figure 3-29, or by connecting with RMAN to the source
database that was backed up, as shown in Example 3-19.
Note: Depending on the RMAN settings, a control file backup file name may contain the
DBID as well (e.g. cfc-4137911356-20240513-09).
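Following the Note above, the DBID can be recovered from such a controlfile autobackup name. This small sketch assumes the cfc-&lt;DBID&gt;-&lt;YYYYMMDD&gt;-&lt;seq&gt; naming shown in the Note; the naming depends on your RMAN format string, so verify it against your own backup pieces.

```shell
# Sketch: extract the second dash-separated field (the DBID) from a
# controlfile autobackup name of the form cfc-<DBID>-<YYYYMMDD>-<seq>.
dbid_from_autobackup() {
  echo "$1" | cut -d- -f2
}

# Example with the name from the Note above:
# dbid_from_autobackup cfc-4137911356-20240513-09
```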
Step 2: Create a database parameter file (PFile) on the target host with your desired instance
name, set as both DB_UNIQUE_NAME in your PFile and as your ORACLE_SID environment
variable; you must also set DB_NAME and the DBID from the database you want to restore:
This example is customized for our target host for a restore of DBNAME=DEMODB with
DBID 4137911356, where we have chosen an ORACLE_SID instance name of KEN2.
Your customizations for your target host may vary depending on the exact environment you
are restoring to. You could also simply save a copy of your PFile as part of your RMAN
backup script, or copy it directly from the source host and edit it by hand.
Example 3-20 Customizing the KEN2 Oracle DEMODB restore parameter file config for the target host
DBNAME=DEMODB
DATADIR=/demodb/oradata
LOGDIR=/demodb/orafra
ORACLE_SID=KEN2
export ORACLE_SID
PFILE=$ORACLE_HOME/dbs/init$ORACLE_SID.ora
SGA=$(free | head -2 | tail -1 | awk '{
printf("%dG", $2/1024/1024/2/1.5)
}')
PGA=$(free | head -2 | tail -1 | awk '{
printf("%dG", $2/1024/1024/2/1.5/2)
}')
THREADS=$(grep -c processor /proc/cpuinfo)
echo "db_name='$DBNAME'
memory_target=0
processes = 1000
parallel_max_servers=$(($THREADS * 20))
db_block_size=8192
db_domain=''
db_recovery_file_dest='$LOGDIR/fast_recovery_area'
db_recovery_file_dest_size=200G
diagnostic_dest='$ORACLE_BASE'
dispatchers='(PROTOCOL=TCP) (SERVICE=${ORACLE_SID}XDB)'
open_cursors=500
remote_login_passwordfile='EXCLUSIVE'
undo_tablespace='UNDOTBS1'
# You may want to ensure that control files are created on separate physical
# devices
control_files = ($DATADIR/$ORACLE_SID/controlfile/${ORACLE_SID}_control1,
$LOGDIR/$ORACLE_SID/controlfile/${ORACLE_SID}_control2)
compatible ='19.0.0'
db_create_file_dest='$DATADIR'
db_create_online_log_dest_1='$LOGDIR'
enable_pluggable_database=TRUE
db_unique_name=$ORACLE_SID
filesystemio_options=setall
db_writer_processes=$THREADS
db_files=1024
max_dump_file_size=2G
recyclebin=off
sga_target=$SGA
pga_aggregate_target=$PGA" >$PFILE
Example 3-21 shows the resulting initKEN2.ora PFile generated by the script in
Example 3-20:
# devices
control_files=(/demodb/oradata/KEN2/controlfile/KEN2_control1,
/demodb/orafra/KEN2/controlfile/KEN2_control2)
compatible ='19.0.0'
db_create_file_dest='/demodb/oradata'
db_create_online_log_dest_1='/demodb/orafra'
enable_pluggable_database=TRUE
db_unique_name=KEN2
filesystemio_options=setall
db_writer_processes=16
db_files=1024
max_dump_file_size=2G
recyclebin=off
sga_target=5G
pga_aggregate_target=2G
Create the required adump, BCT (for both the new SID and, temporarily, the source
DBNAME) and FRA directories for your new instance in advance of the restore attempt, to
avoid RMAN failing to open the new database instance:
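The directory preparation can be sketched as below. The layout mirrors the example PFILE in this section (KEN2 restoring DEMODB under /demodb and /u01); the root-prefix argument is an artifice of this sketch so the function can be exercised against a temporary directory rather than /.

```shell
# Sketch: pre-create the audit (adump), block change tracking (BCT), and
# FRA directories the restored instance will need.
make_restore_dirs() {
  # $1 = root prefix ("" on the real host), $2 = new SID, $3 = source DBNAME
  mkdir -p "$1/u01/app/oracle/admin/$2/adump" \
           "$1/demodb/oradata/$2/changetracking" \
           "$1/demodb/oradata/$3/changetracking" \
           "$1/demodb/orafra/fast_recovery_area"
}

# For this chapter's example restore:
# make_restore_dirs "" KEN2 DEMODB
```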
Example 3-24 shows the restored DEMODB database on our target host with instance
KEN2. Note that non-OMF datafile names need to be renamed manually.
/demodb/oradata/KEN2/FFFC6A9752B245FBE055025056B152C9/datafile/o1_mf_temp_m3zhnf
wb_.tmp
/demodb/oradata/DEMODB/datafile/ibmpoc01.dbf
/demodb/oradata/DEMODB/datafile/ibmpoc02.dbf
/demodb/oradata/DEMODB/datafile/ibmpoc03.dbf
/demodb/oradata/DEMODB/datafile/ibmpoc04.dbf
/demodb/oradata/DEMODB/datafile/ibmpoc05.dbf
/demodb/oradata/DEMODB/datafile/ibmpoc06.dbf
/demodb/oradata/DEMODB/datafile/ibmpoc07.dbf
/demodb/oradata/DEMODB/datafile/ibmpoc08.dbf
/demodb/oradata/DEMODB/datafile/ibmpoc09.dbf
/demodb/oradata/DEMODB/datafile/ibmpoc10.dbf
/demodb/orafra/KEN2/controlfile/KEN2_control2
/demodb/orafra/KEN2/onlinelog/o1_mf_1_m3zhlc6c_.log
/demodb/orafra/KEN2/onlinelog/o1_mf_2_m3zhlc76_.log
/demodb/orafra/KEN2/onlinelog/o1_mf_3_m3zhlc85_.log
/demodb/orafra/KEN2/onlinelog/o1_mf_4_m3zhlc91_.log
/demodb/orafra/KEN2/onlinelog/o1_mf_5_m3zhlc9w_.log
[oracle@sts-pok-rhel7-oracle-4 ~]$ export ORACLE_SID=KEN2
[oracle@sts-pok-rhel7-oracle-4 ~]$ sqlplus /nolog
SQL*Plus: Release 19.0.0.0.0 - Production on Sat May 11 15:00:01 2024
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle. All rights reserved.
Session altered.
SQL> startup;
Pluggable Database opened.
SQL> alter pluggable database demopdb save state;
CON_NAME
------------------------------
DEMOPDB
/demodb/oradata/KEN2/FFFC432AA58638E4E055025056B152C9/datafile/o1_mf_undotbs1_
m3zgplm4_.dbf
/demodb/oradata/KEN2/FFFC432AA58638E4E055025056B152C9/datafile/o1_mf_undotbs1_
m3zsdbf8s_.dbf
/demodb/oradata/KEN2/FFFC432AA58638E4E055025056B152C9/datafile/o1_mf_undotbs1_
m3zis9skm_.dbf
/demodb/oradata/KEN2/FFFC6A9752B245FBE055025056B152C9/datafile/o1_mf_users_m3z
hkkdf_.dbf
FILE_NAME
/demodb/oradata/DEMODB/datafile/ibmpoc01.dbf
/demodb/oradata/DEMODB/datafile/ibmpoc02.dbf
/demodb/oradata/DEMODB/datafile/ibmpoc03.dbf
/demodb/oradata/DEMODB/datafile/ibmpoc04.dbf
/demodb/oradata/DEMODB/datafile/ibmpoc05.dbf
/demodb/oradata/DEMODB/datafile/ibmpoc06.dbf
/demodb/oradata/DEMODB/datafile/ibmpoc07.dbf
/demodb/oradata/DEMODB/datafile/ibmpoc08.dbf
/demodb/oradata/DEMODB/datafile/ibmpoc09.dbf
/demodb/oradata/DEMODB/datafile/ibmpoc10.dbf
14 rows selected.
SQL>
[oracle@sts-pok-rhel7-oracle-4 ~]$ lsnrctl status
Connecting to
(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=sts-pok-rhel7-oracle-4)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 19.0.0.0.0 - Production
Start Date 16-OCT-2024 08:58:41
Uptime 5 days 1 hr. 27 min. 0 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File
/u01/app/oracle/product/19.0.0/dbhome_1/network/admin/listener.ora
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=sts-pok-rhel7-oracle-4)(PORT=5500))(Sec
urity=(my_wallet_directory=/u01/app/oracle/admin/KEN2/xdb_wallet))(Presentation=HT
TP)(Session=RAW))
Services Summary...
Service "KEN2" has 1 instance(s).
Instance "KEN2", status READY, has 1 handler(s) for this service...
Service "KEN2XDB" has 1 instance(s).
Instance "KEN2", status READY, has 1 handler(s) for this service...
Service "demopdb" has 1 instance(s).
Instance "KEN2", status READY, has 1 handler(s) for this service...
Service "fffc6a9752b245fbe055025056b152c9" has 1 instance(s).
Instance "KEN2", status READY, has 1 handler(s) for this service...
SQL> Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 -
Production
Version 19.3.0.0.0
The above example illustrates the steps for a typical standalone Oracle database restore and
recovery from backup to a new host. The Remote Adapter gives complete flexibility in how
you choose to back up and restore your particular environment, and is a good choice for
experienced DBAs who require this level of control.
In contrast, the Oracle Adapter, as shown in the previous section 3.3, requires no detailed
knowledge of RMAN and automates the Oracle commands needed to create and recover the
database; for most environments it is the more suitable choice compared to the Remote
Adapter.
The IBM Data Protect Cluster solution for Active Directory (AD) includes many features that
make your backups much more valuable, including:
Flexibility:
IBM Data Protect Cluster gives you the ability to browse and search across all your
snapshots, and to restore the desired data to different locations on different servers.
The agent is lightweight and has a small memory footprint. It carries out the tasks that are
defined in the IBM Data Protect Cluster Protection Group. The agent ties technologies and
capabilities already in Windows, like Windows VSS, together with new technologies, like
Changed Block Tracker (CBT), which allows the system to tackle data management more
efficiently (Figure 4-1 on page 71).
The Volume CBT (Changed Block Tracker) component is required to perform incremental
backups and requires a reboot. Until you reboot, you can only perform volume-based full
backups.
2. Register Active Directory as a data source with IBM Defender Data Protect.
From the Data Protection web GUI, select the following from the left-hand menu:
Sources
Register
Applications
Active Directory
4. Select the desired Active Directory source to complete the registration process
Once presented with the Active Directory panel, configure the desired options for the backup
as shown in Figure 4-6 on page 73.
4. Search and Select the Active Directory Object to restore [4]
5. Click on Protection Group and provide a meaningful protection name [5]
With the Agent installed, the option exists to restore specific objects from the Active Directory
DB rather than just performing a full restore. This is referred to as a Granular recovery.
6. The Browse Snapshot feature (Recover AD) will mark the differences between the backup
set objects (snapshot) and the live Active Directory objects (Figure 4-9) [6]
7. You can use the search bar for text searches [7]. In this example the account ‘co-operator’
is missing [8], so we click Recover [9].
Best Practice: Search queries are executed against the currently selected entity hierarchy
level. To search the entire hierarchy, ensure the top level is selected before running a
search.
Once the password is set, select the recovery button to begin the ‘Recover’ process.
9. After a recovery, the status of the AD object will display as “Recovered” if the process was
successful.
With the removal of the Client Access server role in Exchange 2016, only the Mailbox server
role is supported with Data Protect. The Mailbox server role hosts the on-premises recipient
mailboxes and communicates with the Exchange Online organization by proxy via the
on-premises Client Access server. By default, a dedicated Send connector is configured on
the Mailbox server role to support secure hybrid mail transport.
The IBM Data Protect agent service should log on as a specific AD account (not Local
System or a local computer account such as the local administrator) that has sufficient
privileges to run Exchange Management PowerShell and to query AD for Exchange objects.
Exchange Management PowerShell is required for executing the following cmdlets to
get the Exchange server, DAG, and database topologies:
Get-ExchangeServer
Get-MailboxDatabase
Get-Mailbox
Get-DatabaseAvailabilityGroup
Ensure the following requirements are met to register and back up Exchange Servers:
Service account permissions:
– Is a member of the Backup Operators group on AD Domain
– Is a member of the Exchange Servers and Organization Management groups under
the Microsoft Exchange Security Groups Organizational Unit
– Is a member of the Local Administrators group on the Exchange Server
– The Exchange server must have joined the same AD domain as the IBM Data Protect
cluster for SMB authentication.
Software Prerequisites:
– IBM Data Protect Agent on the Microsoft Exchange Servers
– Exchange mailbox recovery tooling
– A Windows server with 32-bit Microsoft Outlook installed
Best Practice: IBM recommends installing the Exchange recovery tooling on a remote
management server and not on the Exchange server itself.
Microsoft Exchange also offers the following built-in data loss prevention options:
Deleted item retention:
Whenever a user permanently deletes items in their mailbox database, these items are not
purged immediately. Depending on the deleted item retention setting of the Mailbox Database
(default 14 days), the deleted items are kept in the Mailbox Database and remain available for
self-service restores.
Deleted User retention:
Comparable to deleted item retention, user mailboxes that are deleted from a Mailbox
Database are still kept in that Mailbox Database for a specific number of days (default 30
days).
Database availability groups:
Database availability groups are a great feature for avoiding service interruption if a Mailbox
Server needs downtime, becomes corrupted, or is even lost. In such cases, the Mailbox database is
activated on another copy and users can access their mailboxes without any interruption.
IBM Data Protect Cluster adds data protection capabilities that can be used whenever the
built-in solutions are not sufficient, or in case of a disaster.
Note: If the IBM Data Protect (Cohesity) agent is not installed yet, click the
‘Download Cohesity Protection Service’ link to download and install the agent. If required,
ensure that the agent has been copied over to the appropriate server.
As an AD Domain Admin, run the executable and complete the installation wizard.
Figure 5-2 shows the add-on component options when installing the IBM Storage Defender
(Cohesity) Agent.
Volume CBT (Changed Block Tracker): Install this component for the best incremental
backup performance. Installing this component requires a one-time reboot to load the IBM
Data Protect Volume CBT driver.
File System CBT (Changed Block Tracker): the reboot is not required, but is
recommended.
Service Account Credentials: The service can run as the “Local System” account with
Exchange admin credentials.
Once the agent is installed, select the following options to configure protection for the
Exchange Server Databases:
1. Expand to Data Protection
2. Select Protection
3. Click Protect
4. Click on Add Object to select the already registered Exchange Server [4]
5. Click on Protection Group and provide a meaningful protection name [5]
6. Click on Policy and select an existing SLA policy [6]
7. Click on Storage Domain [7]
8. Click the ‘Protect’ button on the bottom right corner to finish and trigger the protection job
A recovery database (RDB) is a special kind of mailbox database that allows for the
temporary mount of a restored mailbox database to extract data from the restored database
as part of a recovery operation. You can use the ‘New-MailboxRestoreRequest’ cmdlet to
extract data from an RDB. After extraction, the data can be exported to a folder or merged into
an existing mailbox. The use of an RDB enables the recovery of data from a backup or copy
of a database, without disturbing user access to current data.
6. Select the desired Exchange Mail Database to recover, then click Next. For more
recovery options, fill in the DNS record field with the address of the server where the
recovery tool is running [6].
9. After the recovery is no longer needed, select the Tear Down button on the Recovery view
panel to remove the view.
10.Using the mklink command, create a hard link to the mounted Exchange data location that
can then be managed with PowerShell or the recovery tooling.
[PS] C:\Windows\system32>Get-MailboxDatabase
Name Server Recovery ReplicationType
---- ------ -------- ---------------
Mailbox Database 1926874835 W2016-CSM01 False None
SMTP W2016-CSM01 False None
[PS] C:\Windows\system32>Get-MailboxDatabase
Name Server Recovery ReplicationType
---- ------ -------- ---------------
Mailbox Database 1926874835 W2016-CSM01 False None
SMTP W2016-CSM01 False None
RDB01 W2016-CSM01 True None
11.Confirm the required data was successfully mounted and is accessible via the Exchange
recovery tools.
Figure 5-7 Recovered mail information shown with Exchange recovery tools
12.Once the data is confirmed as being accessible, continue to use the Exchange recovery
tools to access and restore any individual Exchange objects.
PostgreSQL comes with many features aimed to help developers build applications, protect
data integrity, build fault-tolerant environments and help administrators manage data no
matter how big or small the dataset.
This chapter explains the features that IBM Storage Defender Data Protect brings to secure
PostgreSQL databases on x86-64 platforms.
Supported releases:
– EDB Postgres: 11.x, 12.x, and 14.x
– Red Hat Enterprise Linux (RHEL): 7.x and 8.x
Port 50051, TCP/IP, bidirectional between the PostgreSQL host (Local Agent) and each
Data Protect cluster node. Required for backup and recovery.
Note: Port 59999 is required when your PostgreSQL deployment comprises multiple
nodes (such as a high availability configuration).
When planning to use a non-root user to perform the agent and connector installation, add
the following line to the /etc/sudoers configuration file:
Example 6-1 Sudo privileges for non-root user required to install local PostgreSQL connector
cohesityagent ALL=NOPASSWD:SETENV: /bin/chmod, /bin/chown, /bin/mkdir, /bin/rm,
/bin/psql, /usr/bin/ps, /usr/sbin/runuser, /bin/java, /usr/bin/netstat
Table 6-3 List of required packages and commands used by the Linux Agent
The following commands are required on all supported distributions (RHEL, SUSE,
CentOS, Ubuntu, and Debian): cp, rm, ls.
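The presence of these commands can be verified up front with a short shell loop (a minimal sketch; extend the list to match the full command requirements for your distribution):

```shell
# Verify that the commands required by the Linux Agent are present on the host.
# Extend the list to match the full requirements for your distribution.
for c in cp rm ls; do
  if command -v "$c" >/dev/null 2>&1; then
    echo "$c: found"
  else
    echo "$c: MISSING"
  fi
done
```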
Note: nfs-utils is required for Instant Volume Mount, file/folder recovery from block-based
backups, and VMware backups.
Note: Ensure that the root or non-root user used to install the Linux agent and PostgreSQL
Connector has read access to the generated certificate config file.
3. While registering the PostgreSQL source, provide the path of the certificate config file in
the SSL Settings field.
As part of the initial registration, one of the scripts relies on the jq command to parse the
output of some local commands. jq might not be installed by default on your Linux platform.
Refer to your operating system documentation to install the jq package.
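The following sketch checks whether jq is already available before registration (the install hint mentions dnf as an example; use the package manager for your distribution):

```shell
# Check whether jq is available; print its version or a reminder to install it
if command -v jq >/dev/null 2>&1; then
  jq --version
else
  echo "jq not installed - install it with your package manager (e.g. dnf install jq)"
fi
```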
To change this, you must modify the default settings in the PostgreSQL configuration file,
generally located in the database directory, for example
/var/lib/pgsql/16/data/postgresql.conf. Find the line containing the parameter
listen_addresses and change the value to allow non-local connections.
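For example, to accept connections on all interfaces (a common setting for backup traffic on a trusted network; adjust the value to match your security policy), the relevant line would look like this:

```
# /var/lib/pgsql/16/data/postgresql.conf
listen_addresses = '*'     # or a comma-separated list of specific addresses
```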
For additional security, you can configure the local Linux firewall to allow only a specific IP
address, or set of addresses, to connect remotely to the PostgreSQL database.
Once the value is changed, restart the PostgreSQL service using the operating system
systemctl command.
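A minimal sketch of these two operations, assuming a PostgreSQL 16 instance managed by systemd and the firewalld firewall (the service unit name and the cluster IP address are illustrative and vary by environment):

```shell
# Restart PostgreSQL so the new listen_addresses value takes effect
sudo systemctl restart postgresql-16
# Optionally restrict remote access to the Data Protect cluster address only
sudo firewall-cmd --permanent \
  --add-rich-rule='rule family="ipv4" source address="192.0.2.10" port port="5432" protocol="tcp" accept'
sudo firewall-cmd --reload
```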
Configuration requires multiple steps, which are summarized in the figure below.
The left side of the figure shows the configuration steps to be executed within the host where
the database is running.
The right side of the figure shows the configuration steps to be performed using the IBM
Storage Data Protect graphical user interface and its configuration wizards.
Later in this chapter we go into the details of each step, numbered from 1 to 5 in the figure
below (Figure 6-1).
Step 1: Download and install the Linux Agent
Step 2: Download and install the PostgreSQL Connector
Step 3: Register the PostgreSQL host machine as a data source in Data Protect
Step 4: Create a Protection Group and a Protection Policy
Step 5: Perform backup and recovery activities as required
The following recovery methods are available, either on the same host or to an alternate host:
Full recovery (Regular and Instant Restore methods are available)
Point in time recovery
This recommended backup schema is illustrated in 6.4, "Practical deployment example"
on page 98, showing how it translates into a Defender Data Protect Protection Policy.
In the case of the HA cluster, the backup is executed from the active node only.
In case of a failover in the HA cluster, the next backup run will be a full backup.
Restore across different PostgreSQL versions is not supported.
For optimum log backup performance, it is recommended to set the WAL size to 1 GB or
higher.
When the incremental backup chain is broken, a point-in-time recovery (PITR) is possible
only after a subsequent successful full or incremental backup is completed.
Data Protect supports backup and restore of PostgreSQL databases running on
dual-stack (IPv4 and IPv6) mode or single-stack (only IPv6) mode.
Note that the backup schema FULL + INCREMENTAL + LOG is the recommended strategy.
If log backups are enabled, Defender Data Protect triggers them automatically after the FULL
or INCREMENTAL backup operations. LOG backups also run per the LOG schedule defined
in the Defender Data Protect protection policy, not necessarily after a database backup.
Full backup
The first backup will always be a FULL backup. Beyond the first backup, it is important to take
regular FULL backups, as PostgreSQL relies on a FULL backup to be able to recover any
database.
When a backup is triggered, some checks are performed before actually transferring the
data. The Data Protect PostgreSQL connector ensures that the database is in a correct state
to perform the backup: it checks whether a recovery is ongoing, where the logs are, and
where the data files are on the local host. The following list shows the pre-backup queries
that ensure the database is in an appropriate state for backup and that allow the PostgreSQL
connector to gather the information required to properly configure the backup command.
Example 6-5 Pre-backup queries executed by the PostgreSQL connector
QUERY : select pg_is_in_recovery()
QUERY : checkpoint
QUERY : show log_directory
QUERY : show data_directory
Command : /usr/sbin/runuser -l postgres -c /usr/pgsql-16/bin/pg_ctl -V
Exit code = 0
QUERY : SELECT system_identifier FROM pg_control_system()
QUERY : select substring(pg_walfile_name(pg_current_wal_lsn()), 1, 8) as timeline
QUERY : show archive_mode
QUERY : show archive_command
The database backup is an actual file copy, from the host where the database is located to
a specific location (in the SpanFS structure) on the Data Protect cluster. The file transfer
happens over the gRPC protocol. All files identified by the initial backup process are
transferred to the Data Protect cluster local storage, into a specific dedicated view that is
snapshotted at the end of the data transfer, thereby creating a specific point-in-time copy of
the database.
This instruction prepares the server to begin an online backup. The only required parameter
is an arbitrary user-defined label for the backup; in the case of Data Protect, it is a backup
start timestamp. The second parameter, when set to true, specifies executing
pg_backup_start as quickly as possible. This forces an immediate checkpoint, which
causes a spike in I/O operations and can slow concurrently executing queries.
At the end of the backup, the pg_backup_stop(true) statement is used. It informs the
system that it can perform the tasks required to finish an online backup. Specifying the
true argument in the pg_backup_stop call means the call waits for the WAL to be archived
when archiving is enabled.
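The sequence described above can be reproduced manually for illustration (a sketch, assuming PostgreSQL 15 or later, where these functions replace the older pg_start_backup/pg_stop_backup; Data Protect itself performs the file transfer between the two calls):

```sql
-- Both calls must run in the same database session.
-- Begin an online backup; the label is arbitrary, and true forces an
-- immediate (fast) checkpoint.
SELECT pg_backup_start('backup-2024-06-19', true);
-- ... the data files are copied to the backup destination at this point ...
-- Finish the backup; true waits for the WAL to be archived when
-- archiving is enabled.
SELECT * FROM pg_backup_stop(true);
```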
Incremental Backup
Incremental backup uses the same workflow as the FULL database backup when it starts,
getting the information on the database paths used to locate the data files.
The PostgreSQL adapter then compares the inventory of the current data files with
the last backup.
Once identified, the files that have changed (length and/or last-updated timestamp) are
transferred over gRPC to Data Protect local storage, and the incremental backup point-in-time
copy is made using the snapshot feature of SpanFS.
As for the FULL backup, PostgreSQL is aware of the backup and takes the appropriate
actions when the PostgreSQL connector issues the pg_backup_start() and
pg_backup_stop() procedure calls.
Log backup
In a PostgreSQL database system, the database writes to an additional file called the
write-ahead log (WAL), which is located on disk storage. These logs contain a record of the
write actions made in the database. In case of a crash, these log files can be used
to repair and recover the database. Protecting and maintaining access to these files is
important, as it allows for point-in-time recovery of the database.
Note: Postgres manages its log backups outside of Data Protect. Scheduling Log
backups for Postgres in Data Protect is optional. It is important only if you want
to copy these log files outside of the production system, to your backup
environment.
To check whether archive log mode is enabled on the PostgreSQL database, query the
archive_mode setting while logged in as the PostgreSQL user.
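A minimal sketch of this check (assuming the psql client and the postgres operating system user):

```shell
# Show whether WAL archiving is enabled; prints "on" or "off"
psql -U postgres -c "show archive_mode;"
```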
Note: Archive log mode will be automatically set to ON when you enable the Log
backup as part of the Protection Policy in the IBM Storage Defender Data Protect
configuration wizard.
The PostgreSQL configuration update can be seen in the first FULL backup log,
available in /var/log/cohesity/uda/full-backup.xxx.STDOUT:
2024-06-19 08:27:30.535:Updating Postgres archive command.
2024-06-19 08:27:30.535:QUERY : alter system set archive_command
= '/opt/cohesity/postgres/scripts/stream-log.sh %p %f
/opt/cohesity/postgres/scripts/archive_config/199374'
2024-06-19 08:27:30.541:QUERY : select pg_reload_conf()
2024-06-19 08:27:31.542:QUERY : show archive_command
Additionally, to confirm what script and configuration is being used locally to transfer the log
onto the Data Protect Cluster, the command below provides the details of what is being
executed each time a log backup is initiated:
This script queries the LOG file location and the current LSN to determine which LOG files
must be transferred since the last log backup.
The file transfer happens between the local server and the Data Protect cluster using the
gRPC protocol. As for the FULL backup, a dedicated view is used to store the logs and is
snapshotted to record the point-in-time LOG backup.
Database backups, differentials, and logs depend on a FULL backup to perform a database
restore.
Postgres databases require that you start with a FULL backup recovery before applying any
transaction logs. This means your backup retention policy must keep a FULL backup along
with its LOG backups to successfully restore a database.
It is recommended to retain two sets of FULL backups with their DIFFERENTIAL backups.
Recovering a PostgreSQL database consists of sequentially applying the captured changes
to the database: FULL + DIFF + Log1 + Log2 + … + LogN = restored database.
Independently of the two methods, Regular or Instant, the same recovery schema applies.
Note: When restoring a database, an empty target database must be created first.
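Preparing such an empty target database can be sketched as follows (assuming the createdb client and the postgres user; the database name is illustrative):

```shell
# Create an empty database to receive the restored data
createdb -U postgres restored_db
```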
From there, select the Recover drop-down button and select Universal Data Adapter.
Search for your PostgreSQL Data Protection Group and choose the appropriate date for your
recovery, as shown in Figure 6-2:
Figure 6-2 Data Protect Universal Data Adapter Recovery wizard – Select resource to recover
Click Next, and specify the other recovery options, such as:
The host where you would like to recover. If different from the original, the PostgreSQL
source must be registered and prepared beforehand.
The data directory location where you would like Data Protect to copy data back on the
database host.
Whether Data Protect should start the PostgreSQL instance after the recovery completes.
Note that the instance will be started using the data path specified in the previous
option (Data directory for restore).
The number of streams, which can be tuned depending on your environment.
Figure 6-3 Data Protect Universal Data Adapter Recovery wizard - restore settings panel
Note: For a Full recovery to be successful, the PostgreSQL database service must
be stopped, and port 5432 (the default) must be available; otherwise the recovery
fails. The reason is that, as part of the recovery process, Data Protect restarts the
PostgreSQL instance using the data path where you recovered the data, as specified
in the recovery wizard.
Instant recovery gives the database administrator instantaneous access to the data from
the backup repository.
Data Protect mounts two mount points, through the NFS protocol, between the Data
Protect cluster local storage and the database host. The data is then accessible for any
operation, including writes. Local copy commands can then be used to copy the data from
the backup repository to another storage location local to the database host.
Here are the steps performed by Data Protect during an instant recovery operation for a
PostgreSQL database:
Creates a clone of the backup corresponding to the specified dates, for both the data files
and the log files
Creates a view and exposes this view as an NFS mount point to the PostgreSQL host
Assigns proper privileges to the mounted NFS resources (chown -R postgres and
chmod 700 commands)
Starts the PostgreSQL database on the target host using the mounted NFS resources as
the data file and log file location for the database
The Instant Recovery procedure stops here, and the database is available for use, in
read-write mode, from the PostgreSQL host.
The mounted resources can be used for testing, or copy, or any other scenarios that require
access to the database.
Figure 6-4 and Figure 6-5 show an example PostgreSQL host while the instant recovery is
running. You can see the mounted resources and the PostgreSQL server running on these
mounted resources.
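While the instant recovery is active, the mounted resources can be inspected from the host with standard commands (a sketch; the exact mount paths depend on your environment):

```shell
# List the NFS mounts exposed by the Data Protect cluster
mount | grep -i nfs
# Confirm the PostgreSQL server is running from the mounted data path
ps -ef | grep -i postgres
```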
When the database administrator has completed their operations, the Instant Recovery must
be dismounted from the host. When dismounting, all modifications made to the mounted
resources are lost.
To dismount and clean up the Instant Recovery, use the Cancel button from the Storage
Defender Data Management Service interface: select the Data Protect cluster where the
instant recovery is running, navigate to the Data Protection > Recoveries menu, locate your
recovery task, and finally use the three-dots menu on the same job line to select the Cancel
option, as shown in Figure 6-6.
Figure 6-6 Data Protect recovery menu - Cancel a PostgreSQL database instant recovery
A Cancel recovery popup appears as a confirmation. This operation dismounts the
volumes from the PostgreSQL host and deletes the cloned backup from the Data Protect
cluster storage repository.
Note: For instant recovery to be successful, the PostgreSQL database service must be
stopped, and port 5432 (the default) must be available; otherwise the instant recovery
fails. The reason is that, as part of the recovery process, Data Protect restarts the
PostgreSQL instance using the mounted resources as the data path for the recovered
database.
To achieve this, go to the Data Protection > Recoveries menu and click the Recover
button.
Then select Universal Data Adapter, enter the name of the PostgreSQL protection group,
select the protection group, and use the pen icon next to the backup date to access the
recovery point wizard, as shown in Figure 6-7. Be sure to select the Timeline view so you
can navigate to and select a specific date and time.
Note:
The blue dots represent Full or Incremental database backup points.
The green line represents possible point-in-time selections to restore, down to a specific
second granularity. This is made possible by the use of database logs that have been
protected as part of the backup strategy.
In that environment, a small database has been created, containing a table filled with data
using the pgbench utility. pgbench calls a simple set of SQL instructions, executed every
30 minutes, to add, remove, and update entries in this database to simulate a workload;
this allows us to generate a few log files to better illustrate the LOG backup process.
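A similar workload can be generated with pgbench (a sketch; the database name, scale factor, and run length are illustrative):

```shell
# Create and populate the pgbench tables in the target database
pgbench -U postgres -i -s 10 mytest
# Run the built-in TPC-B-like workload for 60 seconds with 4 client sessions
pgbench -U postgres -c 4 -T 60 mytest
```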
6.4.1 Download and install the Linux and PostgreSQL Connector agents
Perform the following steps on the host where the PostGreSQL database is running.
The Linux agent is available as different installer packages, providing support for multiple
Linux distributions. Depending on your selection in the download page, you will find an RPM
(for RHEL and its derivatives), a SUSE RPM, or a script installer (for all supported Linux
operating systems).
At the time of writing this publication, the agent binaries are not available through the IBM
Defender Data Management Service portal. You must connect to the local UI to
download the Linux Agent.
Once connected to the local User Interface, navigate through the menu:
1. Data Protection
2. Sources
3. Click the Register button at the top right of the screen
4. From the drop down menu that appears, select Universal Data Adapter menu
5. Click the link Download Agent, as shown in Figure 6-8 below
Figure 6-8 Linux Agent Download Screen from the local UI.
Note: The agents are always available from the local IBM Storage Defender Data
Protect Cluster UI.
There are two packages uploaded into the /home/spectrum folder of this example machine.
Next, check whether the required packages are installed (as documented in 6.1.4,
"Local Command Requirements" on page 87).
Example 6-9 Check that all needed system packages are present prior to agent install
[root@jsa-rhel-01 spectrum]#for c in rsync mount lsof umount cp chown chmod mkdir
rm tee hostname stat blkid ls losetup dmsetup timeout lvs vgs lvcreate lvremove
lvchange wget; do which $c ; done
/usr/bin/rsync
/usr/bin/mount
/usr/bin/lsof
/usr/bin/umount
alias cp='cp -i'
/usr/bin/cp
/usr/bin/chown
/usr/bin/chmod
/usr/bin/mkdir
alias rm='rm -i'
/usr/bin/rm
/usr/bin/tee
/usr/bin/hostname
/usr/bin/stat
/usr/sbin/blkid
alias ls='ls --color=auto'
/usr/bin/ls
/usr/sbin/losetup
/usr/sbin/dmsetup
/usr/bin/timeout
/usr/sbin/lvs
/usr/sbin/vgs
/usr/sbin/lvcreate
/usr/sbin/lvremove
/usr/sbin/lvchange
/usr/bin/wget
[root@jsa-rhel-01 spectrum]#
Once it is confirmed that the required packages are installed on the host and the
required commands are available for the RHEL v8 environment (libpcap-progs is not
required for RHEL 8), proceed with the Agent and PostgreSQL connector installation,
using the two RPM packages.
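The installation itself can be sketched as follows (the package file names below are hypothetical; use the actual RPM file names downloaded to /home/spectrum):

```shell
# Install the Linux Agent first, then the PostgreSQL connector
rpm -ivh cohesity-agent-x.y.z.x86_64.rpm
rpm -ivh cohesity-postgres-connector-x.y.z.x86_64.rpm
```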
Note: When using root to install the packages, as in this example, the local agent and
PostgreSQL connector will run as the root user. If you would like to use a non-root user
as the installation and service owner, create a dedicated user account on the host and
grant it the appropriate sudo privileges to allow this non-root user to run the required
commands.
For the PostgreSQL connector, the sudo configuration shown in Example 6-10 is required,
assuming the non-root user that was created is named cohesityagent (note that the
path to these commands might differ in your environment; if needed, use the which
command as shown in Example 6-9 on page 99 to get the right path to the commands).
At this point the Agent is ready to be installed either as root or as the desired service user ID.
Once installed, check the status of the agent service to confirm that the installation went as
expected and the agent service was successfully started. At this time the Agent should be
ready and listening on the host:
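A sketch of this verification (the cohesity-agent unit name is an assumption; confirm the actual service name from the installer output):

```shell
# Confirm the agent service started successfully
systemctl status cohesity-agent
# Confirm the agent is listening on its gRPC port (50051, per the port
# requirements earlier in this chapter)
ss -tlnp | grep 50051
```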
At this point all components should be installed and ready to be configured for use by Data
Protect.
To register the PostgreSQL database as a source, use the left-hand side menu, under the
Data Protection > Sources menu.
Select PostgreSQL as the Source Type from the drop-down list and select the appropriate
host type (Linux in our example).
Specify the IP address and the Datasource agent installation path, which by default points
to /opt/cohesity/postgres/scripts.
Then specify which user will execute PostgreSQL-related commands. In our example,
shown in Figure 6-10 on page 103, a dedicated user named "postgres" has been created
to interact with the PostgreSQL database. This user's creation is not covered in this
document; you can find information about this user in the PostgreSQL database installation
documentation.
Finally, in the Source Settings section of the source registration wizard, give the source a
meaningful name (PostgreSQL1 in our example), and specify the IP address of the
PostgreSQL controlling node and the port used for listening to external connections.
Specify the path where the PostgreSQL binaries are located. These binaries are used by the
PostgreSQL connector to perform database and log backups as well as recoveries. See
Figure 6-11 on page 104.
Note: By default, the PostgreSQL database listener may accept only local connections.
For the database backup and recovery operations with Data Protect, it is mandatory that
the listener allow non-local connections.
To do this, you need to update the postgresql.conf file (located under the installation
directory, for example /var/lib/pgsql/16/data/postgresql.conf) and allow specific or all IPs.
See 6.1.6, "Other Requirements" on page 88 earlier in this document for a detailed
explanation.
Access the Protection Policy creation via the left-hand sidebar, under the Data Protection >
Policies menu. The Protection Policy is illustrated in Figure 6-12 on page 105.
In the Protection Policy named "PGSQL", we have configured a regular FULL database
backup every week on Saturday, a daily INCREMENTAL backup, and a LOG backup every
hour. All of these are kept for two weeks on the local Data Protect cluster storage (Primary
Copy = Local).
Figure 6-12 Defender Data Protect Protection Policy for PostgreSQL database
Once the Protection Policy is created, a Protection Group can be configured to associate the
defined PostgreSQL source with the newly configured Protection Policy.
One way of doing this is to use the Protection menu from the Data Protection >
Protection panel.
From there, select Universal Data Adapter. From the drop-down list that appears,
select the Registered Source corresponding to your PostgreSQL environment
(PostgreSQL1 in our example), as shown in Figure 6-13.
Figure 6-13 Defender Data Protection New Protection Group for PostgreSQL
Select the appropriate source (PostgreSQL1 in our example) and specify a meaningful
object name (PSQL-DB1 in our example). Then select the appropriate Protection Policy (the
one created just before, PGSQL in our example).
Once you have defined this Protection Group and its associated policy, Data Protect will
trigger the first backup, which will be a FULL database backup, immediately followed by a
LOG backup (if you enabled LOG backups).
All backup and recovery activity is managed from the Data Management Service
portal.
Using the Data Protection > Protection menu, you can see a list of all protection
activities, including the PostgreSQL backup we just configured.
From this DMS view, you can access all the details and logs. Execution logs are also
available on the system where the database is running, under the /var/log/cohesity/uda
folder.
Figure 6-17 shows a screenshot taken from the recovery wizard, indicating the ability to
recover to a specific point in time, as we configured the backup to take logs.
The Recovery wizard and workflow is explained in 6.3.4, “Recovery Workflows” on page 93.
6.5 Troubleshooting
The following section contains information about the various logs related to the data
protection components and the database protection process. Besides the information
gathered and presented in the Defender Data Management Service interface, you can
investigate with very detailed logs located on the database host itself.
The following are the different logs, and their locations, that you can consult when deeper
investigations are required.
The installation path may differ depending on the configuration of the user who deployed the
agent.
The local agent creates files named linux_agent_exec.*, which contain detailed messages
regarding the local backup and recovery activities.
The installation path may differ depending on the configuration of the user who deployed the
agent.
In this folder, a log file is created for each scheduled backup activity type handled by the
specific adapter (PostgreSQL in this case). The types include full, incremental, and log
backups.
The *PULSE* log file contains detailed messages regarding the actions and commands that
the PostgreSQL agent executes to perform the given operation. In the example below, a
full database backup action is being taken:
Example 6-14 Universal Adapter Agent PostgreSQL PULSE log for full backup
[root@jsa-rhel-01 uda]#less
full-backup.5219051150900661-1690887961521-177730.PULSE.log
AgentInput [databases=[mytest], ParallelObjects=6, Concurrency=8, objects=[],
restoreObjectsMap={}, dataView=5219051150900661-65964-177730,
logView=5219051150900661-65964-3683-log, connectorType=POSTGRES,
allowIncrementalBackup=false, opType=FULL_BACKUP, userName=julien,
targetRestoreDir=null, customConnProps=null, truststorePassword=null,
truststorePath=null, startTime=Mon May 13 03:33:14 EDT 2024, host=1.2.3.4,
port=5432, VIP's=[1.2.3.5], s3Endpoint=null, backupHangTimeOut=1200,
convertIncrToFullBackupIfError=true, retentionPeriod=0, kerberosConfigFile=null,
pitrTime=0 : Wed Dec 31 19:00:00 EST 1969, createDatabase=false, overwrite=false,
instantRestore=false, startServer=true, dataViewMount=null, useSecureGrpc=false,
certificateConfigPath=null, enableDedupWrite=false, enableDedupRead=false,
maxIOBytes=4194304, ioThreadCount=64, rpcTimeoutMsecs=0,
maxGrpcMessageBytes=41943040, postgresCLIOptions=, applyPermissions=false,
defaultPermission=null, jobDataServicePort=0, archivalDataServicePort=0,
offlineBackup=false, deactivateDatabase=false, activateDatabase=false,
total 420
-rw-------. 1 postgres postgres 58828 May 7 10:09 postgresql-Tue.log
-rw-------. 1 postgres postgres 58828 May 8 10:09 postgresql-Wed.log
-rw-------. 1 postgres postgres 58828 May 9 10:09 postgresql-Thu.log
-rw-------. 1 postgres postgres 58828 May 10 10:09 postgresql-Fri.log
-rw-------. 1 postgres postgres 58828 May 11 10:09 postgresql-Sat.log
-rw-------. 1 postgres postgres 58828 May 12 10:09 postgresql-Sun.log
-rw-------. 1 postgres postgres 58884 May 13 03:33 postgresql-Mon.log
[root@jsa-rhel-01 log]#
REDP-5730-00
Printed in U.S.A.
ibm.com/redbooks