Release 10.5
NetBackup™ Cloud Object Store Administrator's Guide
Last updated: 2024-09-23
Legal Notice
Copyright © 2024 Veritas Technologies LLC. All rights reserved.
Veritas, the Veritas Logo, Veritas Alta, and NetBackup are trademarks or registered trademarks
of Veritas Technologies LLC or its affiliates in the U.S. and other countries. Other names may
be trademarks of their respective owners.
This product may contain third-party software for which Veritas is required to provide attribution
to the third party (“Third-party Programs”). Some of the Third-party Programs are available
under open source or free software licenses. The License Agreement accompanying the
Software does not alter any rights or obligations you may have under those open source or
free software licenses. Refer to the Third-party Legal Notices document accompanying this
Veritas product or available at:
https://www.veritas.com/about/legal/license-agreements
The product described in this document is distributed under licenses restricting its use, copying,
distribution, and decompilation/reverse engineering. No part of this document may be
reproduced in any form by any means without prior written authorization of Veritas Technologies
LLC and its licensors, if any.
The Licensed Software and Documentation are deemed to be commercial computer software
as defined in FAR 12.212 and subject to restricted rights as defined in FAR Section 52.227-19
"Commercial Computer Software - Restricted Rights" and DFARS 227.7202, et seq.
"Commercial Computer Software and Commercial Computer Software Documentation," as
applicable, and any successor regulations, whether delivered by Veritas as on premises or
hosted services. Any use, modification, reproduction, release, performance, display or disclosure
of the Licensed Software and Documentation by the U.S. Government shall be solely in
accordance with the terms of this Agreement.
http://www.veritas.com
Technical Support
Technical Support maintains support centers globally. All support services will be delivered
in accordance with your support agreement and the then-current enterprise technical support
policies. For information about our support offerings and how to contact Technical Support,
visit our website:
https://www.veritas.com/support
You can manage your Veritas account information at the following URL:
https://my.veritas.com
If you have questions regarding an existing support agreement, please email the support
agreement administration team for your region as follows:
Japan [email protected]
Documentation
Make sure that you have the current version of the documentation. Each document displays
the date of the last update on page 2. The latest documentation is available on the Veritas
website:
https://sort.veritas.com/documents
Documentation feedback
Your feedback is important to us. Suggest improvements or report errors or omissions to the
documentation. Include the document title, document version, chapter title, and section title
of the text on which you are reporting. Send feedback to:
You can also see documentation information or ask a question on the Veritas community site:
http://www.veritas.com/community/
https://sort.veritas.com/data/support/SORT_Data_Sheet.pdf
Contents
Access tier property not restored after overwriting the existing object in the original location ..... 71
Reduced accelerator optimization in Azure for OR query with multiple tags ..... 71
Backup failed and shows a certificate error with Amazon S3 bucket names containing dots (.) ..... 72
Azure backup jobs fail when space is provided in a tag query for either tag key name or value ..... 73
The Cloud object store account has encountered an error ..... 73
The bucket list is empty during policy selection ..... 74
Creating a second account on Cloudian fails by selecting an existing region ..... 75
Restore failed with 2825 incomplete restore operation ..... 76
Bucket listing of a cloud provider fails when adding a bucket in the Cloud objects tab ..... 77
AIR import image restore fails on the target domain if the Cloud store account is not added to the target domain ..... 78
Backup for Azure Data Lake fails when a back-level media server is used with backup host or storage server version 10.3 ..... 79
Backup fails partially in Azure Data Lake: "Error nbpem (pid=16018) backup of client ..... 79
Recovery for Azure Data Lake fails: "This operation is not permitted as the path is too deep" ..... 79
Empty directories are not backed up in Azure Data Lake ..... 80
Recovery error: "Invalid alternate directory location. You must specify a string with length less than 1025 valid characters" ..... 80
Recovery error: "Invalid parameter specified" ..... 80
Restore fails: "Cannot perform the COSP operation, skipping the object: [/testdata/FxtZMidEdTK]" ..... 81
Cloud store account creation fails with incorrect credentials ..... 81
Discovery failures due to improper permissions ..... 82
Restore failures due to object lock ..... 83
Chapter 1
Introduction
This chapter includes the following topics:
Note: Cloud vendors may levy substantial charges for data egress when data moves
out of their network. Check your cloud provider's data-out pricing before
configuring a backup policy that transfers data out of one cloud to another cloud
region or to an on-premises data center.
NetBackup can protect Azure Blob Storage, and a wide variety of S3 API-compatible
object stores like AWS S3, Google Cloud Storage (GCS), Hitachi Cloud Platform
object stores, and so on. For a complete list of compatible object stores, refer to
the NetBackup Hardware Compatibility List (HCL).
The protected objects in Azure Data Lake are referred to as files and directories,
even though the underlying object is of type blob.
Features of NetBackup Cloud object store workload support
Integration with NetBackup's role-based access control (RBAC): The NetBackup Web UI provides the Default cloud object store Administrator RBAC role to control which NetBackup users can manage Cloud object store operations in NetBackup. You do not need to be a NetBackup administrator to manage Cloud object stores.

Management of Cloud object store accounts: You can configure a single NetBackup primary server for multiple Cloud object store accounts, across different cloud vendors as required.

Intelligent selection of cloud objects: Within a single policy, NetBackup provides the flexibility to configure different queries for different buckets or containers. Some buckets or containers can be configured to back up all the objects in them. You can also configure some buckets and containers with intelligent queries that identify objects based on criteria such as prefixes, object names, or tags.

Fast and optimized backups: In addition to full backups, NetBackup also supports different types of incremental schedules for faster backups. The Accelerator feature is also supported for Cloud object store policies.

Restore options: NetBackup restores the object store data along with its metadata, properties, tags, ACLs, and object lock properties.

Support for malware scan before recovery: You can run a malware scan of the selected files and folders as part of the recovery flow from the Web UI, and decide the recovery actions based on the malware scan results.

Scalability support for the backup host: NetBackup Cloud object store protection supports configuring NetBackup Snapshot Manager as a scalable backup host for cloud deployments, along with the media server. If you have an existing NetBackup Snapshot Manager deployment in your environment, you can use it as a backup host for Cloud object store policies.

Object lock: This feature lets you retain the original object lock properties and also provides an option to customize them. If you use object lock properties on the restored objects, you cannot delete those objects until the retention period is over or the legal holds are removed. You can use the object lock and retention properties without any configuration during policy creation and backup.
Chapter 2
Managing Cloud object
store assets
This chapter includes the following topics:
Step 1: Verify the operating system and platform compatibility. See the NetBackup Compatibility Lists.

Step 3: Configure the required permissions and credentials. See “Prerequisites for adding Cloud object store accounts” on page 12.

Step 4: Identify the buckets and containers that you want to protect. Make a list of the buckets and containers that you want to protect with NetBackup, and include them in the Cloud object store accounts that you create in Step 5.

Step 5: Create Cloud object store accounts. See “Adding Cloud object store accounts” on page 18.
■ If you plan to use a proxy for communication with cloud endpoints, gather the
required details of the proxy server.
■ Get the Cloud account credentials, and any additional required parameters, as
per the authentication type. These credential details should have the required
permissions recommended in NetBackup documentation.
See “Permissions required for Amazon S3 cloud provider user” on page 14.
See “Permissions required for Azure blob storage” on page 15.
See “Permissions required for GCP” on page 16.
■ Make sure that the required outbound ports are open, and that the configuration
allows communication from the backup host or scale-out server to the cloud
provider endpoint using REST API calls.
■ On the backup host, S3 or Azure storage URL endpoints use the HTTPS
default port 443. For a private cloud provider, this port can be any custom
port that is configured in the private cloud storage.
■ If you use a proxy server to connect to the cloud storage, you need to allow
that port. You can provide the proxy server-related details in NetBackup,
while creating a Cloud object store account.
■ The certificate revocation status check option uses the OCSP protocol, which
typically uses HTTP port 80. Ensure that the OCSP URL is reachable from
the backup host.
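
As an illustrative check, not a documented product step, you can verify outbound reachability from the backup host with a standard tool such as curl; the S3 endpoint and OCSP responder URL below are placeholders for your provider's actual values:

# Verify HTTPS reachability of the cloud endpoint (default port 443):
curl -v --max-time 10 https://s3.us-east-1.amazonaws.com
# Verify that the OCSP responder (typically HTTP port 80) is reachable:
curl -v --max-time 10 http://ocsp.example.com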
Permissions required for Amazon S3 cloud provider user

■ s3:PutObjectRetention
■ s3:BypassGovernanceRetention
■ s3:GetBucketObjectLockConfiguration
■ s3:GetObjectRetention
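
As a sketch only, the object lock permissions listed above could be granted through an IAM policy statement like the following. The Sid and the resource scope are illustrative, and a working policy also needs the provider's standard read and list permissions for backup and restore, which this excerpt does not show:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CosObjectLockAccess",
      "Effect": "Allow",
      "Action": [
        "s3:PutObjectRetention",
        "s3:BypassGovernanceRetention",
        "s3:GetBucketObjectLockConfiguration",
        "s3:GetObjectRetention"
      ],
      "Resource": "arn:aws:s3:::*"
    }
  ]
}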
Permissions required for Azure blob storage

The following custom role definition describes the minimal Azure permissions required for Cloud object store protection:

{
  "properties": {
    "roleName": "cosp_minimal",
    "description": "minimal permission required for cos protection.",
    "assignableScopes": [
      "/subscriptions/<Subscription_ID>"
    ],
    "permissions": [
      {
        "actions": [
          "Microsoft.Storage/storageAccounts/blobServices/read",
          "Microsoft.Storage/storageAccounts/blobServices/containers/read",
          "Microsoft.Storage/storageAccounts/blobServices/containers/write",
          "Microsoft.ApiManagement/service/*",
          "Microsoft.Authorization/*/read",
          "Microsoft.Resources/subscriptions/resourceGroups/read",
          "Microsoft.Storage/storageAccounts/read"
        ],
        "notActions": [],
        "dataActions": [
          "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/filter/action",
          "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/read",
          "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write",
          "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write",
          "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read",
          "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/immutableStorage/runAsSuperUser/action"
        ],
        "notDataActions": []
      }
    ]
  }
}
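
For illustration, if you save the definition above as cosp_minimal.json, you could create the custom role and assign it with the Azure CLI; the assignee placeholder is hypothetical:

az role definition create --role-definition @cosp_minimal.json
az role assignment create --assignee <service_principal_or_identity> \
  --role cosp_minimal --scope /subscriptions/<Subscription_ID>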
Permissions required for GCP

storage.bucketOperations.cancel
storage.bucketOperations.get
storage.bucketOperations.list
storage.buckets.create
storage.buckets.createTagBinding
storage.buckets.delete
storage.buckets.deleteTagBinding
storage.buckets.enableObjectRetention
storage.buckets.get
storage.buckets.getIamPolicy
storage.buckets.getObjectInsights
storage.buckets.list
storage.buckets.listEffectiveTags
storage.buckets.listTagBindings
storage.buckets.restore
storage.buckets.setIamPolicy
storage.buckets.update
storage.multipartUploads.abort
storage.multipartUploads.create
storage.multipartUploads.list
storage.multipartUploads.listParts
storage.objects.create
storage.objects.delete
storage.objects.get
storage.objects.getIamPolicy
storage.objects.list
storage.objects.restore
storage.objects.setIamPolicy
storage.objects.update
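
As an illustrative sketch, a custom role carrying these permissions could be created with the gcloud CLI; the role ID and project ID are placeholders, and the --permissions list is abbreviated here and must include every permission above:

gcloud iam roles create cosProtectionRole \
  --project=<PROJECT_ID> \
  --title="NetBackup COS protection" \
  --permissions=storage.buckets.get,storage.buckets.list,storage.objects.create,storage.objects.get,storage.objects.list,storage.multipartUploads.create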
Limitations and considerations

■ Prefix and object values that start with a forward slash (/) or contain
consecutive slashes (//) are not supported in queries. For example:
prefix = /
prefix = /folder1
prefix = /object1
prefix = folder1//
object = /obj1
■ NetBackup does not back up an object whose name is in the format: <name>/
■ Restores with object lock properties are supported for backup hosts or
scale-out servers of version 10.3 or later only.
■ Backup and restore of buckets with default retention enabled are supported
with backup hosts or scale-out servers of version 10.3 or later only.
■ For Azure, if you update a policy created with a NetBackup version prior to 10.3,
with a backup host or scale-out server of version 10.3 or later, the backups fail.
As a workaround, update all the buckets to use the new format of the
provider-generated ID with the existing queries. Note that you must create the
associated Cloud object store account in the policy using NetBackup 10.3 or later
for this workaround to be successful.
■ Discovery is supported for NetBackup version 10.3 or later, deployed on RHEL.
If no supported host is available, then discovery does not start for any of the
configured Cloud storage accounts. In this case, discovery status is not available,
and you cannot see a bucket list during policy creation. Even if you add the
buckets manually after discovery fails, your backups may fail. Upgrade at least
one supported backup host or scale-out server and create a new policy.
■ If you update a policy that is created on a NetBackup version prior to 10.3,
consider the following after a backup:
■ After backup, you may see two versions of the same buckets, for the old and
new formats. If you want to restore old data, select the bucket in the old
format. For newer backups, select the ones in the newer format.
■ The subsequent backup after the update is a full backup, irrespective of what
is configured in the policy.
■ When you upgrade to 10.3, the first Azure blob accelerated backup takes a
backup of all objects in the selection, even if the configured backup is
incremental. This full backup is required for the change in metadata properties
for the Azure blobs between NetBackup versions 10.2 and 10.3. The subsequent
incremental backups back up only the changed objects.
■ If you use a Cloud object store account created in a version older than 10.3,
NetBackup discovers the buckets with the old format, where:
uniqueName=bucketName.
Backup and restore operations require separate RBAC access rights. You can create
separate accounts for backup and restore to better organize the access rights.
Depending on the bucket or container which you want to protect, you must add at
least one Cloud object store account, for every cloud service provider, per region.
You may need to create multiple Cloud object store accounts, for the same cloud
service provider and region. To better organize settings like SSL, proxy, and the
type of credential to be used for the set of buckets or containers, you can create
multiple accounts.
The required permissions for backup and recovery are different, so consider
whether it is helpful to create separate accounts for backup and recovery. During
recovery, to restore to a different Cloud object store account, you must select a
destination other than the original bucket options.
Note: The Cloud object store account shares the namespace with the Cloud storage
server and MSDP-C LSU name.
For Cloud object store accounts, NetBackup supports a variety of cloud providers
using AWS S3-compatible APIs (for example, Amazon, Google, Hitachi etc.), other
than Microsoft Azure. For such providers, you need to provide AWS S3-compatible
account access details to add the credentials (that is, Access Key ID, Secret Access
key) of the provider.
You need to select a validation host while creating a Cloud object store account. A
validation host is a specific backup host that validates the credentials. The validation
host is used during manual, periodic discovery, and when manual validation is
required for an existing Cloud object store account. The validation host can be
different from the actual backup host specified in the policy.
To add a Cloud object store account:
1 On the left, click Cloud object store under Workloads.
2 In the Cloud object store account tab, click Add. Enter a name for the account
in the Cloud object store name field, and select a provider from the list Select
Cloud object store provider.
3 To select a backup host or scale-out server, click Select host for validation.
The host should be NetBackup 10.1 or later, on a RHEL media server that
supports credential validation, backup, and recovery of the Cloud object stores.
■ To select a backup host, select the Backup host option, and select a host
from the list.
■ To use a scale-out server, select the Scale out server option, select a
server from the list. NetBackup Snapshot Manager servers 10.3 or later,
serve as scale-out servers.
If you have a very large number of buckets, you can also use NetBackup
Snapshot Manager as a backup host with NetBackup 10.3 or later releases.
Select the Scale out server option, and select a NetBackup Snapshot
Manager from the list.
4 Select a region from the available list of regions. Click Add above the Region
table to add a new region.
See “Adding a new region” on page 27. Region is not available for some Cloud
object store providers.
For GCP, which supports dual-region buckets, select the base region during
account creation. For example, if a dual-region bucket is in the regions
US-CENTRAL1, US-WEST1, select US, as the region during account creation
to list the bucket.
5 In the Access settings page: Select a type of access method for the account:
■ Access credentials-In this method, NetBackup uses the Access key ID,
and the secret access key to access and secure the Cloud object store
account. If you select this method, perform the subsequent steps 6 to 10
as required to create the account.
■ IAM role (EC2)-NetBackup retrieves the IAM role name and the credentials
that are associated with the EC2 instance. The selected backup host or
scale-out server must be hosted on the EC2 instance. Make sure the IAM
role associated with the EC2 instance has required permissions to access
the required cloud resources for Cloud object store protection. Make sure
that you select the correct region as per permissions associated with the
EC2 instance while configuring the Cloud object store account with this
option. If you select this option, perform the optional steps 7 and 8 as
required, and then perform steps 9 and 10.
■ Assume role-NetBackup uses the provided key, the secret access key,
and the role ARN to retrieve temporary credentials for the same account
and cross-account. Perform the steps 6 to 10 as required to create the
account.
See “Creating cross-account access in AWS ” on page 23.
■ Assume role (EC2)- NetBackup retrieves the AWS IAM role credentials
that are associated with the selected backup host or scale-out server, hosted
on an EC2 instance. NetBackup then assumes the role mentioned
in the Role ARN to access the cloud resources required for Cloud object
store protection.
■ Credentials broker- NetBackup retrieves the credentials to access the
required cloud resources for Cloud object store protection.
■ Service principal- NetBackup uses the tenant ID, client ID, and client
secret associated with the service principal to access the cloud resources
required for Cloud object store protection. Supported by Azure.
■ Managed identity- NetBackup retrieves the Azure AD tokens, using the
managed identity that is associated with the selected backup host or
scale-out server or the user. NetBackup uses these Azure AD tokens to
access the required cloud resources for Cloud object store protection. You
can use system or user-assigned managed identities.
6 You can add existing credentials or create new credentials for the account:
■ To select an existing credential for the account, select the Select existing
credentials option, select the required credential from the table, and click
Next.
■ To use Managed identity for Azure, select System assigned or User
assigned. For the user-assigned method, enter the Client ID associated
with the user to access the cloud resources.
■ To add a new credential for the account, select Add new credentials. Enter
a Credential name, Tag, and Description for the new credential.
For cloud providers supported through AWS S3-compatible APIs, use AWS
S3-compatible credentials. Specify the Access key ID and Secret access
key.
For Microsoft Azure cloud provider:
■ For the Access key method, provide Storage account credentials,
specify Storage account.
■ For the Service principal method, provide Client ID, Tenant ID, and
Secret key.
■ If you use Assume role as the access method, specify the Amazon
Resource Name (ARN) of the role to use for the account, in the Role ARN
field.
7 (Optional) Select Use SSL if you want to use the SSL (Secure Sockets Layer)
protocol for user authentication or data transfer between NetBackup and the
cloud storage provider.
■ Authentication only: Select this option if you want to use SSL only at the
time of authenticating users while they access the cloud storage.
■ Authentication and data transfer: Select this option if you want to use
SSL to authenticate users and transfer the data from NetBackup to the
cloud storage, along with user authentication.
■ Check certificate revocation (IPv6 not supported for this option): For
all cloud providers, NetBackup provides the capability to verify the SSL
certificate's revocation status using the OCSP protocol. The OCSP protocol
sends a validation request to the certificate issuer to get the certificate's
current revocation status. If SSL is enabled and the check certificate
revocation option is enabled, each non-self-signed SSL certificate is verified
with an OCSP request. If the certificate is revoked, NetBackup does not connect
to the cloud provider.
Note: The FIPS region of the Amazon GovCloud cloud provider (that is
s3-fips-us-gov-west-1.amazonaws.com) supports only secured mode of
communication. Therefore, if you disable the Use SSL option while you
configure Amazon GovCloud cloud storage with the FIPS region, the
configuration fails.
8 (Optional) Select the Use proxy server option to use a proxy server and provide
proxy server settings. Once you select the Use proxy server option, you can
specify the following details:
■ Proxy host–Specify the IP address or name of the proxy server.
■ Proxy port–Specify the port number of the proxy server.
■ Proxy type– You can select one of the following proxy types:
■ HTTP
Note: You need to provide the proxy credentials for the HTTP proxy
type.
■ SOCKS
■ SOCKS4
■ SOCKS5
■ SOCKS4A
■ Private clouds typically have a self-signed certificate, so the Check
certificate revocation option is not required for them. Disable this check while
configuring the account; otherwise, account creation fails.
■ The OCSP URL of the CA should be present in the certificate's Authority
Information Access extension.
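
One illustrative way to confirm this, assuming the openssl tool is available on the backup host, is to fetch the provider's leaf certificate and print its OCSP responder URI; the endpoint below is a placeholder:

# Fetch the endpoint's leaf certificate and print its OCSP responder URI:
openssl s_client -connect s3.us-east-1.amazonaws.com:443 -showcerts </dev/null 2>/dev/null \
  | openssl x509 -noout -ocsp_uri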
■ UNIX:
/usr/openv/var/global/cloud/
Note: In a cluster deployment, the NetBackup database path points to the shared
disk, which is accessible from the active node.
Note: Ensure that you do not change the file permission and ownership of the
cacert.pem file.
To add a CA
You must get a CA certificate from the required cloud provider and update it in the
cacert.pem file. The certificate must be in .PEM format.
==========================
-----BEGIN CERTIFICATE-----
<Certificate content>
-----END CERTIFICATE-----
==========================
-----BEGIN CERTIFICATE-----
<Certificate content>
-----END CERTIFICATE-----
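
As a sketch of the update step, assuming the provider CA is saved in a hypothetical file named provider_ca.pem, you can append it to the bundle. Appending with cat preserves the existing file permissions and ownership, as the note above requires:

# Keep a backup of the current bundle, then append the provider CA:
cp /usr/openv/var/global/cloud/cacert.pem /usr/openv/var/global/cloud/cacert.pem.bak
cat provider_ca.pem >> /usr/openv/var/global/cloud/cacert.pem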
4 Specify the HTTP and HTTPS ports to use for the region.
5 Click Add. The added region appears in the Region table on the Basic
properties page.
Backup images
This section describes the procedure for scanning a policy's client backup images
for malware.
To scan a policy's client backup images for malware:
1 On the left, click Detection and reporting > Malware detection.
2 On the Malware detection page, click Scan for malware.
3 In the Search by option, select Backup images.
4 In the search criteria, review and edit the following:
■ Policy name: Only supported policy types are listed.
■ Client name: Displays the clients that have backup images for a supported
policy type.
■ Policy type: Select the policy type as Cloud-Object-Store.
■ Type of backup
■ Copies: If the selected copy does not support instant access, then the
backup image is skipped for the malware scan.
■ Disk pool: MSDP (PureDisk), OST (DataDomain) and AdvancedDisk
storage type disk pools are listed.
■ Disk type: MSDP (PureDisk), OST (DataDomain) and AdvancedDisk disk
types are listed.
■ Malware scan status
■ For the Select the timeframe of backups, verify the date and the time
range or update it.
5 Click Search after you select the search criteria, and ensure that the selected
scan host is active and available.
6 From the Select the backups to scan table select one or more images for
scan.
7 In the Select a malware scanner host pool list, select the appropriate host
pool name.
Note: The scan host from the selected scan host pool must be able to access
the instant access mount created on the storage server, which is configured
with the NFS or SMB share type.
Note: Any backup images that fail validation are ignored. Malware scanning
is supported for the backup images that are stored on storage with instant
access capability and for the supported policy types only.
■ In progress
■ Pending
Note: You can cancel the malware scan for one or more in-progress and
pending jobs.
Warning: Scan is limited to only 100 images. Adjust the date range and try
again.
10 After the scan is initiated, the Malware Scan Progress is displayed. The
following are the status fields:
■ Not scanned
■ Not infected
■ Infected
■ Failed
Note: Hover over the status to view the reason for the failed scan.
Any backup images that fail validation are ignored. Malware scanning is
supported for the backup images that are stored on storage with instant
access capability and for the supported policy types only.
■ Pending
■ In progress
For more information on the malware scan status, refer to the NetBackup Security
and Encryption Guide.
Chapter 3
Protecting Cloud object
store assets
This chapter includes the following topics:
■ Policy attributes
■ Adding conditions
Note: When you first enable a policy to use accelerator, the next backup
(whether full or incremental) is in effect a full backup. It backs up all objects
corresponding to Cloud objects queries. If that backup was scheduled as an
incremental, it may not be completed within the backup window.
■ NetBackup retains track logs for future accelerator backups. Whenever you add
a query, NetBackup does a full, non-accelerated backup for the queries that are
added to the list. The unchanged queries are processed as normal accelerator
backups.
■ If the storage unit that is associated with the policy cannot be validated when
you create the policy, it is validated later, when the backup job begins. If
accelerator does not support the storage unit, the backup fails. In the bpbrm
log, a message appears that is similar to one of the following:
Storage server %s, type %s, does not support image include.
Storage server type %s, does not support accelerator backup.
■ Accelerator requires that the storage have the OptimizedImage attribute enabled.
■ The Expire after copy retention can cause images to expire while the backup
runs. To synthesize a new full backup, the SLP-based accelerator backup needs
the previous backup.
■ To detect changes in metadata, NetBackup uses one or more cloud APIs per
object/blob. Hence, change detection time increases with the number of
objects/blobs to be processed. You may observe backups running longer than
expected in cases with little or no data change but a large number of objects.
■ If, in your environment, the metadata or tag of a given object always changes
(is added, removed, or updated) along with its data, evaluate using incremental
backups without accelerator instead of incremental backups with accelerator,
from a performance and cost viewpoint.
■ While creating a Cloud object store policy with multiple tag-based queries, you
can use a few simple rules to get the best effect with accelerator. Use the query
builder in the policy creation page, and create separate queries, one query per
tag. The accelerator-based policies perform best in this configuration.
Since a full backup requires more catalog space than an incremental one, replacing
incremental backups with full backups increases the catalog size.
For example, a 1 TB file system with one million objects needs a track log of
approximately 701 MB.
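As a rough sizing sketch, assuming the track log grows approximately linearly with the object count (an approximation, not a documented guarantee), 701 MB per one million objects works out to roughly 700 bytes of track log per object, so a bucket with 10 million objects would need on the order of 7 GB of track log space on the backup host.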
Note that if you modify the backup selection or stream count in an
accelerator-enabled policy, NetBackup creates a new track log. The older track
logs remain on the backup host.
If an object is not modified after the previous backup (either full or last
incremental, based on the incremental schedule used), then that object is not
included in the next incremental backup. Because of this behavior, empty paths
show up in the catalog and are rendered in the browse view of restore.
For example, you can configure an MSDP storage as the target for the first backup
copy, and configure a tape storage as the target for secondary or duplication
copies.
Step 1: Gather information about the Cloud object store account. Gather the following information about each bucket/container:
■ The account name: Credential and connection details mentioned in the account are used to access cloud resources using REST APIs during backup. An account is associated with a single region; hence, a policy can contain buckets or containers associated with that region only.
■ The bucket/container names.
■ The approximate number of objects in each bucket/container to be backed up.
■ The typical size of the objects.

Step 2: Group the objects based on backup requirements. Divide the different objects in the accounts into groups according to the different backup and archive requirements.

Step 3: Consider the storage requirements. The NetBackup environment may have some special storage requirements that the backup policies must accommodate.

Step 6: Select exactly what to back up. You do not need to back up entire buckets unless required. Create queries to select and back up only the required objects.
You can use a scale-out server if you have a large number of buckets in your
Cloud object store. NetBackup Snapshot Manager can scale out as many data
mover containers as needed at run time, and then scale them down when the
data protection jobs are completed. You do not need to worry about configuring
multiple backup hosts, and creating multiple policies to distribute the load across
these backup hosts.
■ Evaluate the requirement for NetBackup multistreaming in your environment.
For a given bucket, NetBackup creates one stream per query defined for the
bucket in the policy. If you want to use multistreaming, you can specify this while
creating the policy. To use multistreaming, you also need to configure the number
of jobs for the buckets as clients in the Client attributes section, under primary
server Host properties. Add the client name and set the Maximum data
streams as required.
Define policy attributes like name, storage type, job priority, and so on. See “Policy attributes” on page 43.
Select the account and objects to back up. See “Configuring the Cloud objects tab” on page 52.
Policy attributes
The following procedure describes how to select the attributes for the backup policy.
6 The Limit jobs per policy attribute limits the number of jobs that NetBackup
performs concurrently when the policy is run. By default the box is cleared and
NetBackup performs an unlimited number of backup jobs concurrently. Other
resource settings can limit the number of jobs.
A configuration can contain enough devices so that the number of concurrent
backups affects performance. To specify a lower limit, select Limit jobs per
policy and specify a value from 1 to 999.
7 In the Job priority field, enter a value from 0 to 99999. This number specifies
the priority that a policy has as it competes with other policies for resources.
The higher the number, the greater the priority of the job. NetBackup assigns
the first available resource to the policy with the highest priority.
8 The Media owner field is available when the Policy storage attribute is set
to Any Available. The Media owner attribute specifies which media server or
server group should own the media that backup images for this policy are
written to.
■ Any (default)-Allows NetBackup to select the media owner. NetBackup
selects a media server or a server group (if one is configured).
■ None-Specifies that the media server that writes the image to the media
owns the media. No media server is specified explicitly, but you want a
media server to own the media.
9 To activate the policy, select the option Go into effect at, and set the date and
time of activation. The policy must be active for NetBackup to use it. Make sure
that the date and time are set to the time that you want to resume backups.
To deactivate a policy, clear the option. Inactive policies are available in the
Policies list.
10 The Allow multiple data streams option is selected by default and is read-only.
This option allows NetBackup to divide automatic backups for each query into
multiple jobs. Because the jobs are in separate data streams, they can occur
concurrently.
Multi-stream jobs consist of a parent job to perform stream discovery and child
jobs for each stream. Each child job displays its job ID in the Job ID column in
the Activity monitor. The job ID of the parent job appears in the Parent Job
ID column, which is not displayed by default. Parent jobs display a dash (-) in
the Schedule column.
11 Select the Use Accelerator option to enable accelerator for the policy.
NetBackup Accelerator increases the speed of backups. The increase in speed
is made possible by the change detection techniques on the client. The backup
host or scale-out server uses the change detection techniques to identify the
changes that occurred between the last backup and the current state of the
Cloud object store’s objects/blobs. The client sends the changed data to the
media server in a more efficient backup stream. The media server combines
the changed data with the rest of the client's data that is stored in previous
backups.
If an object or portion of an object is already in storage and has not been
changed, the media server uses the copy in storage rather than reading it from
the client. The result is a full NetBackup backup.
12 Select the Disable for all clients option from the Client-side deduplication
options. NetBackup Cloud object store protection uses the backup host as
client.
13 The Keyword phrase attribute is a phrase that NetBackup associates with all
backups or archives based on the policy. Only the Windows and UNIX client
interfaces support keyword phrases.
Clients can use the same keyword phrase for more than one policy. The same
phrase for multiple policies makes it possible to link backups from related
policies. For example, use the keyword phrase “legal department documents”
for backups of multiple clients that require separate policies, but contain similar
types of data.
The phrase can be a maximum of 128 characters in length. All printable
characters are permitted, including spaces and periods. By default, the keyword
phrase is blank.
■ Full Backup-A complete backup of the objects that contain all of the data
objects and the log(s).
■ Differential Incremental Backup-Backup of the changed blocks since the
last backup. If you configure a differential incremental backup, you must
also configure a full backup.
■ Cumulative Incremental Backup-Backs up all the changed objects since
the last full backup. All objects are backed up if no previous backup was
done.
Specifies that the media server that writes to the media owns the media.
No media server is specified explicitly, but you want a media server to
own the media.
9 Specify a Retention period for the backups. This attribute specifies how long
NetBackup retains the backups. To set the retention period, select a period (or
level) from the list. When the retention period expires, NetBackup deletes
information about the expired backup. After the backup expires, the objects in
the backup are unavailable for restores. For example, if the retention is 2 weeks,
data can be restored from a backup that this schedule performs for only 2
weeks after the backup.
10 The Media multiplexing attribute specifies the maximum number of jobs from
the schedule that NetBackup can multiplex to any drive. Multiplexing sends
concurrent backup jobs from one or several clients to a single drive and
multiplexes the backups onto the media.
Specify a number from 1 through 32, where 1 specifies no multiplexing. Any
changes take effect the next time a schedule runs.
11 Click Add to add the attributes, or click Add and add another to add a different
set of attributes for another schedule.
To set the start of the time window, do one of the following:
■ Drag your cursor in the time table: Click the day and time when you'd like the window to start and drag it to the day and time when you'd like the window to close.
■ Use the settings in the dialog box: In the Start day field, select the first day that the window opens. In the Start time field, select the time that the window opens.

To set the end of the time window, do one of the following:
■ Drag your cursor in the time table: Click the day and time when you'd like the window to start and drag it to the day and time when you'd like the window to close.
■ Enter the duration of the time window: Enter a length of time in the Duration (days, hours, minutes) field.
■ Indicate the end of the time window: Select a day in the End day list, and select a time in the End time field.
Consider allowing extra time in the schedule in case the schedule starts late
due to factors outside of NetBackup. (Delays due to unavailable devices, for
example.) Otherwise, all backups may not have a chance to start.
4 As necessary, do any of the following:
Click Add and add another to save the time window and add another.
6 In the Select objects/blobs table, select the option Include all objects/blobs
in the selected buckets/containers to back up one or more entire buckets.
7 Under Buckets with no queries, select the buckets/containers to which you
want to add queries. If a bucket is previously selected to include all objects,
that bucket does not appear in this list. Click Add condition or Add tag
condition to add a condition or a tag condition. See “Adding conditions ”
on page 54 and “Adding tag conditions ” on page 55 for more details.
Adding conditions
NetBackup gives you the convenience of selectively backing up the objects/blobs
inside the buckets/containers using intelligent queries. You can add conditions
or tag conditions to select the objects/blobs inside a bucket/container that you
want to back up.
If you enable dynamic multi-streaming, all selected buckets and containers are
completely backed up. You cannot define any queries for the buckets or containers
that you have selected.
To add a condition:
1 While creating a policy, in the Cloud objects tab, click Add query, under
Queries.
2 In the Add a query dialog, enter a name for the query, and select the bucket(s)
to which you want to apply the query. In the list of buckets, you can see only
those buckets that are not selected to include all objects.
Note: While editing a query, you can see the buckets that are selected to
include all objects, but the edit option is disabled.
The Queries table shows the queries that you have added. You can search
through the queries using values in the Query name and Queries columns.
The values of the Queries column do not include the queries with Include all
objects/blobs in the selected buckets/containers option selected.
3 Select Include all objects in the selected buckets option to back up all the
objects in the selected bucket(s).
4 To add a condition, click Add condition.
You can make conditions by using either prefix or object. You cannot use
both prefix and object in the same query. Do not leave any empty fields in a
condition.
5 Select prefix or object from the drop-down, and enter a value in the text field.
Click Condition to add another condition. You can join the conditions by the
boolean operator OR.
6 Click Add to save the condition.
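
For illustration, a query that backs up objects under two prefixes would combine two prefix conditions with OR. The exact rendering is produced by the query builder, and the second prefix here is hypothetical, so treat this as a sketch:

(prefix eq 'OrganizationData/Fin/') or (prefix eq 'OrganizationData/HR/')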
■ The following blobs are tagged with the "Project": "Finance" tag key and value:
■ OrganizationData/Fin/accounts/account1/records1.txt
■ OrganizationData/Fin/accounts/account2/records2.txt
■ OrganizationData/Fin/accounts/account3/records3.txt
■ OrganizationData/Fin/accounts/monthly_expenses/Jul2022.rec
■ OrganizationData/Fin/accounts/monthly_expenses/Aug2022.rec
Copy a policy
Copying a policy lets you reuse similar policy attributes, schedules, and cloud
objects among your policies. You can also reuse complex queries by copying
policies, to save time.
To copy a policy:
1 On the left, click Policies. All the policies that you have the privilege to view
are displayed in the Policies tab.
2 Click the ellipsis menu (three dots) in the row of the policy that you want to
copy. Click Copy policy.
Alternatively, select the option in the row of the policy, click Copy policy at
the top of the table.
3 In the Copy policy dialog, optionally, change the name of the policy in the
Policy to copy field.
4 Enter the name of the new policy, in the New policy field.
5 Click Copy to initiate copying.
■ You can restore individual objects, select all objects under a set of folders,
or select all objects matching a set of prefixes.
■ A valid Cloud object store account is required to access the buckets, containers,
and objects/blobs. You can add the Cloud object store account-related
information to NetBackup while creating the account. The permission required
for restoring differs from the ones required for backup. If it helps, you can create
a separate Cloud object store account for recovery.
■ Ensure that you have permission to view and select the Cloud object store
account and the access host, so that you can select a recovery host for a policy
in the Cloud objects tab.
■ If required, you can use a different recovery host than the one used for Cloud
object store account validation. Ensure that the new recovery host has the
required ports opened and configured for communication from the backup host
or scale-out server to the cloud provider endpoint, using REST API calls.
■ You can plan to start multiple restore jobs in parallel for better throughput. You
can select objects for recovery as individual objects, or using a folder or prefix.
Note: These options can incur additional cloud storage costs to hold data for a
longer time. Avoid using them if you only need a temporary copy of the data that
you intend to delete after browsing the objects or copying them to another location.
To apply object retention locks or legal holds on the restored objects, you can select
multiple options during restoration to meet your organization's compliance and
retention requirements. You can select the options in the Recovery options page,
under Advanced restore options. See “Recovering Cloud object store assets”
on page 61.
To recover assets:
1 On the left, click Recovery. Under Regular recovery, click Start recovery.
2 In the Basic properties page, select Policy type as Cloud-Object-Store.
3 Click the Buckets/Containers field to select assets to restore.
■ In the Add bucket/container dialog, the default option displays all available
bucket/containers with completed backups. You can search the table using
the search box.
■ To add a specific bucket or container, select Add the bucket/container
details option. If you have selected an Azure Data Lake workload, select
Add files/directories.
Select the cloud provider, and enter the bucket/container name, and the
Cloud object store account name. For Azure workloads, specify the storage
account name, if available in the UI.
Note: In a rare scenario, you may not find the required bucket listed in the
table for selection, but you can see the same bucket listed in the catalog
view as part of a backup ID. In that case, select the bucket by manually
entering the bucket name, provider ID, and the Cloud object store account name
as per the backup ID. The backup ID is formed as:
<providerId>_<cloudAccountname>_<uniquename>_<timestamp>
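For illustration, with hypothetical values (provider ID amazon, account my-s3-account, bucket sales-data), a backup ID would look like: amazon_my-s3-account_sales-data_1695456000.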
The following warning message is displayed when images that are not
scanned are selected for recovery:
■ (Optional) Click Add prefix. In the Add prefix dialog, enter a prefix in the
search box to display relevant results in the table. Click Add to select all
the matching prefixes displayed in the table for recovery. The selected
prefixes are displayed in a table below the selected objects/blobs. Click
Next.
Note: Clean file recovery (Skip infected files) as part of recovery is not
supported for Cloud-Object-Store.
8 In the Recovery options page, you can select whether you want to restore to
the source bucket or container, or use a different one. These are the Object
restore options:
■ Restore to the original bucket or container: Select to recover to the same
bucket or container from where the backup was taken.
Optionally:
■ Add a prefix for the recovered assets in the Add a prefix field.
■ If you have selected an Azure Data Lake workload, enter the Directory
to restore.
Note: If you have selected Include all objects/blobs and folders, in step
7, the Restore objects/blobs or prefixes to different destinations option
is disabled.
9 Select a Recovery host. The recovery host that is associated with the Cloud
object store account is displayed by default. If required, change the Backup
host. If the Cloud object store account uses a scale-out server, this field is
disabled.
10 Optionally, to overwrite any existing objects or blobs using the recovered assets,
select Overwrite existing objects/blobs.
11 (Optional) To override the default priority of the restore job, select Override
default priority, and assign the required value.
12 In the Advanced restore options:
■ To apply the original object lock attributes from the backed-up objects,
select Retain original object lock properties.
■ To change the values of different properties, select Customize object lock
properties. From the Object lock mode list:
■ Select Compliance or Governance for Amazon or other S3 workloads.
■ Select Locked or Unlocked for Azure workloads.
■ Select a future date and time until which the object lock is valid. Note that
the recovered object is locked until this specified date and time.
■ Select Object lock legal hold status to implement it on the restored objects.
See “Configuring Cloud object retention properties” on page 61.
The Advanced restore options are not applicable to the Azure Data Lake
workload.
13 In the Malware scan and recovery options:
Note: These options are visible only when you select the Scan for malware
before recovery in the Recovery details page.
■ (Not recommended) Select If any files are infected with malware, recover
all files, including infected files option to recover files infected with
malware.
■ Select If any files are infected with malware, do not perform the
recovery job option. By default this option is selected and recommended.
■ Select the desired Scan host pool.
Note: For recovery followed by malware scan, the Allow recovery of files
infected by malware option is always enabled by default as clean recovery
is not supported for Cloud-Object-Store.
14 In the Review page, view the summary of all the selections that you made, and
click:
■ Start recovery
Or
■ Reduced acceleration during the first full backup, after upgrade to version 10.5
■ After backup, some files in the shm folder and shared memory are not cleaned
up.
■ Backup fails with default number of streams with the error: Failed to start
NetBackup COSP process.
■ Backup fails or becomes partially successful on GCP storage for objects with
content encoding as GZIP.
■ Recovery for the original bucket recovery option starts, but the job fails with
error 3601
■ Restore fails: "Error bpbrm (PID=3899) client restore EXIT STATUS 40: network
connection broken"
■ Access tier property not restored after overwriting the existing object in the
original location
■ Backup failed and shows a certificate error with Amazon S3 bucket names
containing dots (.)
■ Azure backup jobs fail when space is provided in a tag query for either tag key
name or value.
■ Bucket listing of a cloud provider fails when adding a bucket in the Cloud objects
tab
■ AIR import image restore fails on the target domain if the Cloud store account
is not added to the target domain
■ Backup for Azure Data Lake fails when a back-level media server is used with
backup host or storage server version 10.3
■ Backup fails partially in Azure Data Lake: "Error nbpem (pid=16018) backup of
client
■ Recovery for Azure Data Lake fails: "This operation is not permitted as the path
is too deep"
■ Recovery error: "Invalid alternate directory location. You must specify a string
with length less than 1025 valid characters"
■ Restore fails: "Cannot perform the COSP operation, skipping the object:
[/testdata/FxtZMidEdTK]"
■ Wait for all the running backups and restores to finish, and then restart the
backup host. This clears the shared memory.
1. Edit the older policy, and check that the Allow multiple data streams option
is selected by default. Save the policy.
2. Retry the operation.
When you back up Azure blobs using an accelerator-enabled Cloud object store
policy that has an OR query across multiple tags, the backup shows a loss of
acceleration or backs up unchanged data.
This happens because the ordering of objects across multiple tags is not as
expected for accelerator. A few objects are not found in the track log even if
they exist in it; hence, such objects are backed up repeatedly without getting
the accelerator benefit.
Workaround
Do not use the OR condition while combining multiple tag conditions for Azure.
Instead, create a separate query per tag.
For example, say that you have the following query, named datatype: (tagKey eq
'type' and tagValue eq 'text') or (tagKey eq 'type' and tagValue eq 'none').
You can create two queries instead: datatype-text with the query (tagKey eq 'type'
and tagValue eq 'text'), and datatype-none with the query (tagKey eq 'type' and
tagValue eq 'none').
Note: This results in the first backup running without any acceleration for these
new queries. For subsequent backups, the problem is resolved.
Note: It is recommended not to use the csconfig CLI to update the alias
corresponding to the Cloud object store account. The correct way to update it is
through the Edit workflow or the create-or-update API. Aliases with the same
name as the Cloud object store account are the aliases corresponding to that
account.
Workaround
Within the NetBackup domain, the name must be unique across the Cloud object
store accounts, Cloud storage servers, and MSDP-C LSUs, because they share a
single namespace. Hence, the following usage scenarios are possible:
Case 1: When there is no valid Cloud storage server or MSDP-C LSU with the
same name as the Cloud object store account in the environment.
■ Gather the Cloud object store account details as per your environment and
cross-check the details obtained.
■ Optionally, if the Alias corresponding to the Cloud object store account exists,
use the csconfig CLI and note down the details of the alias.
■ Use the following command to list all instances for the type and locate
the Cloud object store account and its instance:
<install-path>/csconfig cldinstance -i -pt <provider_type>
■ Use the following command to get the details of the instance and the
Cloud object store account:
<install-path>/csconfig cldinstance -i -in <instance name>
The bucket list is empty during policy selection

In NetBackup, you can add a Cloud object store account by adding a region entry,
without specifying a correct region location constraint. The account gets added
successfully because some private clouds might not have a region configured.
When you use such an invalid region in an account, the bucket list may return
empty.
Workaround:
Do the following:
1 Call the getBucketLocation API on the bucket to retrieve the correct location
constraint for your account configuration (see the example after this
procedure).
If the API returns a blank location constraint, use 'us-east-1' as the region
location constraint.
2 Correct the region details by editing the account configuration. See “Adding
Cloud object store accounts” on page 18.
3 To edit the cloud configuration, do the following:
■ On the left, click Host Properties.
■ Select the required primary server and connect it. Click Edit primary server.
■ Click Cloud storage.
■ Optionally, enter your cloud provider name in the search field, to filter the
list.
■ In the row corresponding to your cloud provider service host, enter the
correct region details and save.
Alternatively, delete the account and recreate it with the correct region location
constraint.
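
As an illustrative sketch of step 1, using the AWS CLI with a placeholder bucket name:

aws s3api get-bucket-location --bucket <bucket-name>
# A bucket in us-east-1 returns a null location constraint, for example:
# { "LocationConstraint": null }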
When you create a second account on Cloudian by selecting an existing region from
the list, the provider may not return a value in the region location constraint
field. An account created by selecting such a region from the list fails.
Workaround:
Use the NetBackup Asset Query APIs to create an account. The region details can
be provided in the payload:
"s3RegionDetails": [
{ "regionId": "us-east-1",
"regionName": "<region name same as listed from prior account>",
}
]
Workaround:
When the error is not fatal, the restore job is a partial success. Check the Activity
Monitor to see the list of objects that cannot be restored. Try restoring to a different
location (bucket/container or different account) to check if the problem is with the
destination cloud account or bucket settings.
When the error is fatal, the restore job fails. Check the nbcosp logs to determine
the object for which the restore has failed. Use granular object selection for the next
restore, and skip the earlier failed object while selecting the objects.
Refer to your cloud provider documentation to check if you use a feature or any
metadata that the cloud vendor does not support completely, or if it needs any more
configuration. Fix the object with the right attributes in the Cloud object store and
start a new backup job. Once this backup completes, the objects can be restored
without this workaround.
Workaround
Although the bucket list is not available, you can always manually add buckets in
the Cloud objects tab for backup.
When it is a DNS issue, you can optionally list buckets using a temporary
workaround by adding an IP hostname-mapping entry in the /etc/hosts file. When
only virtual-hosted style requests are supported, first prefix the endpoint with
a random bucket name when using commands like ping, dig, and nslookup to
determine the IP of the cloud endpoint. For example,
ping randombucketname.s3-fips.us-east-1.amazonaws.com
You can then add the resulting IP along with the actual endpoint name (without the
random bucket name prefix) in the /etc/hosts file.
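
For illustration, the lookup and mapping could look like the following; the IP address shown is hypothetical and must be replaced with the value your lookup returns:

# Resolve the endpoint through a random bucket-name prefix (virtual-hosted style):
dig +short randombucketname.s3-fips.us-east-1.amazonaws.com
# Suppose it returns 3.5.12.34; map the real endpoint name to that IP:
echo "3.5.12.34 s3-fips.us-east-1.amazonaws.com" >> /etc/hosts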
Note that this is a temporary workaround to edit DNS entries on the computer for
bucket listing. Remove them after the policy configuration is done, unless the cloud
endpoint is a private cloud setup that can use static IP addresses permanently.
You must use a path with directory depth less than 60, excluding container.
Note: This error does not appear on the Activity Monitor. For more details, refer
to NetBackup API documentation.
Note: In this issue, files that are uploaded to the Azure portal using the Upload
option are not included.
Workaround
Do any of the following:
■ Try restore operation on the same container to a different directory.
■ Try restore operation to a different container.
■ Try restore operation to a different destination.
■ Delete the original directory, and then try to restore to the same location.
{"level":"error","Error Code":"SignatureDoesNotMatch","Message":"The
request signature we calculated does not match the signature you
provided. Check your key and signing method.
","time":"2023-07-25T10:58:48.130601182Z","caller":"main.validateNBCosCreds:s3_ops.go:1634",
"message":"Error in getBucketLocation for credential validation"}
{"level":"error","errmsg":"Unable to validate creds.","storage
server":"aws-acc","time":"2023-07-
2,51216,309,366,474,1690282728130,1673,140536982484736,0:,0:,0:,2,(28|S113:ERR
- OCSD reply with error,error_code=1003 error_msg:
updateStorageConfig Failed as credential validation failed|)
2,51216,309,366,475,1690282728131,1673,140536982484736,0:,0:,0:,2,(28|S60:ERR
- operation_to_ocsd failed, storageid=aws-acc, retval=23|)
0,51216,526,366,6,1690282728131,1673,140536982484736,0:,132:Credential
validation failed for given account,
Workaround:
Update the credentials and try to create the account again.
25T11:14:14.761525555Z","caller":"main.(*OCSS3).
listBucketsDetailsCOSP:s3_ops.go:5261","message":"Unable
to listBucketsDetailsCOSP"} {"level":"debug","status code"
:403,"errmsg":"AccessDenied: Access Denied\n\tstatus code: 403,
request id: K7JVVPWAGW4KYSQ6, host id:
Workaround:
Add the required permissions. See “Permissions required for Amazon S3 cloud
provider user” on page 14.

Restore failures due to object lock
</Error>\n","time":"2023-07-25T05:56:00.708117368Z","caller":
"internal/logging.ExtendedLog.Log:zerolog_wrapper.go:18","message":"SDK
log entry"}
{"level":"debug","status code":403,"errmsg":"AccessDenied:
Access Denied\n\tstatus code: 403, request id: ZNT4GXHP70HX573A,
host id:
3scBmke9LmOwtuK5lnYv0ozyKgbne+ey04qXtSt6s/OQbpSCyfxiwvdi2CPG3cHU+H/ztz7C3mHeoX5Cnvb2xg==",
"time":"2023-07-25T05:56:00.708145345Z","caller":"main.s3StatusCode:s3_ops.go:8447",
"message":"s3StatusCode(): get http status code"}
{"level":"error","error":"AccessDenied: Access Denied\n\tstatus code:
403,
request id: ZNT4GXHP70HX573A,
host id:
3scBmke9LmOwtuK5lnYv0ozyKgbne+ey04qXtSt6s/OQbpSCyfxiwvdi2CPG3cHU+H/ztz7C3mHeoX5Cnvb2xg==",
"object
key":"cudtomer35jul/squash.txt","time":"2023-07-25T05:56:00.708160142Z",
"caller":"main.(*OCSS3).commitBlockList:s3_ops.go:2655",
"message":"s3Storage.svc.PutObjectRetention Failed to Put
ObjectRetention"}
Workaround:
You must have the required permissions for object retention. These are the
necessary permissions that your role must have:
"Version": "2012-10-17",
"Statement": [
"Sid": "ObjectLock",
"Effect": "Allow",
"Action": [
"s3:PutObjectRetention",
"s3:BypassGovernanceRetention"
],
"Resource": [
"*"
}