Release 10.1.1
NetBackup™ Web UI Cloud Object Store
Administrator's Guide
Last updated: 2022-12-12
Legal Notice
Copyright © 2022 Veritas Technologies LLC. All rights reserved.
Veritas, the Veritas Logo, and NetBackup are trademarks or registered trademarks of Veritas
Technologies LLC or its affiliates in the U.S. and other countries. Other names may be
trademarks of their respective owners.
This product may contain third-party software for which Veritas is required to provide attribution
to the third party (“Third-party Programs”). Some of the Third-party Programs are available
under open source or free software licenses. The License Agreement accompanying the
Software does not alter any rights or obligations you may have under those open source or
free software licenses. Refer to the Third-party Legal Notices document accompanying this
Veritas product or available at:
https://www.veritas.com/about/legal/license-agreements
The product described in this document is distributed under licenses restricting its use, copying,
distribution, and decompilation/reverse engineering. No part of this document may be
reproduced in any form by any means without prior written authorization of Veritas Technologies
LLC and its licensors, if any.
The Licensed Software and Documentation are deemed to be commercial computer software
as defined in FAR 12.212 and subject to restricted rights as defined in FAR Section 52.227-19
"Commercial Computer Software - Restricted Rights" and DFARS 227.7202, et seq.
"Commercial Computer Software and Commercial Computer Software Documentation," as
applicable, and any successor regulations, whether delivered by Veritas as on premises or
hosted services. Any use, modification, reproduction release, performance, display or disclosure
of the Licensed Software and Documentation by the U.S. Government shall be solely in
accordance with the terms of this Agreement.
Technical Support
Technical Support maintains support centers globally. All support services will be delivered
in accordance with your support agreement and the then-current enterprise technical support
policies. For information about our support offerings and how to contact Technical Support,
visit our website:
https://www.veritas.com/support
You can manage your Veritas account information at the following URL:
https://my.veritas.com
If you have questions regarding an existing support agreement, please email the support
agreement administration team for your region as follows:
Japan [email protected]
Documentation
Make sure that you have the current version of the documentation. Each document displays
the date of the last update on page 2. The latest documentation is available on the Veritas
website:
https://sort.veritas.com/documents
Documentation feedback
Your feedback is important to us. Suggest improvements or report errors or omissions to the
documentation. Include the document title, document version, chapter title, and section title
of the text on which you are reporting. Send feedback to:
You can also see documentation information or ask a question on the Veritas community site:
http://www.veritas.com/community/
https://sort.veritas.com/data/support/SORT_Data_Sheet.pdf
Contents
Recovery for Cloud object store using web UI for original bucket
recovery option starts but job fails with error 3601 ........................ 50
Recovery Job does not start ........................................................... 50
Restore fails: "Error bpbrm (PID=3899) client restore EXIT STATUS
40: network connection broken" ................................................. 51
Access tier property not restored after overwrite existing to original
location ................................................................................ 51
Reduced accelerator optimization in Azure for OR query with multiple
tags ..................................................................................... 51
Backup fails and shows a certificate error with Amazon S3 bucket
names containing dots (.) ......................................... 52
Azure backup job fails when space is provided in tag query for either
tag key name or value. ........................................................... 53
The Cloud object store account has encountered an error ..................... 53
Bucket list empty when selecting it in policy selection ........................... 54
Creating second account on Cloudian fails by selecting existing region
........................................................................................... 55
Restore failed with 2825 incomplete restore operation .......................... 56
Bucket listing of cloud provider fails when adding bucket in Cloud
objects tab ............................................................................ 57
AIR import image restore fails on the target domain if the Cloud store
account is not added in target domain. ........................................ 58
Chapter 1
Introduction
This chapter includes the following topics:
Note: Cloud vendors may levy substantial charges for data egress for moving data
out of their network. Check your cloud provider pricing for data-out before configuring
a backup policy that transfers data out of one cloud to another cloud region or an
on-premises data center.
NetBackup can protect Azure Blob Storage, and a wide variety of S3 API-compatible
object stores like AWS S3, Google Cloud Storage (GCS), Hitachi Cloud Platform
object store, and so on. For a complete list of compatible object stores, refer to the
NetBackup Hardware Compatibility List (HCL).
Features of NetBackup Cloud object store workload support
■ Integration with NetBackup role-based access control (RBAC) – The NetBackup
web UI provides the Default cloud object store Administrator RBAC role to control
which NetBackup users can manage Cloud object store operations in NetBackup.
The user does not need to be a NetBackup administrator to manage Cloud object
store operations.
■ Management of Cloud object store accounts – You can configure a single
NetBackup primary server for multiple Cloud object store accounts, across
different cloud vendors as required.
■ Authentication and credentials – Strong emphasis on security. For protecting
Azure Blob Storage, the Storage account and Access key must be specified. For
all S3 API-compliant cloud vendors, Access key and Secret key are supported.
For Amazon S3, in addition to Access key, the IAM role and Assume role
mechanisms of authentication are also supported.
■ Intelligent selection of cloud objects – Within a single policy, NetBackup
provides flexibility to configure different queries for different buckets or
containers. Some buckets or containers can be configured to back up all objects
in them. You can also configure some buckets and containers with intelligent
queries to identify objects based on:
■ If you plan to use proxy for communication with cloud endpoints, gather the
required details of the endpoints.
■ Get the Cloud account credentials, and any additional required parameter, as
per the authentication type. These credential details should have required
Permissions required for Amazon S3 cloud provider user
The required permissions for backup and recovery are different. Consider whether
it is helpful to create separate accounts for backup and recovery. To restore to
a different Cloud object store account during recovery, you need to use the
option to restore to a bucket other than the original one.
Note: The Cloud object store account name shares a namespace with Cloud storage
server and MSDP-C LSU names.
For Cloud object store accounts, NetBackup supports a variety of cloud providers
using AWS S3 compatible APIs (for example, Amazon, Google, Hitachi, and so on),
other than Microsoft Azure. For such providers, you need to provide AWS S3
compatible account access details to add the credentials (that is, Access Key ID
and Secret Access Key) of the provider.
To add a Cloud object store account:
1 On the left, click Cloud object store under Workloads.
2 In the Cloud object store account tab, click Add.
3 Enter a name for the account in Cloud object store name field, select a
provider from the list Select Cloud object store provider, and select a backup
host from Backup host for validation list. Credential validation, backup, and
recovery of the Cloud object stores are supported by NetBackup 10.1 or later
on RHEL media server.
4 Select a region from the available list of regions. Click Add above the Region
table to add a new region.
See “Adding a new region” on page 20. Region is not available for some Cloud
object store providers.
For GCP, which supports dual-region buckets, select the base region during
account creation. For example, if a dual-region bucket is in the regions
US-CENTRAL1 and US-WEST1, select US as the region during account creation to
list the bucket.
5 In Access settings page: Select a type of access method for the account:
■ Access credentials-In this method, NetBackup uses the Access key ID,
and the secret access key to access and secure the Cloud object store
account. If you select this method, perform the subsequent steps 6 to 10
as required to create the account.
■ IAM role (EC2)-NetBackup retrieves the IAM role name and the credentials
that are associated with the EC2 instance. The selected backup host must
be hosted on the EC2 instance. Make sure the IAM role associated with
EC2 instance has required permissions to access the required cloud
resources for Cloud object store protection. Make sure that you select
correct region as per permissions given to EC2 instance while configuring
the Cloud object store account with this option. If you select this option,
perform the optional steps 7 and 8 as required, and then perform steps 9
and 10.
■ Assume role-In this method, NetBackup uses the provided access key, the secret
access key, and the role ARN to retrieve temporary credentials for the same
account and cross account. Perform the steps 6 to 10 as required to create
the account.
See “Creating cross account access in AWS ” on page 16.
■ Credentials broker- NetBackup retrieves the credentials to access the
cloud resources required for Cloud object store protection.
6 You can add existing credentials or create new credentials for the account:
■ To select an existing credential for the account, select the Select existing
credentials option, select the required credential from the table, and click
Next.
■ To add a new credential for the account, select Add new credentials. Enter
a Credential name, Tag, and Description for the new credential.
For cloud providers supported through AWS S3 compatible APIs, use AWS
S3 compatible credentials. Specify the Access key ID and Secret access
key.
For Microsoft Azure cloud provider, provide Azure Blob credentials, specify
Storage account and Access key.
■ If you use Assume role as the access method, specify the Amazon
Resource Name (ARN) of the role to use for the account, in the Role ARN
field.
7 (Optional) Select Use SSL if you want to use the SSL (Secure Sockets Layer)
protocol for user authentication or data transfer between NetBackup and cloud
storage provider.
■ Authentication only: Select this option, if you want to use SSL only at the
time of authenticating users while they access the cloud storage.
■ Authentication and data transfer: Select this option, if you want to use
SSL to authenticate users and transfer the data from NetBackup to the
cloud storage, along with user authentication.
■ Check certificate revocation (IPv6 not supported for this option): For
all the cloud providers, NetBackup provides a capability to verify the SSL
certificates against the CRL (Certificate Revocation List). If SSL is enabled
and the CRL option is enabled, each non-self-signed SSL certificate is
verified against the CRL. If the certificate is revoked, NetBackup does not
connect to the cloud provider.
8 (Optional) Select the Use proxy server option to use proxy server and provide
proxy server settings. Once you select the Use proxy server option, you can
specify the following details:
■ Proxy host–Specify IP address or name of the proxy server.
■ Proxy Port–Specify port number of the proxy server.
■ Proxy type– You can select one of the following proxy types:
■ HTTP
Note: You need to provide the proxy credentials for the HTTP proxy type.
■ SOCKS
■ SOCKS4
■ SOCKS5
■ SOCKS4A
■ Proxy authentication type– You can select one of the following:
■ None– Authentication is not enabled. User name and password are not
required.
■ Basic– User name and password are required.
■ NTLM– User name and password are required.
User name is the user name of the proxy server.
Password can be empty. You can use a maximum of 256 characters.
9 Click Next.
10 In the Review page, review the entire configuration of the account, and click
Finish to save the account.
NetBackup creates the Cloud object store account only after validating the
associated credentials with the connection information provided. If you face an
error, update the settings as per the error details. Also, check that the
provided connection information and credentials are correct, and that the backup
host that you assign for validation can connect to the cloud provider endpoints
using the provided information.
■ UNIX:
/usr/openv/var/global/cloud/
Note: In a cluster deployment, NetBackup database path points to the shared disk,
which is accessible from the active node.
Note: Ensure that you do not change the file permission and ownership of the
cacert.pem file.
To add a CA
You must get a CA certificate from the required cloud provider and update it in the
cacert.pem file. The certificate must be in .PEM format.
1 Open the cacert.pem file.
2 Append the self-signed CA certificate on a new line and at the beginning or
the end of the cacert.pem file.
Add the following information block:
Certificate Authority Name
==========================
-----BEGIN CERTIFICATE-----
<Certificate content>
-----END CERTIFICATE-----
==========================
-----BEGIN CERTIFICATE-----
<Certificate content>
-----END CERTIFICATE-----
3 Select the endpoint access style for the cloud service provider. If your cloud
service provider additionally supports virtual hosting of URLs, select Virtual
Hosted Style, otherwise select Path Style.
4 Specify the HTTP and HTTPS ports to use for the region.
5 Click Add. The added region appears in the Region table in the Basic
properties page.
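The append in step 2 above can also be scripted. The following is a minimal Python sketch, under these assumptions: the helper name `append_ca_certificate` is illustrative (not a NetBackup tool), and the bundle path is the UNIX path given earlier in this section. It performs only a basic PEM-marker sanity check before appending the certificate with its name/separator block.

```python
from pathlib import Path

PEM_BEGIN = "-----BEGIN CERTIFICATE-----"
PEM_END = "-----END CERTIFICATE-----"

def append_ca_certificate(cacert_path, ca_name, cert_pem):
    """Append one provider CA certificate to the cacert.pem bundle,
    preceded by the name/separator block shown above.
    Illustrative helper only -- not part of NetBackup."""
    cert = cert_pem.strip()
    if not (cert.startswith(PEM_BEGIN) and cert.endswith(PEM_END)):
        raise ValueError("certificate must be in PEM format")
    path = Path(cacert_path)
    existing = path.read_text() if path.exists() else ""
    # Name line, "=" separator, then the certificate body.
    block = "\n".join([ca_name, "=" * 26, cert]) + "\n"
    # Append on a new line at the end, keeping the existing bundle intact.
    path.write_text(existing.rstrip("\n") + ("\n" if existing else "") + block)
```

Remember that the file permissions and ownership of cacert.pem must not change, so run any such script as the same user that owns the file.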
■ Setting up attributes
■ Adding conditions
Note: When you first enable a policy to use accelerator, the next backup
(whether full or incremental) is in effect a full backup: It backs up all objects
corresponding to Cloud objects queries. If that backup was scheduled as an
incremental, it may not complete within the backup window.
■ NetBackup retains track logs for future accelerator backups. Whenever you add
a query, NetBackup does a full non-accelerated backup for the queries that are
added in the list. The unchanged queries are processed as normal accelerator
backups.
■ If the storage unit that is associated with the policy cannot be validated when
you create the policy, it is validated later when the backup job begins. If
accelerator does not support the storage unit, the backup fails. In the bpbrm
log, a message appears that is similar to one of the following: Storage server
%s, type %s, does not support image include. Storage server type %s, does
not support accelerator backup.
■ Accelerator requires that the storage has the OptimizedImage attribute enabled.
■ The Expire after copy retention can cause images to expire while the backup
runs. To synthesize a new full backup, the SLP-based accelerator backup needs
the previous backup.
■ To detect change in metadata, NetBackup uses one or more cloud APIs per
object/blob. Hence, change detection time increases with number of object/blobs
to be processed. You may observe backups running longer than expected, for
cases with small or no data change but having a large number of objects.
■ If, in your environment, the metadata or tags of a given object always change
(are added, removed, or updated) along with its data, evaluate incremental
backups without accelerator against incremental backups with accelerator from a
performance and cost viewpoint.
■ While creating a Cloud object store policy with multiple tag-based queries,
you can use a few simple rules to get the best effect with accelerator. Using
the query builder in the policy creation page, create separate queries, one
query per tag. The accelerator-based policies perform best in this configuration.
your incremental backups to fulls, you must evaluate the advantage of accelerator
full backups against the greater catalog space that full backups require as compared
to incremental backups.
Step 1: Gather information about the Cloud object store account.
Gather the following information about each bucket/container:
■ The account name: The credential and connection details mentioned in the
account are used to access cloud resources using REST APIs during backup. An
account is associated with a single region, hence a policy can contain only the
buckets/containers associated with that region.
■ The bucket/container names.
■ The approximate number of objects in each bucket/container to be backed up.
■ The typical size of the objects.
Step 2: Group the objects based on backup requirements.
Divide the different objects in the accounts into groups according to the
different backup and archive requirements.
Step 3: Consider the storage requirements.
The NetBackup environment may have some special storage requirements that the
backup policies must accommodate.
Step 5: Select exactly what to back up.
You do not need to back up all objects unless required. Create queries to select
and back up only the required objects.
Define policy attributes like name, storage See “Setting up attributes” on page 31.
type, job priority and so on.
Select the account and objects to backup. See “Configuring the Cloud objects tab”
on page 38.
Setting up attributes
To set up attributes:
1 On the left, click Policies, under Protection.
2 Enter a name of the policy in the Policy name field.
3 In the Destination section, configure the following data storage parameters:
■ Select the Cloud-Object-Store option from the Policy type drop-down.
■ The Data classification attribute specifies the classification of the storage
lifecycle policy that stores the backup. For example, a backup with a gold
classification must go to a storage unit with a gold data classification. By
default, NetBackup provides four data classifications: platinum, gold, silver,
and bronze.
This attribute is optional and applies only when the backup is to be written
to a storage lifecycle policy. If the list displays No data classification, the
policy uses the storage selection that is displayed in the Policy storage list.
If a data classification is selected, all the images that the policy creates are
tagged with the classification ID.
■ The Policy storage attribute specifies the storage destination for the policy’s
data. You can override these selections from the Schedule tab.
■ Any available-If you select this option, NetBackup tries to store data
on locally-attached storage units first. Select NetBackup or DataStore
from the Policy volume pool drop-down. The Policy volume pool
attribute specifies the default volume pool where the backups for the
policy are stored. A volume pool is a set of media that is grouped for
use by a single application. The volume pool is protected from access
by other applications and users.
8 To activate the policy, select the option Go into effect at, and set the date
and time of activation. The policy must be active for NetBackup to use it.
Make sure that the date and time are set to the time that you want to resume
backups.
To deactivate a policy, clear the option. Inactive policies still appear in the
Policies list.
9 Select the Allow multiple data streams option to allow NetBackup to divide
automatic backups for each query into multiple jobs. Because the jobs are in
separate data streams, they can occur concurrently.
Multistreamed jobs consist of a parent job to perform stream discovery and
child jobs for each stream. Each child job displays its own job ID in the Job
ID column in the Activity monitor. The job ID of the parent job appears in the
Parent Job ID column, which is not displayed by default. Parent jobs display
a dash (-) in the Schedule column.
10 Enable or disable Disable client-side deduplication:
■ Enable-The clients do not deduplicate their own data and do not send their
backup data directly to the storage server. The NetBackup clients send
their data to a deduplication media server. That server deduplicates the
data and then sends it to the storage server.
■ Disable-The clients deduplicate their own data. They also send it directly
to the storage server. Media server deduplication and data transport are
bypassed.
11 Select the Use accelerator option to enable accelerator for the policy.
NetBackup accelerator increases the speed of backups. The increase in speed
is made possible by change detection techniques on the client. The backup
host uses the change detection techniques to identify the changes that occurred
between the last backup and the current state of the Cloud object store's objects/blobs.
The client sends the changed data to the media server in a more efficient
backup stream. The media server combines the changed data with the rest of
the client's data that is stored in previous backups.
If an object or portion of an object is already in storage and has not been
changed, the media server uses the copy in storage rather than reading it from
the client. The end result is a full NetBackup backup.
12 The Keyword phrase attribute is a phrase that NetBackup associates with all
backups or archives based on the policy. Only the Windows and UNIX client
interfaces support keyword phrases.
Clients can use the same keyword phrase for more than one policy. The same
phrase for multiple policies makes it possible to link backups from related
policies. For example, use the keyword phrase “legal department documents”
for backups of multiple clients that require separate policies, but contain similar
types of data.
The phrase can be a maximum of 128 characters in length. All printable
characters are permitted including spaces and periods. By default, the keyword
phrase is blank.
9 Specify a Retention period for the backups. This attribute specifies how long
NetBackup retains the backups. To set the retention period, select a time period
(or level) from the list. When the retention period expires, NetBackup deletes
information about the expired backup. After the backup expires, the objects in
the backup are unavailable for restores. For example, if the retention is 2 weeks,
data can be restored from a backup that this schedule performs for only 2
weeks after the backup.
10 The Media multiplexing attribute specifies the maximum number of jobs from
the schedule that NetBackup can multiplex to any drive. Multiplexing sends
concurrent backup jobs from one or several clients to a single drive and
multiplexes the backups onto the media.
Specify a number from 1 through 32, where 1 specifies no multiplexing. Any
changes take effect the next time a schedule runs.
11 Click Add to add the attributes, or click Add and add another to add a
different set of attributes for another schedule.
■ One for the backups whose window opens each day for a specific amount of time.
■ Another for the backups whose window stays open all week.
Note: The permission to get the list of buckets is not required to back up the
objects in a bucket.
In the Cloud objects tab, click Remove in the row of any bucket/container
name in the Buckets/Containers table to remove it from the policy. Enter a
keyword in the search box to filter the table.
3 To add a query to the selected buckets/containers, click Add query under
Queries.
4 Enter a name for the query, and select the buckets that you want to filter using
the query.
5 In the Select objects/blobs table, select the option Include all objects/blobs
in the selected buckets/containers to back up the entire bucket(s).
6 Under Buckets with no queries, select the buckets/containers to which you
want to add queries. If a bucket was previously selected to include all objects,
that bucket does not appear in this list. Click Add condition or Add tag
condition to add a condition or a tag condition. See “Adding conditions”
on page 39 and “Adding tag conditions” on page 40, respectively, for
more details.
Adding conditions
NetBackup gives you the convenience of selectively backing up the objects/blobs
inside the buckets/containers using intelligent queries. You can add conditions
or tag conditions to select the objects/blobs inside a bucket/container that you
want to back up.
To add a condition:
1 While creating a policy, in the Cloud objects tab, click Add query, under
Queries.
2 In the Add a query dialog, enter a name for the query, and select the
bucket(s) to which you want to apply the query. In the list of buckets, you can
see only those buckets that are not selected to include all objects.
Note: While editing a query, you can see the buckets that are selected to
include all objects, but the edit option is disabled.
The Queries table shows the queries that you have added. You can search
through the queries using values in Query name and Queries columns. The
values of Queries column do not include the queries with Include all
objects/blobs in the selected buckets/containers option selected.
3 Select Include all objects in the selected buckets option to back up all the
objects in the selected bucket(s).
4 To add a condition, click Add condition.
You can make conditions by using either prefix or object. You cannot use
both prefix and object in the same query. Do not leave any empty fields in a
condition.
5 Select prefix or object from the drop-down, enter a value in the text field. Click
Condition to add another condition. You can join the conditions by the boolean
operator OR.
6 Click Add to save the condition.
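The matching rules in the steps above (conditions joined by the boolean OR, prefix and object not mixed in one query, no empty values) can be previewed with a small sketch. The helper below is purely illustrative and is not part of NetBackup; it models a condition as a (field, value) pair.

```python
def matches_query(object_name, conditions):
    """Return True if the object matches any condition (conditions are
    joined by OR, as in the query builder). Each condition is a
    (field, value) pair where field is 'prefix' or 'object'.
    Illustrative only -- not a NetBackup API."""
    fields = {field for field, _ in conditions}
    if len(fields) > 1:
        # A single query cannot use both prefix and object conditions.
        raise ValueError("a query cannot use both prefix and object")
    for field, value in conditions:
        if not value:
            # Empty fields are not allowed in a condition.
            raise ValueError("empty condition values are not allowed")
        if field == "prefix" and object_name.startswith(value):
            return True
        if field == "object" and object_name == value:
            return True
    return False
```

For example, an object named logs/2022/app.log matches a query with the prefix conditions logs/ OR tmp/, because the first condition holds.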
Copy a policy
Copying a policy lets you reuse similar policy attributes, schedules, and cloud
objects among your policies. You can also reuse complex queries by copying
policies, to save time.
To copy a policy:
1 On the left, click Policies. All the policies that you have the privilege to
view are displayed in the Policies tab.
2 Click the ellipsis menu (three dots), in the row of the policy that you want to
copy. Click Copy policy.
Alternatively, select the option in the row of the policy, click Copy policy at
the top of the table.
3 In the Copy policy dialog, optionally, change the name of the policy in the
Policy to copy field.
4 Enter the name of the new policy, in the New policy field.
5 Click Copy to initiate copying.
required ports opened and configured for communication from the backup host
to the cloud provider endpoint, using REST API calls.
■ You can plan to start multiple restore jobs in parallel for better throughput. You
can select objects for recovery as individual objects, or using folder or prefix.
Note: In a rare scenario, you may not find the required bucket listed in the
table for selection, but you can see the same bucket listed in the catalog
view as part of a backup ID. In that case, you can select the bucket by
manually entering the bucket name, provider ID, and the Cloud object store
account name as per the backup ID. The backup ID is formed as:
<providerId>_<cloudAccountname>_<BucketName>_<timestamp>
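As a quick illustration of that documented pattern, the sketch below composes and splits a backup ID. Both helper names are hypothetical, and the parsing assumes the provider ID and account name contain no underscores; real IDs with underscores in those fields would need to be resolved against the known account list instead.

```python
def format_backup_id(provider_id, account_name, bucket_name, timestamp):
    """Compose a backup ID in the documented form:
    <providerId>_<cloudAccountname>_<BucketName>_<timestamp>.
    Illustrative helper only."""
    return "_".join([provider_id, account_name, bucket_name, str(timestamp)])

def parse_backup_id(backup_id):
    """Split a backup ID back into its four parts, assuming the first
    two fields contain no underscores (see the caveat above)."""
    provider_id, account, rest = backup_id.split("_", 2)
    bucket, timestamp = rest.rsplit("_", 1)
    return provider_id, account, bucket, timestamp
```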
8 (Optionally) Click Add prefix. In the Add prefix dialog, enter a prefix in the
search box to display relevant results in the table. Click Add, to select all the
matching prefixes displayed in the table for recovery. The selected prefixes
are displayed in a table below the selected objects/blobs. Click Next.
9 In the Recovery options page, you can select whether you want to restore to
the source bucket or container, or use a different one. These are the Object
restore options:
■ Restore to the original bucket or container: Select to recover to the same
bucket or container from where the backup was taken. Optionally, add a
prefix for the recovered assets in the Add a prefix field.
■ Restore to a different bucket or container: Select to recover to a different
bucket or container than the one from where the backup was taken.
■ You can select a different Cloud object store account as destination,
from the list above.
■ Select a destination Bucket/Container name. You can use a different
Cloud object store account that can access the original bucket. This
method also helps you to create accounts with limited and specific
permissions for backup and restore. In this case, you can provide the
same bucket as the original to restore to the original bucket/container.
■ Optionally, add a prefix for the recovered assets in the Add a prefix
field.
Note: If you have selected Include all objects/blobs and folders, in step
7, the Restore objects/blobs or prefixes to different destinations option
is disabled.
12 (Optional) To override the default priority of the restore job, select Override
default priority, and assign the required value.
13 In the Review page, check the summary of all the selections that you made,
and click Start recovery.
You can see the progress of the restore job in the Activity monitor.
Chapter 5
Troubleshooting
This chapter includes the following topics:
■ Recovery for Cloud object store using web UI for original bucket recovery option
starts but job fails with error 3601
■ Restore fails: "Error bpbrm (PID=3899) client restore EXIT STATUS 40: network
connection broken"
■ Access tier property not restored after overwrite existing to original location
■ Backup fails and shows a certificate error with Amazon S3 bucket names
containing dots (.)
■ Azure backup job fails when space is provided in tag query for either tag key
name or value.
■ Bucket listing of cloud provider fails when adding bucket in Cloud objects tab
■ AIR import image restore fails on the target domain if the Cloud store account
is not added in target domain.
Recovery for Cloud object store using web UI for original bucket recovery option starts but job fails with error
3601
This happens because the ordering of objects across multiple tags is not as
expected by accelerator. Some objects are not found in the track log even though
they exist in it, and hence are backed up repeatedly without getting the
accelerator benefit for these objects.
Workaround
Do not use the OR condition while combining multiple tag conditions for Azure.
Instead, create a separate query per tag.
For example, say that you have the following query: (tagKey eq 'type' and
tagValue eq 'text') or (tagKey eq 'type' and tagValue eq 'none'), with the query
name datatype.
You can create two queries instead: datatype-text with the query (tagKey eq
'type' and tagValue eq 'text'), and datatype-none with the query (tagKey eq
'type' and tagValue eq 'none').
Note: This results in a first backup without any acceleration for these new
queries. For subsequent backups, the problem is resolved.
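The splitting described in this workaround is mechanical, so it can be sketched in a few lines. The helper below is hypothetical (NetBackup does not expose such a function); it takes the tag pairs that were combined with OR and emits one query per pair, named after the tag value as in the datatype-text / datatype-none example above.

```python
def split_tag_query(query_name, tag_conditions):
    """Given (tagKey, tagValue) pairs originally combined with OR,
    return one query definition per pair, named <query_name>-<tagValue>.
    Illustrative helper only."""
    return [
        {
            "queryName": f"{query_name}-{value}",
            "query": f"(tagKey eq '{key}' and tagValue eq '{value}')",
        }
        for key, value in tag_conditions
    ]
```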
Workaround
The NetBackup domain name must be unique across Cloud object store accounts,
Cloud storage servers, and MSDP-C LSUs. They share a single namespace. Hence,
the following usage scenarios apply:
Case 1: When there is no valid Cloud storage server or MSDP-C LSU with same
name as Cloud object store account in the environment.
■ Gather the Cloud object store account details as per your environment and
cross-check the details obtained.
■ Optionally, if an alias corresponding to the Cloud object store account
exists, use the csconfig CLI and note down the details of the alias.
■ Use the following command to list all instances for the type and locate the
Cloud object store account and its instance:
<install-path>/csconfig cldinstance -i -pt <provider_type>
■ Use the following command to get the details of the instance and the Cloud
object store account:
<install-path>/csconfig cldinstance -i -in <instance name>
When you use such an invalid region in an account, the bucket list may be
returned empty.
Workaround:
Do the following:
1 Call the getBucketLocation API on the bucket, to retrieve the correct location
constraint for your account configuration.
If the API returns a blank location constraint, use 'us-east-1' as region location
constraint.
2 Correct the region details by editing the account configuration. See “Adding
Cloud object store accounts” on page 12.
3 To edit cloud configuration, do the following:
■ On the left, click Host Properties.
■ Select the required primary server and connect it. Click Edit primary server.
■ Click Cloud storage.
■ Optionally, enter your cloud provider name in the search field, to filter the
list.
■ In the row corresponding to your cloud provider service host enter the
correct region details and save.
Alternatively, delete the account and recreate it with the correct region location
constraint.
"s3RegionDetails": [
{ "regionId": "us-east-1",
"regionName": "<region name same as listed from prior account>",
}
]
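The fallback described in step 1 is worth making explicit: per the AWS API, buckets in us-east-1 return a blank (null or empty) LocationConstraint from getBucketLocation, so a blank value should be treated as us-east-1. A one-line sketch, with a hypothetical helper name:

```python
def effective_region(location_constraint):
    """Map a getBucketLocation result to the region to configure.
    A blank LocationConstraint means the bucket is in us-east-1,
    per the AWS S3 API. Illustrative helper only."""
    return location_constraint or "us-east-1"
```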
Workaround:
When the error is not fatal, the restore job is a partial success. Check the Activity
Monitor to see the list of objects that cannot be restored. Try restoring to a different
location (bucket/container or different account) to check if the problem is with the
destination cloud account or bucket setting.
When the error is fatal, the restore job fails. Check the nbcosp logs to determine
the object for which the restore has failed. Use granular object selection for the next
restore and skip the earlier failed object while selecting the objects.
Refer to your cloud provider documentation to check if you use a feature or any
metadata that the cloud vendor does not support completely, or if it needs more
configuration. Fix the object with the right attributes in the Cloud object
store and start a new backup job. Once this backup completes, the objects can be
restored without this workaround.
Workaround
Although the bucket list is not available, you can always manually add buckets in
the Cloud objects tab for backup.
When it is a DNS issue, you can optionally list buckets using a temporary
workaround by adding an IP-to-hostname mapping entry in the /etc/hosts file.
When only virtual-hosted-style requests are supported, first prefix the endpoint
with a random bucket name when using commands like ping, dig, or nslookup to
determine the IP of the cloud endpoint. For example:
ping randombucketname.s3-fips.us-east-1.amazonaws.com
You can then add the resulting IP along with the actual endpoint name (without
the random bucket name prefix) in the /etc/hosts file.
Note that this is a temporary workaround to edit DNS entries on the computer for
bucket listing. Remove them after the policy configuration is done, unless the cloud
endpoint is a private cloud setup that can use static IP addresses permanently.
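The probe-and-map procedure above can be sketched as follows. The helper name, the default probe bucket name, and the injectable resolver are all illustrative; by default it resolves `<probe_bucket>.<endpoint>` (as the ping example does) and maps the resulting IP back to the bare endpoint name for /etc/hosts.

```python
import socket

def hosts_entry_for_endpoint(endpoint, resolver=socket.gethostbyname,
                             probe_bucket="randombucketname"):
    """Build an /etc/hosts line for an endpoint that only accepts
    virtual-hosted-style requests: resolve <probe_bucket>.<endpoint>,
    then map that IP to the bare endpoint name.
    Illustrative helper only -- remember to remove the entry after
    policy configuration is done."""
    ip = resolver(f"{probe_bucket}.{endpoint}")
    return f"{ip} {endpoint}"
```

Passing a custom resolver makes the sketch testable without network access, and keeps the DNS lookup explicit.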