NetBackup 9.1.0.1 CloudPoint Install Guide
Release 9.1.0.1
September 2021
Veritas NetBackup CloudPoint Install and Upgrade
Guide
Documentation version:
PN:
Legal Notice
Copyright © 2019 Veritas Technologies LLC. All rights reserved.
Veritas and the Veritas Logo are trademarks or registered trademarks of Veritas Technologies
LLC or its affiliates in the U.S. and other countries. Other names may be trademarks of their
respective owners.
This product may contain third-party software for which Veritas is required to provide attribution
to the third party (“Third-party Programs”). Some of the Third-party Programs are available
under open source or free software licenses. The License Agreement accompanying the
Software does not alter any rights or obligations you may have under those open source or
free software licenses. Refer to the Third-party Legal Notices document accompanying this
Veritas product or available at:
https://www.veritas.com/about/legal/license-agreements
The product described in this document is distributed under licenses restricting its use, copying,
distribution, and decompilation/reverse engineering. No part of this document may be
reproduced in any form by any means without prior written authorization of Veritas Technologies
LLC and its licensors, if any.
The Licensed Software and Documentation are deemed to be commercial computer software
as defined in FAR 12.212 and subject to restricted rights as defined in FAR Section 52.227-19
"Commercial Computer Software - Restricted Rights" and DFARS 227.7202, et seq.
"Commercial Computer Software and Commercial Computer Software Documentation," as
applicable, and any successor regulations, whether delivered by Veritas as on premises or
hosted services. Any use, modification, reproduction, release, performance, display or disclosure
of the Licensed Software and Documentation by the U.S. Government shall be solely in
accordance with the terms of this Agreement.
Veritas Technologies LLC
2625 Augustine Drive.
Santa Clara, CA 95054
http://www.veritas.com
Technical Support
Technical Support maintains support centers globally. Technical Support’s primary
role is to respond to specific queries about product features and functionality. The
Technical Support group also creates content for our online Knowledge Base. The
Technical Support group works collaboratively with the other functional areas within
the company to answer your questions in a timely fashion.
Our support offerings include the following:
■ A range of support options that give you the flexibility to select the right amount
of service for any size organization
■ Telephone and/or Web-based support that provides rapid response and
up-to-the-minute information
■ Upgrade assurance that delivers software upgrades
■ Global support purchased on a regional business hours or 24 hours a day, 7
days a week basis
■ Premium service offerings that include Account Management Services
For information about our support offerings, you can visit our website at the following
URL:
www.veritas.com/support
All support services will be delivered in accordance with your support agreement
and the then-current enterprise technical support policy.
Customer service
Customer service information is available at the following URL:
www.veritas.com/support
Customer Service is available to assist with non-technical questions, such as the
following types of issues:
■ Questions regarding product licensing or serialization
■ Product registration updates, such as address or name changes
■ General product information (features, language availability, local dealers)
■ Latest information about product updates and upgrades
■ Information about upgrade assurance and support contracts
■ Advice about technical support options
■ Nontechnical presales questions
■ Issues that are related to CD-ROMs, DVDs, or manuals
Support agreement resources
If you want to contact us regarding an existing support agreement, please contact
the support agreement administration team for your region as follows:
Japan [email protected]
Two key services are RabbitMQ and MongoDB. RabbitMQ is CloudPoint's message
broker, and MongoDB stores information on all the assets CloudPoint discovers.
The following figure shows CloudPoint's micro-services model.
You can deploy CloudPoint on a NetBackup media server, but not on a NetBackup
primary server.
If you install CloudPoint on multiple hosts, we strongly recommend that each
CloudPoint instance manage separate resources. For example, two CloudPoint
instances should not manage the same AWS account or the same Azure
subscription. The following scenario illustrates why having two CloudPoint instances
manage the same resources creates problems:
■ CloudPoint instance A and CloudPoint instance B both manage the assets of
the same AWS account.
■ On CloudPoint instance A, the administrator takes a snapshot of an AWS virtual
machine. The database on CloudPoint instance A stores the virtual machine's
metadata. This metadata includes the virtual machine's storage size and its disk
configuration.
■ Later, on CloudPoint instance B, the administrator restores the virtual machine
snapshot. CloudPoint instance B does not have access to the virtual machine's
metadata. It restores the snapshot, but it does not know the virtual machine's
specific configuration. Instead, it substitutes default values for the storage size
configuration. The result is a restored virtual machine that does not match the
original.
If you host the CloudPoint server and the media server on the same host, do the following
for proper functioning of the backup from snapshot jobs:
■ Assign distinct IPs and NBU client names to the CloudPoint server and the
media server so that they can obtain different NetBackup certificates. This is
required so as to have different NetBackup host ID certificates for communication.
Use the following configuration:
■ Configure the host with two network adapters
■ Edit the /etc/hosts file and make an entry as shown in the example after this list
■ Once the CloudPoint server is registered, ensure that it has a different HOST
DB entry.
■ Before performing the backup from snapshot jobs, perform the following
optimization: DISABLE SHM and NOSHM. See:
https://www.veritas.com/support/en_US/article.100016170
This will ensure that NetBackup does not use shared memory for communicating
between NetBackup data mover processes.
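The following is a hypothetical /etc/hosts layout (the IP addresses and host names are placeholders, not values from this guide) in which each network adapter gets its own name, so that the CloudPoint server and the media server present themselves to NetBackup as different clients:
10.20.30.11   mediaserver1.example.com   mediaserver1
10.20.30.12   cloudpoint1.example.com    cloudpoint1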
Category Requirement
Amazon Web Services (AWS) instance:
■ Elastic Compute Cloud (EC2) instance type: t3.large
■ vCPUs: 2
■ RAM: 8 GB
■ Root disk: 64 GB with a solid-state drive (GP2)
■ Data volume: 50 GB Elastic Block Store (EBS) volume of type GP2 with encryption for the snapshot asset database; use this as a starting value and expand your storage as needed.
■ Register the RHEL instance with Red Hat using Red Hat Subscription Manager
■ Extend the default LVM partitions on the RHEL instance so that they fulfil the minimum disk space requirement
■ /var
The /var file system is also used for container runtimes. Ensure that the host on which you install or upgrade CloudPoint has sufficient space for the following components:
CloudPoint agents and plug-ins: 350 MB free space for every CloudPoint plug-in and agent configured
/cloudpoint: 50 GB or more
CloudPoint services that need to communicate externally via a proxy server use
these predefined environment variables that are set during the CloudPoint
installation.
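The proxy-related variables that appear in the installation commands later in this guide are:
■ VX_HTTP_PROXY
■ VX_HTTPS_PROXY
■ VX_NO_PROXY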
Memory: 16 GB
Memory: 32 GB or more
■ n1-standard-2 2 8 200
■ n2-standard-2
■ n1-standard-4 4 16 400
■ n2-standard-4
■ n1-standard-16 8 32 500
■ n2-standard-16
■ t2.large 2 8 200
■ t3.large
■ m4.large
■ t2.xlarge 4 16 400
■ t3.xlarge
■ t3a.xlarge
■ m5.4xlarge 8 32 500
■ m4.4xlarge
■ Standard_B2ms 2 8 200
■ Standard_D2s_v3
■ Standard_D2_v4,
standard_D2s_v4
■ Standard_D2d_v4,
Standard_D2ds_v4
■ Standard_B4ms 4 16 400
■ Standard_D4s_v3
■ Standard_D4_v4,
standard_D8s_v4
■ Standard_D4d_v4,
standard_D4ds_v4
■ Standard_B16ms 8 32 500
■ Standard_D16s_v3
■ Standard_D16_v4,
standard_D16s_v4
■ Standard_D16d_v4,
Standard_D16ds_v4
■ Standard_DS2_v2 2 7 200
■ Standard_D2_v2
■ Standard_DS2
■ Standard_D2
■ Standard_DS3_v2 4 14 400
■ Standard_D3_v2
■ Standard_DS3
■ Standard_D3
■ Standard_NV4as_v4
■ Standard_DS4_v2 8 28 500
■ Standard_D4_v2
■ Standard_DS4
■ Standard_D4
CloudPoint extension sizing recommendations
Note: For CloudPoint 9.1, the extensions are supported only on Azure and Azure
Stack.
Memory: 16 GB
Memory: 32 GB or more
Consider the following points while choosing a configuration for the CloudPoint
extension:
■ To achieve better performance in a high workload environment, Veritas
recommends that you deploy the CloudPoint extension in the same location as
that of the application hosts.
■ The cloud-based extension on a managed Kubernetes cluster should be in the same VNet as that of the CloudPoint host. If it is not, you can use the VNet peering mechanism available with the Azure cloud to make sure that the CloudPoint host and the extension nodes can communicate with each other over the required ports.
■ Depending on the number of workloads, the amount of plug-in data that is transmitted from the CloudPoint host can become very large. Network latency also plays a key role in such cases. You might see a difference in the overall performance depending on these factors.
■ In cases where the number of concurrent operations is higher than what the
CloudPoint host and the extensions together can handle, CloudPoint
automatically puts the operations in a job queue. The queued jobs are picked
up only after the running operations are completed.
Platform Description
https://docs.docker.com/install/linux/docker-ce/ubuntu/#set-up-the-repository
■ (If CloudPoint is being deployed in AWS cloud) Ensure that you enable the extra repos:
# sudo yum-config-manager --enable rhui-REGION-rhel-server-extras
■ (If CloudPoint is being deployed on-premise) Enable your subscriptions:
# sudo subscription-manager register --auto-attach
--username=<username> --password=<password>
# subscription-manager repos --enable=rhel-7-server-extras-rpms
# subscription-manager repos --enable=rhel-7-server-optional-rpms
■ Install Docker using the following command:
# sudo yum -y install docker
■ Reload the system manager configuration using the following command:
# sudo systemctl daemon-reload
■ Enable and then restart the docker service using the following commands:
# sudo systemctl enable docker
# sudo systemctl restart docker
■ If SELinux is enabled, change the mode to permissive mode.
Edit the /etc/selinux/config configuration file and modify the SELINUX parameter value
to SELINUX=permissive.
■ Reboot the system for the changes to take effect.
■ Verify that the SELinux mode change is in effect using the following command:
# sudo sestatus
The Current Mode parameter value in the command output should appear as permissive.
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html-single/
getting_started_with_containers/index#getting_docker_in_rhel_7
If Docker is using the default storage driver (overlay2 or overlay) on an XFS-backed file system, ensure
that the XFS file system has the ftype option set to 1. Use xfs_info to verify. For details, see
https://docs.docker.com/storage/storagedriver/overlayfs-driver/. Otherwise, you can use a different
storage driver. For details, see https://docs.docker.com/storage/storagedriver/select-storage-driver/
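As an illustration, you can check the ftype setting by passing the mount point of the file system that backs Docker's data directory (assumed here to be /var/lib/docker) to xfs_info:
# xfs_info /var/lib/docker | grep ftype
The output should include ftype=1.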
Platform Description
Notes:
■ (If CloudPoint is being deployed in AWS cloud) Ensure that you enable the extra repos:
# sudo yum-config-manager --enable rhui-REGION-rhel-server-extras
■ (If CloudPoint is being deployed on-premise) Enable your subscriptions:
# sudo subscription-manager register --auto-attach
--username=<username> --password=<password>
■ If SELinux is enabled, change the mode to permissive mode.
Edit the /etc/selinux/config configuration file and modify the SELINUX parameter value
to SELINUX=permissive.
■ Reboot the system for the changes to take effect.
■ Verify that the SELinux mode change is in effect using the following command:
# getenforce
The command output should appear as Permissive.
Table 1-14 Volume creation steps for each supported cloud vendor
Vendor Procedure
Amazon Web Services (AWS)
1 On the EC2 dashboard, click Volumes > Create Volumes.
2 Follow the instructions on the screen and specify the following:
■ Volume type: General Purpose SSD
■ Size: 50 GB
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html
Google Cloud Platform
◆ Create the disk for the virtual machine, initialize it, and mount it to /cloudpoint.
https://cloud.google.com/compute/docs/disks/add-persistent-disk
Microsoft Azure
1 Create a new disk and attach it to the virtual machine.
https://docs.microsoft.com/en-us/azure/virtual-machines/linux/attach-disk-portal
https://docs.microsoft.com/en-us/azure/virtual-machines/linux/attach-disk-portal#use-azure-managed-disks
https://docs.microsoft.com/en-us/azure/virtual-machines/linux/add-disk
Microsoft Azure Stack Hub
1 Create a new disk and attach it to the virtual machine.
https://docs.microsoft.com/en-us/azure-stack/user/azure-stack-manage-vm-disks/adding-new-disks
https://docs.microsoft.com/en-us/azure/virtual-machines/linux/add-disk
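After the volume is attached, it must be formatted and mounted at /cloudpoint. The following is a minimal sketch only; the device name (/dev/sdc) and the ext4 file system are assumptions and will vary with your environment:
# mkfs -t ext4 /dev/sdc
# mkdir /cloudpoint
# mount /dev/sdc /cloudpoint
# echo "/dev/sdc /cloudpoint ext4 defaults 0 0" >> /etc/fstab
The /etc/fstab entry ensures that the volume is mounted again after a reboot.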
Verifying that specific ports are open on the instance or physical host
Port Description
443 The CloudPoint user interface uses this port as the default HTTPS port.
5671 The CloudPoint RabbitMQ server uses this port for communications. This
port must be open to support multiple agents, extensions, backup from
snapshot, and restore from backup jobs.
Note: If you plan to install CloudPoint on multiple hosts, read this section carefully
and understand the implications of this approach.
Note: Red Hat 8.x has replaced the Docker ecosystem with the Podman ecosystem.
Hence, for deploying CloudPoint on RHEL 8.3 or 8.4 hosts, see "Installing
CloudPoint in the Podman environment" on page 41. For RHEL 7.x hosts, see
"Installing CloudPoint in the Docker environment" on page 36.
To install CloudPoint
1 Download the CloudPoint image to the system on which you want to deploy
CloudPoint. Go to the Veritas support site:
https://www.veritas.com/content/support/en_US/downloads
From the Products drop-down, select NetBackup and select the required
version from the Version drop-down. Click Explore. Click Base and upgrade
installers.
The CloudPoint image name resembles the following format:
VRTScloudpoint-docker-x.x.x.x.x.img.gz
Note: The actual file name may vary depending on the release version.
For example:
# sudo docker load -i Veritas_CloudPoint_8.3.0.8549.img.gz
Make a note of the loaded image name and version that appears on the last
line of the output. The version represents the CloudPoint product version that
is being installed. You will specify these details in the next step.
4 Type the following command to run the CloudPoint container:
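The base command (without a proxy server) follows the same form as the proxy example shown later in this step; this sketch uses the placeholders described in the parameter table below:
# sudo docker run -it --rm -v /<full_path_to_volume_name>:/<full_path_to_volume_name> -v /var/run/docker.sock:/var/run/docker.sock veritas/flexsnap-cloudpoint:<version> install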
If the CloudPoint host is behind a proxy server, use the following command
instead:
Parameter Description
<full_path_to_volume_name> Represents the path to the CloudPoint data volume, which typically is /cloudpoint.
<version> Represents the CloudPoint product version that you noted in the earlier step.
<http_proxy_value> Represents the value to be used as the HTTP proxy for all connections.
<https_proxy_value> Represents the value to be used as the HTTPS proxy for all connections.
<no_proxy_value> (required only if the instance uses a proxy server) Represents the addresses that are allowed to bypass the proxy server. You can specify host names, IP addresses, and domain names in this parameter. Use commas to separate multiple entries. For example, "localhost,mycompany.com,192.168.0.10:80".
Note:
If CloudPoint is being deployed in the cloud, ensure that you set the following
values in this parameter:
If using a proxy server, then using the examples provided in the table earlier,
the command syntax is as follows:
# sudo docker run -it --rm -v /cloudpoint:/cloudpoint -e
VX_HTTP_PROXY="http://proxy.mycompany.com:8080/" -e
VX_HTTPS_PROXY="https://proxy.mycompany.com:8080/" -e
VX_NO_PROXY="localhost,mycompany.com,192.168.0.10:80" -v
/var/run/docker.sock:/var/run/docker.sock
veritas/flexsnap-cloudpoint:8.3.0.8549 install
Note: This is a single command. Ensure that you enter the command without
any line breaks.
Parameter Description
Host name for TLS certificate Specify the IP address or the Fully
Qualified Domain Name (FQDN) of the
CloudPoint host.
6 This concludes the CloudPoint deployment process. The next step is to register
the CloudPoint server with the Veritas NetBackup primary server.
If CloudPoint is deployed in the cloud, refer to the NetBackup Web UI Cloud
Administrator's Guide for instructions. If CloudPoint is deployed on-premise,
refer to the NetBackup Snapshot Client Administrator's Guide for instructions.
Note: If you ever need to restart CloudPoint, use the docker run command so that
your environmental data is preserved.
See “Restarting CloudPoint” on page 48.
Installing CloudPoint in the Podman environment
■ Run the following commands to lock the Podman and Conmon versions to the
supported versions, so that they do not get updated with the yum update:
sudo yum install -y podman-2.2.1-7.module+el8.3.1+9857+68fb1526
sudo yum install -y conmon-2:2.0.20-2.module+el8.3.0+8221+97165c3f
sudo yum install -y python3-dnf-plugin-versionlock
sudo yum versionlock podman* conmon*
To install CloudPoint
Note: When you deploy CloudPoint, you may want to copy the commands below
and paste them in your command line interface. If you do, replace the information
in these examples with what pertains to your installation such as, the product and
build version, the download directory path, and so on.
1 Download the CloudPoint image to the system on which you want to deploy
CloudPoint.
The CloudPoint image name resembles the following format:
VRTScloudpoint-podman-9.x.x.x.x.tar.gz
# ls
VRTScloudpoint-podman-9.x.x.x.x.tar
[root@<user>-RHEL8 ec2-user]# tar -xvf VRTScloudpoint-podman-9.x.x.x.x.tar
flexsnap-cloudpoint-9.x.x.x.x.img
flexsnap-coordinator-9.x.x.x.x.img
flexsnap-agent-9.x.x.x.x.img
flexsnap-onhostagent-9.x.x.x.x.img
flexsnap-policy-9.x.x.x.x.img
flexsnap-scheduler-9.x.x.x.x.img
flexsnap-config-9.x.x.x.x.img
flexsnap-certauth-9.x.x.x.x.img
flexsnap-rabbitmq-9.x.x.x.x.img
flexsnap-api-gateway-9.x.x.x.x.img
flexsnap-notification-9.x.x.x.x.img
flexsnap-fluentd-9.x.x.x.x.img
flexsnap-nginx-9.x.x.x.x.img
flexsnap-idm-9.x.x.x.x.img
flexsnap-workflow-9.x.x.x.x.img
flexsnap-listener-9.x.x.x.x.img
flexsnap-datamover-9.x.x.x.x.img
flexsnap-mongodb-9.x.x.x.x.img
flexsnap-podman-api.service
flexsnap-podman-containers.service
flexsnap_preinstall.sh
dnsname
4 Run the following command to prepare the CloudPoint host for installation:
# ./flexsnap_preinstall.sh
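As a sketch only (derived by analogy with the Docker installation command; the exact volume and socket options for your Podman setup may differ), the installation command resembles the following:
# podman run -it --rm -v /<full_path_to_volume_name>:/<full_path_to_volume_name> -v /run/podman/podman.sock:/run/podman/podman.sock veritas/flexsnap-cloudpoint:<version> install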
Note: This is a single command. Ensure that you enter the command without
any line breaks.
If the CloudPoint host is behind a proxy server, use the following command
instead:
Parameter Description
<http_proxy_value> Represents the value to be used as the HTTP proxy for all connections.
<https_proxy_value> Represents the value to be used as the HTTPS proxy for all connections.
<no_proxy_value> (required only if the instance uses a proxy server) Represents the addresses that are allowed to bypass the proxy server. You can specify host names, IP addresses, and domain names in this parameter. Use commas to separate multiple entries. For example, "localhost,mycompany.com,192.168.0.10:80".
Note:
If CloudPoint is being deployed in the cloud, ensure that you set the following
values in this parameter:
Parameter Description
Host name for TLS certificate Specify the IP address or the Fully
Qualified Domain Name (FQDN) of the
CloudPoint host.
7 This concludes the CloudPoint deployment process. The next step is to register
the CloudPoint server with the Veritas NetBackup primary server.
If CloudPoint is deployed in the cloud, refer to the NetBackup Web UI Cloud
Administrator's Guide for instructions. If CloudPoint is deployed on-premise,
refer to the NetBackup Snapshot Client Administrator's Guide for instructions.
Note: If you ever need to restart CloudPoint, use the podman run command so that
your environmental data is preserved.
See “Restarting CloudPoint” on page 48.
Verifying that CloudPoint is installed successfully
■ Run the following command and verify that the CloudPoint services are running
and the status is displayed as UP:
For Docker environment: # sudo docker ps -a
For Podman environment: # podman ps -a
The command output resembles the following:
Restarting CloudPoint
If you need to restart CloudPoint, it's important that you restart it correctly so that
your environmental data is preserved.
To restart CloudPoint in the Docker environment
Warning: Do not use commands such as docker restart or docker stop and
docker start to restart CloudPoint. Use the docker run command described
below.
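The restart command follows the same form as the install and stop commands shown elsewhere in this guide; the final restart argument in this sketch is an assumption:
# sudo docker run -it --rm -v /<full_path_to_volume_name>:/<full_path_to_volume_name> -v /var/run/docker.sock:/var/run/docker.sock veritas/flexsnap-cloudpoint:<version> restart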
Note: Ensure that you enter the command without any line breaks.
Note: Ensure that you enter the commands without any line breaks.
Error adding network: failed to allocate for range 0: 10.89.0.140 has been allocated to
02da9e9aab2f79303c53dfb10b5ae6b6b70288d36b8fffbdfabba046da5a9afc, duplicate allocation is not allowed
ERRO[0000] Error while adding pod to CNI network "flexsnap-network": failed to allocate for
range 0: 10.89.0.140 has been allocated to
02da9e9aab2f79303c53dfb10b5ae6b6b70288d36b8fffbdfabba046da5a9afc, duplicate allocation is not allowed
Error: error configuring network namespace for container
02da9e9aab2f79303c53dfb10b5ae6b6b70288d36b8fffbdfabba046da5a9afc: failed to allocate for range 0:
10.89.0.140 has been allocated to 02da9e9aab2f79303c53dfb10b5ae6b6b70288d36b8fffbdfabba046da5a9afc,
duplicate allocation is not allowed"
The issue exists in the Podman subsystem, which fails to remove the existing IP address
allocated for the container from the directory /var/lib/cni/networks/flexsnap-network/
when the container is stopped.
Workaround
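Based on the cause described above, one possible workaround (a sketch, not a verified procedure) is to remove the stale IP reservation file that Podman left behind and then start the container again. The IP address in this example is the one reported in the error message:
# ls /var/lib/cni/networks/flexsnap-network/
# rm /var/lib/cni/networks/flexsnap-network/10.89.0.140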
■ Choose the CloudPoint image supported on Ubuntu or RHEL system that meets
the CloudPoint installation requirements and create a host.
See “Creating an instance or preparing the host to install CloudPoint” on page 30.
■ Verify that you can connect to the host through a remote desktop.
See “Verifying that specific ports are open on the instance or physical host”
on page 34.
■ Install Docker or Podman container platforms on the host.
See Table 1-13 on page 31.
■ Download the OS-specific CloudPoint image from the Veritas support site.
■ For Docker environment, load the image on the host.
# sudo docker load -i CloudPoint_image_name
Note: The actual file name varies depending on the release version.
■ For a VM-based extension installed on a RHEL OS, the SELinux mode should
be "permissive"
■ Network Security Groups used by the host that is being protected should allow
communication from the host where the extension is installed, on the specified
ports.
See “Installing the CloudPoint extension on a VM” on page 53.
2 Then go to the NetBackup Web UI and follow steps 7 and 8 described in
the section Downloading the CloudPoint extension to generate and copy the
validation token.
See “Downloading the CloudPoint extension” on page 59.
Note: For the VM-based extension you do not need to download the extension.
Proceed directly to steps 7 and 8 to copy the token.
Overview
■ Your Azure managed Kubernetes cluster should already be deployed with
appropriate network and configuration settings, and with specific roles. The
cluster must be able to communicate with CloudPoint.
The required roles are: Azure Kubernetes Service RBAC Writer, AcrPush,
Azure Kubernetes Service Cluster User Role
For supported Kubernetes versions, refer to the CloudPoint Hardware
Compatibility List (HCL).
■ Use an existing Azure Container Registry or create a new one, and ensure that
the managed Kubernetes cluster has access to pull images from the container
registry
■ A dedicated nodepool for CloudPoint workloads needs to be created with manual
scaling or 'Autoscaling' enabled in the Azure managed Kubernetes cluster. The
autoscaling feature allows your nodepool to scale dynamically by provisioning
and de-provisioning the nodes as required automatically.
■ CloudPoint extension images (flexsnap-cloudpoint, flexsnap-listener,
flexsnap-workflow, flexsnap-fluentd, flexsnap-datamover) need to be
uploaded to the Azure container registry.
Prepare the host and the managed Kubernetes cluster in Azure
■ Choose the CloudPoint image supported on Ubuntu or RHEL system that meets
the CloudPoint installation requirements and create a host.
See “Creating an instance or preparing the host to install CloudPoint” on page 30.
■ Verify that the port 5671 is open on the main CloudPoint host.
See “Verifying that specific ports are open on the instance or physical host”
on page 34.
■ The public IP of the virtual machine scale set through which the node pool is
configured must be allowed to communicate over port 22 with the workloads
being protected.
■ Install a Docker or Podman container platform on the host and start the container
service.
See Table 1-13 on page 31.
■ Prepare the CloudPoint host to access Kubernetes cluster within your Azure
environment.
■ Install Azure CLI.
https://docs.microsoft.com/en-us/cli/azure/install-azure-cli-linux?pivots=apt
■ Install Kubernetes CLI
https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-using-native-package-management
■ Login to the Azure environment to access the Kubernetes cluster by running
this command on Azure CLI:
# az login --identity
# az account set --subscription <subscriptionID>
# az aks get-credentials --resource-group <resource_group_name>
--name <cluster_name>
■ Create an Azure Container Registry, or use an existing one if available,
to which the CloudPoint images will be pushed (uploaded). See Azure
documentation:
https://docs.microsoft.com/en-in/azure/container-registry/container-registry-get-started-portal
https://docs.microsoft.com/en-in/azure/container-registry/container-registry-get-started-azure-cli
■ To run the kubectl and container registry commands from the host system,
assign the following role permissions to your VM and cluster. You can assign a
'Contributor', 'Owner', or any custom role that grants full access to manage all
resources.
■ Go to your Virtual Machine > click Identity on the left > under System
assigned tab, turn the Status to 'ON' > click Azure role assignment > click
Add role assignments > select Scope as 'Subscription' or 'Resource Group'
> select Role and assign the following roles : Azure Kubernetes Service
RBAC Writer, AcrPush, Azure Kubernetes Service Cluster User Role, and
Save.
■ Go to your Kubernetes cluster > click Access Control (IAM) on the left >
click Add role assignments > select Role as 'Contributor ' > Select Assign
access to as 'Virtual Machines' > select your VM from the drop-down and
Save.
■ Create a storage account in the same subscription and region your Kubernetes
cluster is in, and create a file share in it. (Follow the default settings provided by Azure.)
https://docs.microsoft.com/en-us/azure/storage/files/storage-how-to-create-file-share?tabs=azure-portal
■ Create a namespace for CloudPoint from the command line on host system:
# kubectl create namespace cloudpoint-system
■ Create a Kubernetes secret to access the file share. You will need to provide
this secret while configuring the CloudPoint extension.
# kubectl create secret generic <secret_name> --namespace
cloudpoint-system
--from-literal=azurestorageaccountname=<storage_account_name>
--from-literal=azurestorageaccountkey=<storage_account_key>
Pass the following parameters in the command:
Parameter Description
secret_name Specify a name for the secret that you are creating.
Example:
# kubectl create secret generic mysecret --namespace
cloudpoint-system --from-literal=azurestorageaccountname=mystorage
--from-literal=azurestorageaccountkey=IusI10S9w6n1Ve4N31pFCaWNCWWWPGMw0WzDQT....
Downloading the CloudPoint extension
3 From the desired CloudPoint server row, click the actions icon on the right and
then select Add extension.
Note: For the VM-based extension you do not need to download the extension.
Proceed directly to steps 7 and 8 to copy the token.
4 If you are installing the extension on a managed Kubernetes cluster (on Azure
cloud), then on the Add extension dialog box, click the download hyperlink.
This launches a new web browser tab.
Do not close the Add extension dialog box yet. When you configure the
extension, you will return to this dialog box to generate the validation token.
5 Switch to the new browser tab that opened and from the Add extension card,
click Download. The extension script will be downloaded.
6 Before proceeding to the next step to generate the validation token, copy the
downloaded script to the CloudPoint host, then from the command prompt run
the extension script to configure the extension.
See “Installing the CloudPoint extension on a VM” on page 53.
See “Installing the CloudPoint extension on a managed Kubernetes cluster”
on page 61.
7 To generate the validation token, on the Add extension dialog box, click Create
Token.
8 Click Copy Token to copy the displayed token. Then provide it on the command
prompt while configuring the extension.
Note: The token is valid for 180 seconds only. If you do not use the token within
that time frame, generate a new token.
Note: Do not create the authentication token yet, as it is valid only for 180
seconds.
2 If the host from which you want to install the extension is not the same host
where your CloudPoint is installed, load the CloudPoint container images on
the extension host (flexsnap-cloudpoint, flexsnap-listener,
flexsnap-workflow, flexsnap-fluentd, flexsnap-datamover).
Parameter Description
<container_registry_path> Example: mycontainer.azurecr.io
<CloudPoint_version_tag> Example: 9.0.1.0.9129
■ To tag the images, run the following command for each image, depending
on the container platform running on your host:
For Docker: # docker tag source_image:tag target_image:tag
For Podman: # podman tag source_image:tag target_image:tag
Where,
■ the source image tag is: veritas/flexsnap-cloudpoint:<tag>
■ the target image tag is:
<container_registry_path>/<source_image_name>:<CloudPoint_version_tag>
Example:
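An illustrative command (hypothetical names, based on the registry path and version tag shown above):
# docker tag veritas/flexsnap-cloudpoint:9.0.1.0.9129 mycontainer.azurecr.io/veritas/flexsnap-cloudpoint:9.0.1.0.9129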
4 Then to push the images to the container registry, run the following command
for each image, depending on the container platform running on your host:
For Docker: # docker push target_image:tag
For Podman: # podman push target_image:tag
Example:
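An illustrative command (hypothetical names, matching the tag example above):
# docker push mycontainer.azurecr.io/veritas/flexsnap-cloudpoint:9.0.1.0.9129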
5 Once the images are pushed to the container registry, execute the extension
script cp_extension_start.sh that was downloaded earlier, from the host
where kubectl is installed. The script can be executed either by providing all
the required input parameters in one command, or in an interactive way where
you will be prompted for input.
Gather the following parameters before running the script:
Parameter Description
target_image:tag Example: 'mycontainer.azurecr.io/veritas/flexsnap-cloudpoint:9.0.1.0.9129'
secret_name Example: mysecret
fileshare_name You can find the name of your file share in your storage account in the Azure portal. It is recommended that your storage account and the Kubernetes cluster be in the same region.
Example: mysharename
■ Run the installation command with all the input parameters described in
the above table:
# ./cp_extension_start.sh install -c <cloudpoint_ip> -i
<target_image:tag> -n <namespace> -p <tag_key=tag_val> -s
<secret_name> -f <fileshare_name> -t <workflow_token>
Example:
# ./cp_extension_start.sh install
-c 10.20.xx.xxx
-i mycontainer.azurecr.io/veritas/flexsnap-cloudpoint:9.0.1.0.9271
-n cloudpoint-system
-p agentpool=cpuserpool
-s mysecret
-f mysharename
-t workflow-3q3ou4jxiircp9tk0eer2g9jx7mwuypwz10k4i3sms2e7k4ee7-.....
■ When the script runs, provide the input parameters as described in the
above table:
namespace/cloudpoint-system configured
deployment.apps/flexsnap-cloudpoint created
serviceaccount/cloudpoint-acc created
clusterrole.rbac.authorization.k8s.io/cloudpoint-cloudpoint-system unchang
clusterrolebinding.rbac.authorization.k8s.io/cloudpoint-rolebinding-cloudp
customresourcedefinition.apiextensions.k8s.io/cloudpoint-servers.veritas.c
customresourcedefinition.apiextensions.k8s.io/cloudpoint-servers.veritas.c
cloudpointrule.veritas.com/cloudpoint-config-rule created
Note: The output examples have been formatted to fit the screen.
Option Procedure
Disable or enable the extension:
■ VM-based extension
■ Managed Kubernetes cluster-based extension
You can disable or enable the extensions from the NetBackup Web UI.
Go to Cloud > CloudPoint Servers tab > click Advanced settings > go to CloudPoint extensions tab > then disable or enable the extension as required, and click Save.
No jobs will be scheduled on the extension that is disabled.
Note: When CloudPoint is upgraded, all the extensions are automatically disabled. Then you need to upgrade the extensions with the same CloudPoint version and enable them manually from the NetBackup Web UI.
Option Procedure
Stop, start, or restart the VM-based extension
Execute the following commands on the extension host VM to stop/start/restart the extension:
For Docker:
To stop the extension:
# sudo docker run -it --rm
-v /<full_path_to_volume_name>:/<full_path_to_volume_name>
-v /var/run/docker.sock:/var/run/docker.sock
veritas/flexsnap-cloudpoint:<version> stop
For Podman:
Option Procedure
Renew certificate for a VM-based extension
1 Run the following command on the extension host:
# sudo docker run -it --rm
-v /<full_path_to_volume_name>:/<full_path_to_volume_name>
-v /var/run/docker.sock:/var/run/docker.sock
veritas/flexsnap-cloudpoint:<version> renew_extension
# ./cp_extension_start.sh renew
Note: Before you configure the AWS plug-in, make sure that you have configured
the proper permissions so CloudPoint can work with your AWS assets.
The following information is required for configuring the CloudPoint plug-in for AWS:
If CloudPoint is deployed on an on-premise host or a virtual machine:
Access key The access key ID, when specified with the secret
access key, authorizes CloudPoint to interact with the
AWS APIs.
Role Name The IAM role that is attached to the other AWS account
(cross account).
When CloudPoint connects to AWS, it uses the following endpoints. You can use
this information to create an allowed list on your firewall.
■ ec2.*.amazonaws.com
■ sts.amazonaws.com
■ rds.*.amazonaws.com
■ kms.*.amazonaws.com
In addition, you must specify the following resources and actions:
■ ec2.SecurityGroup.*
■ ec2.Subnet.*
■ ec2.Vpc.*
■ ec2.createInstance
■ ec2.runInstances
■ You cannot delete automated snapshots of RDS instances and Aurora clusters
through CloudPoint.
Replication failed: The source snapshot KMS key [<key>] does not exist,
is not enabled, or you do not have permissions to access it.
This is a limitation from AWS and is currently outside the scope of CloudPoint.
■ If a region is removed from the AWS plug-in configuration, then all the discovered
assets from that region are also removed from the CloudPoint assets database.
If there are any active snapshots that are associated with the assets that get
removed, then you may not be able to perform any operations on those snapshots.
Once you add that region back into the plug-in configuration, CloudPoint
discovers all the assets again and you can resume operations on the associated
snapshots.
■ For cross account configuration, from the AWS IAM console (IAM Console >
Roles), edit the IAM roles such that:
■ A new IAM role is created and assigned to the other AWS account (target
account). Also, assign that role a policy that has the required permissions
to access the assets in the target AWS account.
■ The IAM role of the other AWS account should trust the Source Account IAM
role (Roles > Trust relationships tab).
■ The Source Account IAM role is assigned an inline policy (Roles >
Permissions tab) that allows the source role to assume the role
("sts:AssumeRole") of the other AWS account.
■ The validity of the temporary security credentials that the Source Account
IAM role gets when it assumes the Cross Account IAM role is set to 1 hour,
at a minimum (Maximum CLI/API session duration field).
See “Before you create a cross account configuration” on page 81.
■ If the assets in the AWS cloud are encrypted using AWS KMS Customer
Managed Keys (CMK), then you must ensure the following:
■ If using an IAM user for CloudPoint plug-in configuration, ensure that the
IAM user is added as a key user of the CMK.
■ For source account configuration, ensure that the IAM role that is attached
to the CloudPoint instance is added as a key user of the CMK.
■ For cross account configuration, ensure that the IAM role that is assigned
to the other AWS account (cross account) is added as a key user of the
CMK.
Adding these IAM roles and users as the CMK key users allows them to use
the AWS KMS CMK key directly for cryptographic operations on the assets.
Refer to the AWS documentation for more details:
https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html
#key-policy-default-allow-users
3 To configure the AWS plug-in for the created or edited user, refer to the plug-in
configuration notes.
See “AWS plug-in configuration notes” on page 69.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "EC2AutoScaling",
"Effect": "Allow",
"Action": [
"autoscaling:UpdateAutoScalingGroup",
"autoscaling:AttachInstances"
],
"Resource": [
"*"
]
},
{
"Sid": "KMS",
"Effect": "Allow",
"Action": [
"kms:ListKeys",
"kms:Encrypt",
"kms:Decrypt",
"kms:ReEncryptTo",
"kms:DescribeKey",
"kms:ListAliases",
"kms:GenerateDataKey",
"kms:GenerateDataKeyWithoutPlaintext",
"kms:ReEncryptFrom",
"kms:CreateGrant"
],
"Resource": [
"*"
]
},
{
"Sid": "RDSBackup",
"Effect": "Allow",
"Action": [
"rds:DescribeDBSnapshots",
"rds:DescribeDBClusters",
"rds:DescribeDBClusterSnapshots",
"rds:DeleteDBSnapshot",
"rds:CreateDBSnapshot",
"rds:CreateDBClusterSnapshot",
"rds:ModifyDBSnapshotAttribute",
"rds:DescribeDBSubnetGroups",
"rds:DescribeDBInstances",
"rds:CopyDBSnapshot",
"rds:CopyDBClusterSnapshot",
"rds:DescribeDBSnapshotAttributes",
"rds:DeleteDBClusterSnapshot",
"rds:ListTagsForResource",
"rds:AddTagsToResource"
],
"Resource": [
"*"
]
},
{
"Sid": "RDSRecovery",
"Effect": "Allow",
"Action": [
"rds:ModifyDBInstance",
"rds:ModifyDBClusterSnapshotAttribute",
"rds:RestoreDBInstanceFromDBSnapshot",
"rds:ModifyDBCluster",
"rds:RestoreDBClusterFromSnapshot",
"rds:CreateDBInstance",
"rds:RestoreDBClusterToPointInTime",
"rds:CreateDBSecurityGroup",
"rds:CreateDBCluster",
"rds:RestoreDBInstanceToPointInTime",
"rds:DescribeDBClusterParameterGroups"
],
"Resource": [
"*"
]
},
{
"Sid": "EC2Backup",
"Effect": "Allow",
"Action": [
"sts:GetCallerIdentity",
"ec2:CreateSnapshot",
"ec2:CreateSnapshots",
"ec2:DescribeInstances",
"ec2:DescribeInstanceStatus",
"ec2:ModifySnapshotAttribute",
"ec2:CreateImage",
"ec2:CopyImage",
"ec2:CopySnapshot",
"ec2:DescribeSnapshots",
"ec2:DescribeVolumeStatus",
"ec2:DescribeVolumes",
"ec2:RegisterImage",
"ec2:DescribeVolumeAttribute",
"ec2:DescribeSubnets",
"ec2:DescribeVpcs",
"ec2:DeregisterImage",
"ec2:DeleteSnapshot",
"ec2:DescribeInstanceAttribute",
"ec2:DescribeRegions",
"ec2:ModifyImageAttribute",
"ec2:DescribeAvailabilityZones",
"ec2:ResetSnapshotAttribute",
"ec2:DescribeHosts",
"ec2:DescribeImages",
"ec2:DescribeSecurityGroups" ,
"ec2:DescribeNetworkInterfaces"
],
"Resource": [
"*"
]
},
{
"Sid": "EC2Recovery",
"Effect": "Allow",
"Action": [
"ec2:RunInstances",
"ec2:AttachNetworkInterface",
"ec2:DetachVolume",
"ec2:AttachVolume",
"ec2:DeleteTags",
"ec2:CreateTags",
"ec2:StartInstances",
"ec2:StopInstances",
"ec2:TerminateInstances",
"ec2:CreateVolume",
"ec2:DeleteVolume",
"ec2:DescribeIamInstanceProfileAssociations",
"ec2:AssociateIamInstanceProfile",
"ec2:AssociateAddress",
"ec2:DescribeKeyPairs",
"secretsmanager:GetResourcePolicy",
"secretsmanager:GetSecretValue",
"secretsmanager:DescribeSecret",
"secretsmanager:RestoreSecret",
"secretsmanager:PutSecretValue",
"secretsmanager:DeleteSecret",
"secretsmanager:UpdateSecret",
"ec2:AuthorizeSecurityGroupEgress",
"ec2:AuthorizeSecurityGroupIngress",
"ec2:DescribeInstanceTypeOfferings",
"ec2:GetEbsEncryptionByDefault"
],
"Resource": [
"*"
]
},
{
"Sid": "EBS",
"Effect": "Allow",
"Action": [
"ebs:ListSnapshotBlocks",
"ebs:StartSnapshot"
],
"Resource": [
"*"
]
},
{
"Sid": "SNS",
"Effect": "Allow",
"Action": [
"sns:Publish",
"sns:GetTopicAttributes"
],
"Resource": [
"arn:aws:sns:*:*:*"
]
},
{
"Sid": "IAM",
"Effect": "Allow",
"Action": [
"iam:ListAccountAliases",
"iam:SimulatePrincipalPolicy"
],
"Resource": [
"*"
]
}
]
}
3 Set up a trust relationship between the source and target AWS accounts.
In the target AWS account, edit the trust relationship and specify source account
number and source account role.
This action allows only the CloudPoint instance hosted in source AWS account
to assume the target role using the credentials associated with source account's
IAM role. No other entities can assume this role.
5 From the target account's Summary page, edit the Maximum CLI/API session
duration field and set the duration to 1 hour, at a minimum.
This setting determines the amount of time for which the temporary security
credentials that the source account IAM role gets when it assumes target
account IAM role remain valid.
Project ID The ID of the project from which the resources are managed. Listed as project_id in the JSON file.
Client Email The email address of the Client ID. Listed as client_email in the JSON file.
Private Key The private key. Listed as private_key in the JSON file.
Note: You must enter this key without quotes (neither single quotes nor double quotes). Do not enter any spaces or return characters at the beginning or end of the key.
GCP zones
compute.diskTypes.get
compute.diskTypes.list
compute.disks.create
compute.disks.createSnapshot
compute.disks.delete
compute.disks.get
compute.disks.list
compute.disks.setIamPolicy
compute.disks.setLabels
compute.disks.update
compute.disks.use
compute.globalOperations.get
compute.globalOperations.list
compute.images.get
compute.images.list
compute.instances.addAccessConfig
compute.instances.attachDisk
compute.instances.create
compute.instances.delete
compute.instances.detachDisk
compute.instances.get
compute.instances.list
compute.instances.setDiskAutoDelete
compute.instances.setMachineResources
compute.instances.setMetadata
compute.instances.setMinCpuPlatform
compute.instances.setServiceAccount
compute.instances.updateNetworkInterface
compute.instances.setLabels
compute.instances.setMachineType
compute.instances.setTags
compute.instances.start
compute.instances.stop
compute.instances.use
compute.machineTypes.get
compute.machineTypes.list
compute.networks.get
compute.networks.list
compute.projects.get
compute.regionOperations.get
compute.regionOperations.list
compute.regions.get
compute.regions.list
compute.snapshots.create
compute.snapshots.delete
compute.snapshots.get
compute.snapshots.list
compute.snapshots.setLabels
compute.snapshots.useReadOnly
compute.subnetworks.get
compute.subnetworks.list
compute.subnetworks.update
compute.subnetworks.use
compute.subnetworks.useExternalIp
compute.zoneOperations.get
compute.zoneOperations.list
compute.zones.get
compute.zones.list
■ In the dialog box, click to save the file. This file contains the parameters
you need to configure the Google Cloud plug-in. The following is a sample
JSON file showing each parameter in context. The private-key is truncated
for readability.
{
"type": "service_account",
"project_id": "some-product",
"private_key": "-----BEGIN PRIVATE KEY-----\n
N11EvA18ADAN89kq4k199w08AQEFAA5C8KYw9951A9EAAo18AQCnvpuJ3oK974z4\n
.
.
.
weT9odE4ryl81tNU\nV3q1XNX4fK55QTpd6CNu+f7QjEw5x8+5ft05DU8ayQcNkX\n
4pXJoDol54N52+T4qV4WkoFD5uL4NLPz5wxf1y\nNWcNfru8K8a2q1/9o0U+99==\n
-----END PRIVATE KEY-----\n",
"client_email": "[email protected]",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://accounts.google.com/o/oauth2/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com \
/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1 \
/metadata/x509/ email%40xyz-product.iam.gserviceaccount.com"
}
3 When you configure the plug-in from the NetBackup user interface, copy and
paste the reformatted private key into the Private Key field. The reformatted
private_key should look similar to the following:
Resource Group prefix The string with which you want to append all the resources
in a resource group.
Protect assets even if The check box determines whether the assets are protected
prefixed Resource Groups if they are not associated to any resource groups. The
are not found prefixed Resource Group must exist in the same region as
the source asset’s Resource Group.
■ If you are creating multiple configurations for the same plug-in, ensure that they
manage assets from different Tenant IDs. Two or more plug-in configurations
should not manage the same set of cloud assets simultaneously.
■ When you create snapshots, the Azure plug-in creates an Azure-specific lock
object on each of the snapshots. The snapshots are locked to prevent unintended
deletion either from the Azure console or from an Azure CLI or API call. The
lock object has the same name as that of the snapshot. The lock object also
includes a field named "notes" that contains the ID of the corresponding VM or
asset that the snapshot belongs to.
You must ensure that the "notes" field in the snapshot lock objects is not modified
or deleted. Doing so will disassociate the snapshot from its corresponding original
asset.
The Azure plug-in uses the ID from the "notes" fields of the lock objects to
associate the snapshots with the instances whose source disks are either
replaced or deleted, for example, as part of the 'Original location' restore
operation.
■ Azure plug-in supports the following GovCloud (US) regions:
■ US Gov Arizona
■ US Gov Texas
■ US Gov Virginia
■ CloudPoint Azure plug-in does not support the following Azure regions:
Location Region
US ■ US DoD Central
■ US DoD East
■ US Sec West
■ Microsoft Azure gen2 type of virtual machines are not supported. Ensure that
you use a gen1 type image to create a VM.
■ CloudPoint does not support application-consistent snapshots and granular file
restores for Windows systems with virtual disks or storage spaces that are
created from a storage pool. If a Microsoft SQL Server snapshot job uses disks
from a storage pool, the job fails with an error. But if a snapshot job is triggered
for a virtual machine that is in a connected state, the job might be successful.
In this case, the file system quiescing and indexing is skipped. The restore job
for such an individual disk to the original location also fails. In this condition, the
host might move to an unrecoverable state and require a manual recovery.
The following is a custom role definition (in JSON format) that gives CloudPoint the
ability to:
■ Configure the Azure plug-in and discover assets.
■ Create host and disk snapshots.
■ Restore snapshots to the original location or to a new location.
■ Delete snapshots.
"Microsoft.Network/networkSecurityGroups/write",
"Microsoft.Network/publicIPAddresses/delete",
"Microsoft.Network/publicIPAddresses/join/action",
"Microsoft.Network/publicIPAddresses/write",
"Microsoft.Network/routeTables/join/action",
"Microsoft.Network/virtualNetworks/delete",
"Microsoft.Network/virtualNetworks/subnets/delete",
"Microsoft.Network/virtualNetworks/subnets/join/action",
"Microsoft.Network/virtualNetworks/write",
"Microsoft.Resources/*/read",
"Microsoft.Resources/subscriptions/resourceGroups/write",
"Microsoft.Resources/subscriptions/resourceGroups/ \
validateMoveResources/action",
"Microsoft.Resources/subscriptions/tagNames/tagValues/write",
"Microsoft.Resources/subscriptions/tagNames/write",
"Microsoft.Subscription/*/read",
"Microsoft.Authorization/locks/*",
"Microsoft.Authorization/*/read" ],
"NotActions": [ ],
"AssignableScopes": [
"/subscriptions/subscription_GUID",
"/subscriptions/subscription_GUID/ \
resourceGroups/myCloudPointGroup" ] }
"Microsoft.ContainerService/managedClusters/agentPools/read",
"Microsoft.ContainerService/managedClusters/read",
"Microsoft.Compute/virtualMachineScaleSets/write",
"Microsoft.Compute/virtualMachineScaleSet
To create a custom role using powershell, follow the steps in the following Azure
documentation:
https://docs.microsoft.com/en-us/azure/role-based-access-control/tutorial-custom-role-powershell
For example:
To create a custom role using Azure CLI, follow the steps in the following Azure
documentation:
https://docs.microsoft.com/en-us/azure/role-based-access-control/tutorial-custom-role-cli
For example:
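An illustrative command, assuming the custom role definition shown earlier was saved to a file named ReaderSupportRole.json (as referenced in the note below):
# az role definition create --role-definition ReaderSupportRole.json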
Note: Before creating a role, you must copy the role definition given earlier (text in
JSON format) in a .json file and then use that file as the input file. In the sample
command displayed earlier, ReaderSupportRole.json is used as the input file that
contains the role definition text.
For details, follow the steps in the following Azure Stack documentation:
https://docs.microsoft.com/en-us/azure-stack/operator/azure-stack-create-service-principals
Table 4-7 Azure Stack Hub plug-in configuration parameters using AAD
Azure Stack Hub Resource Manager endpoint URL The endpoint URL in the following format, that allows CloudPoint to connect with your Azure resources:
https://management.<location>.<FQDN>
Authentication Resource URL (optional) The URL where the authentication token is sent to.
Azure Stack Hub Resource Manager endpoint URL The endpoint URL in the following format, that allows CloudPoint to connect with your Azure resources:
https://management.<location>.<FQDN>
User Name User name that was provided during installation for the AzureStackAdmin domain administrator account, in the following format:
Authentication Resource URL (optional) The URL where the authentication token is sent to.
The following is a custom role definition (in JSON format) that gives CloudPoint the
ability to:
■ Configure Azure Stack Hub plug-in and discover assets.
■ Create host and disk snapshots.
■ Restore snapshots to the original location or to a new location.
■ Delete snapshots.
"Microsoft.Network/networkSecurityGroups/securityRules/write",
"Microsoft.Network/networkSecurityGroups/write",
"Microsoft.Network/publicIPAddresses/delete",
"Microsoft.Network/publicIPAddresses/join/action",
"Microsoft.Network/publicIPAddresses/write",
"Microsoft.Network/routeTables/join/action",
"Microsoft.Network/virtualNetworks/delete",
"Microsoft.Network/virtualNetworks/subnets/delete",
"Microsoft.Network/virtualNetworks/subnets/join/action",
"Microsoft.Network/virtualNetworks/write",
"Microsoft.Resources/*/read",
"Microsoft.Resources/subscriptions/resourceGroups/write",
"Microsoft.Resources/subscriptions/resourceGroups/ \
validateMoveResources/action",
"Microsoft.Resources/subscriptions/tagNames/tagValues/write",
"Microsoft.Resources/subscriptions/tagNames/write",
"Microsoft.Subscription/*/read",
"Microsoft.Authorization/*/read" ],
"NotActions": [ ],
"AssignableScopes": [
"/subscriptions/subscription_GUID",
"/subscriptions/subscription_GUID/ \
resourceGroups/myCloudPointGroup" ] }
To create a custom role using Powershell, follow the steps in the following Azure
Stack documentation:
https://docs.microsoft.com/en-us/azure-stack/operator/azure-stack-registration-role?view=azs-2008
For example:
To create a custom role using Azure CLI, follow the steps in the following Azure
documentation:
https://docs.microsoft.com/en-us/azure/role-based-access-control/tutorial-custom-role-cli
For example:
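An illustrative command, assuming the role definition was saved to registrationrole.json as referenced in the note below:
# az role definition create --role-definition registrationrole.json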
Note: Before creating a role, you must copy the role definition (text in JSON format)
in a .json file and then use that file as the input file. In the sample command
displayed earlier, registrationrole.json is used as the input file that contains
the role definition text.
Note: The staging location is specific to the subscription ID. You must create one
staging location for each subscription that you are using to restore VMs.
For example:
/resourceGroup/Harsha_RG/storageaccount/harshastorageacc
3 Repeat step 2, for each subscription ID that you are using. Save and close the
file.
Chapter 5
CloudPoint storage array plug-ins
This chapter includes the following topics:
■ For NAS-based storage deployments, ensure that the NetApp shares are
configured using an active junction_path.
■ Ensure that the NetApp user account that you will use to configure the plug-in
has the privileges to perform the following operations on the NetApp array:
■ create snapshot
■ delete snapshot
■ restore snapshot
■ Ensure that the NetApp user account that you will use to configure the plug-in
is configured with http and ontapi access methods.
■ Ensure that the NetApp user account that you will use to configure the plug-in
has the following roles assigned:
■ Default: readonly
■ lun: all
■ volume snapshot: all
■ vserver export-policy: all
Refer to the NetApp documentation for instructions on how to create users and
roles, and assign permissions.
See “NetApp plug-in configuration parameters” on page 103.
See “Supported CloudPoint operations on NetApp storage” on page 104.
While configuring a Data LIF, use the prefix "nbu_nas_" in the interface name for
the SVM. If such a Data LIF exists, NetBackup automatically uses only that LIF for
accessing the snapshots.
Note: This step is optional. If configured, backup reads are restricted to the
dedicated LIF. If not configured, volume snapshots are accessed via any available
Data LIF of the corresponding SVM.
Parameter Description
Nutanix Files File Server The Fully Qualified Domain Name (FQDN) of the Nutanix
FQDN Files File Server.
REST API username The user account that has the permissions to invoke the
Nutanix Files REST APIs on the File Server.
REST API password The password of the Nutanix REST API user account
specified earlier.
The following screen is displayed when you configure the plug-in using the
NetBackup administration console:
■ Nutanix Files File Server does not support point-in-time (PIT) rollback restore
of shares using snapshots. You can use NetBackup-assisted restore of the shares'
data.
■ The maximum snapshot limit for a Nutanix Files share is 20.
The maximum snapshot limit defines the maximum number of policy-triggered
snapshots that are retained for the specified share. When the maximum count
is reached, the next snapshot that is created by the policy results in the deletion
of the oldest snapshot.
You may want to consider the schedule and retention of the NetBackup
policy that protects Nutanix Files shares accordingly.
Discover assets CloudPoint discovers all the shares and their snapshots along
with some of their metadata. Shares that have CFT_BACKUP
capabilities are eligible for snapshot diff based incremental
backups.
Note: Snapshot operations are not supported for nested
shares on Nutanix Files File Server.
Create snapshot diff Nutanix Files provides an API that allows you to create a diff
between two snapshots of a share. This process is called
Changed File Tracking (CFT). When a request to create a
snapshot diff is made, CloudPoint makes a REST API call to
generate the CFT between two snapshots, and then retrieves
and stores the CFT data on the CloudPoint server.
Recommended action:
This issue occurs if the same Nutanix Files file system is configured with more than
one CloudPoint server instance simultaneously.
NetBackup is registered as a partner server on the Nutanix Files platform. A
one-to-one mapping exists between the NetBackup CloudPoint server and the Nutanix
Files. If the same Nutanix Files file system is configured with multiple CloudPoint
instances, it creates a resource conflict. Each CloudPoint server attempts to update
the configuration with the backup job information. This concurrent configuration
update on the single partner server registration fails and causes a conflict error.
NetBackup does not support such a mixed configuration. Ensure that you configure
Nutanix Files with a single instance of the CloudPoint server in the NetBackup
domain.
This issue occurs if the Nutanix Files version in use is not supported by CloudPoint.
Ensure that a supported version of Nutanix Files is installed before you configure
the plug-in.
See “Nutanix Files plug-in configuration prerequisites” on page 107.
Array IP address The IP address of the array that you want to protect. Both IPv6
and IPv4 settings are supported.
Password The password of the EMC Unity array user account specified
earlier.
The following screen is displayed when you configure the plug-in using the
NetBackup administration console:
Category Supported
Software UnityOS
Library storops
Note: CloudPoint automatically installs all the required
libraries during installation.
CloudPoint Description
operation
Discover CloudPoint discovers all the volumes and their snapshots along with their
assets storage group.
Note: CloudPoint discovers assets only up to a depth of 2.
CloudPoint Description
operation
NB<unique_21digit_number>
Delete To delete a snapshot, CloudPoint triggers an SDK method with the required
snapshot snapshot details and confirms that the snapshot has been deleted
successfully on the array.
Restore CloudPoint restores snapshots with the help of SDK methods that support
snapshot different restore paths.
Export When a snapshot export operation is triggered, a new NFS export is created
snapshot over the same filesystem path, on which the backup host is added as a
client with read-only permissions.
You can also perform the following CloudPoint operations on supported Dell EMC
Unity arrays:
■ List all the disks.
■ Create a copy-on-write (COW) snapshot of a LUN.
Note: Snapshot names can be lowercase or uppercase, can contain any ASCII
character, and can include special characters.
Note: You cannot snapshot LUNs that are part of a consistency group. The reason
for this limitation is that restoring a single LUN snapshot would restore the entire
consistency group.
Note: The exported snapshot is attached to the host and is accessible using a
world wide name (WWN) that is assigned by the array.
CloudPoint Description
configuration parameter
Before you configure the plug-in, ensure that the specified user account has
permissions to create, delete, and restore snapshots on the array.
Category Supported
CloudPoint Description
configuration parameter
Array Username The HPE XP Storage Array user account that has permissions
for snapshot operations.
Array Storage Device ID Storage device ID of the array that is already registered with
the HPE XP Configuration Manager.
The following screen is displayed when you configure the plug-in using the
NetBackup administration console:
Note: You can restore a COW snapshot, but not a clone snapshot.
CloudPoint Description
configuration parameter
Before configuring the plug-in, ensure that the user account that you provide to
CloudPoint has an admin role assigned on the RMC server.
Category Supported
Category Supported
Discover assets CloudPoint discovers all the volumes that are created on the
array. If a volume is part of a multi-volume volume set,
CloudPoint scans the volume set and extracts the individual
volume information and then creates a list of all the unique
volumes that are part of the volume set.
Create snapshot CloudPoint takes snapshots of all the volumes on the array.
Delete snapshot CloudPoint deletes the snapshot or the snapshot set (if the parent
volume is part of a volume set).
Restore snapshot When you restore a snapshot, CloudPoint only restores the
particular snapshot corresponding to the selected volume.
The snapshot set is a COW snapshot that can contain other
snapshots belonging to the additional volumes in the volume
set. However, CloudPoint only restores the snapshot for the
selected volume. The other snapshots are not used during
the restore operation.
Note: For a snapshot of a volume set, use name patterns that are used to form the
snapshot volume name. Refer to VV Name Patterns in the HPE 3PAR Command
Line Interface Reference available from the HPE Storage Information Library.
If you want to use CloudPoint to protect volume sets, Veritas recommends that
you configure a single volume in the volume set.
CloudPoint Description
configuration parameter
Array Username The HPE XP Storage Array user account that has permissions
for snapshot operations.
Array Storage Device ID Storage device ID of the array that is already registered with
the HPE XP Configuration Manager.
The following screen is displayed when you configure the plug-in using the
NetBackup administration console:
Create snapshot For snapshots, CloudPoint uses HPE XP Fast Snap Pairs
and triggers a sequence of REST API requests with the
required information and snapshot name. The API returns
the details of the snapshot.
Note: This is not a prerequisite. If you do not create this snapshot group, the
plug-in automatically creates it during the configuration.
■ Ensure that the Hitachi storage arrays are registered with Hitachi Configuration
Manager (HCM). CloudPoint uses the HCM REST APIs to communicate with
the storage arrays.
■ Ensure that the Hitachi storage arrays have the necessary licenses that are
required to perform snapshot operations.
■ Ensure that the user account that you provide to CloudPoint has general read
permissions as well as the permissions to create, delete, export, deport, and
restore snapshots on the storage array.
See “Hitachi plug-in configuration parameters” on page 126.
See “Supported Hitachi storage arrays” on page 127.
See “Supported CloudPoint operations on Hitachi arrays” on page 127.
CloudPoint Description
configuration parameter
Hitachi Configuration The base URL for accessing the Hitachi Configuration
Manager Server URL Manager (HCM) server.
protocol://host-name:port-number/ConfigurationManager
Array Username The name of the user account that has access to the Hitachi
storage array.
Array Password The password of the user account that is used to access the
Hitachi storage array.
Category Supported
VSP G1500
For the latest information on hardware support, refer to the CloudPoint Hardware
Compatibility List (HCL).
See “ Meeting system requirements” on page 17.
Discover assets CloudPoint discovers all the Logical Devices (LDEV) created
on the storage array. The primary LDEV objects appear as
disk assets. The secondary LDEV objects that are part of a
Thin Image (TI) pair appear under snapshots.
Create snapshot NetBackup takes a snapshot of all the LDEV objects that are
attached to a hostgroup.
When CloudPoint takes a snapshot, it performs the following
actions:
■ Gather the following information about the Hitachi (HDS VSP 5000). You will
use these details while configuring the plug-in:
CloudPoint Description
configuration parameter
CloudPoint Description
configuration parameter
Array Username The name of the user account that has access to the Hitachi
storage array.
Array Password The password of the user account that is used to access the
Hitachi storage array.
Array Storage Device ID ID of the storage array device that is already registered with
the Hitachi Configuration Manager.
The following screen is displayed when you configure the plug-in using the
NetBackup administration console:
Create snapshot For snapshots, CloudPoint uses Hitachi Thin Image Pairs
and triggers a sequence of REST API requests with the
required information and snapshot name. The API returns
the details of the snapshot.
NB<unique_21digit_number>
CloudPoint Description
configuration parameter
Username The name of the user account that has access to the InfiniBox
storage array.
Password The password of the user account that is used to access the
InfiniBox storage array.
Discover assets CloudPoint discovers all the SAN volumes (virtual disks) that
are part of storage pools that are created on the InfiniBox
storage array. The plug-in sends a request to the array to
return a list of all the volumes that have the type set as
PRIMARY. Such volumes are considered base volumes
and appear as disk assets.
Create snapshot CloudPoint takes a snapshot of all the SAN volumes that are
part of a storage pool. When a snapshot is created, the
CloudPoint plug-in uses InfiniSDK to send a
create_snapshot method request on the selected volume
and passes a snapshot name as an argument in that request.
■ Gather the following information about the Dell EMC PowerScale (Isilon). You
will use these details while configuring the PowerScale plug-in:
Parameter Description
Parameter Description
The following screen is displayed when you configure the plug-in using the
NetBackup administration console:
CloudPoint Description
operation
Discover CloudPoint discovers all the NFS exports and their snapshots along with
assets some of their metadata.
Note: CloudPoint discovers assets only up to a depth of 2.
Create To create a snapshot, CloudPoint triggers a POST REST API call on the
snapshot nfs_export with the required information and the snapshot name. The API
returns the details of the snapshot.
NB<unique_21digit_number>
Delete To delete a snapshot, CloudPoint triggers a DELETE REST API call with
snapshot the required snapshot details and confirms that the snapshot has been
deleted successfully on the Cluster.
Export When a snapshot export operation is triggered, a new NFS export is created
snapshot over the snapshot path ("/ifs/test_fs/.snapshot/NB15985918570166499611/")
and the backup host is added as a Root Client with the read-only permission.
CloudPoint Description
operation
■ Gather the following information about the Dell EMC PowerMax/VMax. You will
use these details while configuring the plug-in:
Parameter Description
You can configure any port through which you can access
the Unisphere console.
The following screen is displayed when you configure the plug-in using the
NetBackup administration console:
CloudPoint Description
operation
Discover CloudPoint discovers all the volumes and their snapshots along with their
assets storage group.
Note: CloudPoint discovers assets only up to a depth of 2.
Create To create a snapshot, CloudPoint triggers a POST API call on the storage
snapshot group within which the volume resides, with the required information and
snapshot name.
NB<unique_21digit_number>
CloudPoint Description
operation
Delete To delete a snapshot, CloudPoint triggers a DELETE REST API call with
snapshot the required snapshot details and confirms that the snapshot has been
deleted successfully on the array.
Restore CloudPoint uses the storage group snapshot restore API from Unisphere
snapshot to restore the volume to the point-in-time image.
6 Using the export storage group, the host ID, and the port group ID, create a
masking view that attaches the exported storage group
to the host.
support and allows you to protect NFS exports that are hosted in a Qumulo
environment. You can configure CloudPoint to discover and then perform backup
and restore operations on Network File System (NFS) exports.
The CloudPoint plug-in for Qumulo contains the necessary functional logic that
enables NetBackup to discover the NFS exports on the Qumulo cluster and then
trigger snapshot create, export, deport, and delete operations for those exports.
You must configure this plug-in on the NetBackup primary server.
CloudPoint uses the REST API SDK that Qumulo provides (qumulo-api) to communicate
with the Qumulo assets. CloudPoint establishes a connection with Qumulo by using
the RestClient library exposed by the SDK and then uses the SDK methods to discover
the NFS exports and their snapshots that need to be backed up.
Parameter Description
Cluster Address You can add any management IP address or the Fully
Qualified Domain Name (FQDN) of the node. You can
also use the Qumulo DNS round-robin FQDN here.
The following screen is displayed when you configure the plug-in using the
NetBackup administration console:
CloudPoint Description
operation
Discover CloudPoint discovers all the Qumulo file system paths and their snapshots
assets along with some of their metadata. Single-depth discovery is supported.
Create To create a snapshot, CloudPoint triggers an SDK method with the required
snapshot information and snapshot name. The API returns the details of the snapshot.
NB<unique_21digit_number>
Delete To delete a snapshot, CloudPoint triggers an SDK method call with the
snapshot required snapshot details. Then CloudPoint confirms that the snapshot has
been deleted successfully on the cluster.
Export When a snapshot export operation is triggered, a new NFS export is created
snapshot over the same filesystem path on which the backup host is added as a
client with the read-only permission.
■ SQL snapshot or restore and granular restore operations fail if the Windows
instance loses connectivity with the CloudPoint host
■ Disk-level snapshot restore fails if the original disk is detached from the instance
Note: CloudPoint does not support discovery, snapshot, and restore operations
for SQL databases that contain leading or trailing spaces or non-printable
characters. This is because the VSS writer goes into an error state for such
databases. Refer to the following for more details:
https://support.microsoft.com/en-sg/help/2014054/backing-up-a-sql-server-database-
using-a-vss-backup-application-may-fa
MongoDB configuration file path The location of the MongoDB conf file.
MongoDB admin user password The password of the MongoDB admin user
account.
■ For the Linux-based agent, type the following command on the Linux host:
# sudo yum -y install <cloudpoint_agent_rpm_name>
Here, <cloudpoint_agent_rpm_name> is the name of the agent rpm package
you downloaded earlier.
For example:
# sudo yum -y install
VRTScloudpoint-agent-8.3.0.8549-RHEL7.x86_64.rpm
■ For the Windows-based agent, run the agent package file and follow the
installation wizard workflow to install the agent on the Windows application
host.
Note: To allow the installation, admin users will have to click Yes on the
Windows UAC prompt. Non-admin users will have to specify admin user
credentials on the UAC prompt.
8 This completes the agent installation. You can now proceed to register the
agent.
See “Registering the Linux-based agent” on page 152.
See “Registering the Windows-based agent” on page 155.
■ Ensure that you have downloaded and installed the agent on the application
host.
See “Downloading and installing the CloudPoint agent” on page 150.
■ Ensure that you have root privileges on the Linux instance.
■ If the CloudPoint Linux-based agent was already configured on the host earlier,
and you wish to re-register the agent with the same CloudPoint instance, then
do the following on the Linux host:
■ Remove the /opt/VRTScloudpoint/keys directory from the Linux host.
Type the following command on the host where the agent is running:
# sudo rm -rf /opt/VRTScloudpoint/keys
■ If the CloudPoint Linux-based agent was already registered on the host earlier,
and you wish to register the agent with a different CloudPoint instance, then do
the following on the Linux host:
■ Uninstall the agent from the Linux host.
See “Removing the CloudPoint agents” on page 238.
■ Remove the /opt/VRTScloudpoint/keys directory from the Linux host.
Type the following command:
# sudo rm -rf /opt/VRTScloudpoint/keys
■ From the desired CloudPoint server row, click the actions button on the
right and then select Add agent.
■ On the Add agent dialog box, click Create Token.
Note: The token is valid for 180 seconds only. If you do not copy the token
within that time frame, generate a new token.
3 Connect to the Linux host and register the agent using the following command:
# sudo flexsnap-agent --ip <cloudpoint_host_FQDN_or_IP> --token
<authtoken>
Note: You can use flexsnap-agent --help to see the command help.
CloudPoint performs the following actions when you run this command:
■ registers the Linux-based agent
■ creates a /etc/flexsnap.conf configuration file on the Linux instance and
updates the file with CloudPoint host information
■ enables and then starts the agent service on the Linux host
4 Return to the NetBackup Web UI, close the Add agent dialog box, and then
from the CloudPoint server row, click the actions button on the right and then
click Discover.
This triggers a manual discovery of all the assets that are registered with the
CloudPoint server.
5 Click on the Virtual machines tab.
The Linux host where you installed the agent should appear in the discovered
assets list.
Click to select the Linux host. If the host status is displayed as VM Connected
and a Configure Application button appears, it confirms that the agent
registration is successful.
6 This completes the agent registration. You can now proceed to configure the
application plug-in.
See “Configuring the CloudPoint application plug-in” on page 159.
Note: The token is valid for 180 seconds only. If you do not copy the token
within that time frame, generate a new token.
The agent installation directory is the path you specified while installing the
Windows agent using the installation wizard earlier. The default path is
C:\Program Files\Veritas\CloudPoint\.
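By analogy with the Linux registration command shown earlier, the Windows
registration command typically takes the following form when run from the agent
installation directory (a sketch; the placeholders are the same as for the Linux agent):
flexsnap-agent.exe --ip <cloudpoint_host_FQDN_or_IP> --token <authtoken>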
Note: You can use flexsnap-agent.exe --help to see the command help.
NetBackup performs the following actions when you run this command:
■ registers the Windows-based agent
■ creates a C:\ProgramData\Veritas\CloudPoint\etc\flexsnap.conf
configuration file on the Windows instance and updates the file with
NetBackup host information
■ enables and then starts the agent service on the Windows host
Note: If you intend to automate the agent registration process using a script
or a 3rd-party deployment tool, then consider the following:
Even if the agent has been registered successfully, the Windows agent
registration command may sometimes return error code 1 (which generally
indicates a failure) instead of error code 0.
An incorrect return code might lead your automation tool to incorrectly indicate
that the registration has failed. In such cases, you must verify the agent
registration status either by looking into the flexsnap-agent-onhost logs or from
the NetBackup Web UI.
4 Return to the NetBackup Web UI, close the Add agent dialog box, and then
from the CloudPoint server row, click the actions button on the right and then
click Discover.
This triggers a manual discovery of all the assets that are registered with the
CloudPoint server.
5 After the discovery is completed, click the Virtual machines tab and verify the
state of the application host. The Application column in the assets pane displays
the value Configured, which confirms that the plug-in configuration is
successful.
6 Click on the Applications tab and verify that the application assets are
displayed in the assets list.
For example, if you have configured the Microsoft SQL plug-in, the Applications
tab displays the SQL Server instances, databases, and SQL Availability Group
(AG) databases that are running on the host where you configured the plug-in.
You can now select these assets and start protecting them using protection
plans.
2. For each drive letter on which you want to take disk-level, application-consistent
snapshots using CloudPoint, enter a command similar to the following:
Here, maxsize represents the maximum free space usage allowed on the
shadow storage drive. The caret (^) character in the command represents the
Windows command line continuation character.
For example, if the VSS shadow copies of the D: drive are to be stored on the
D: drive and allowed to use up to 80% of the free disk space on D:, the
command syntax is as follows:
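A command of the following form is typically used for this purpose (an illustrative
sketch using the Windows vssadmin utility; if no shadow storage association exists
yet for the drive, the add shadowstorage variant may be required instead):
vssadmin resize shadowstorage /For=D: ^
/On=D: /MaxSize=80%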
asset to a protection plan, the cloud provider of the asset must be the same
as the cloud provider defined in the protection plan.
■ Click Next.
4 On the Schedules and retention panel, specify the desired backup schedule
and then click Next.
5 Configure the remaining options as per your requirement and click Finish to
create the protection plan.
The Protection plans pane displays the plan you created.
6 You can now proceed to assign assets to this protection plan.
See “Subscribing cloud assets to a NetBackup protection plan” on page 162.
For detailed information about managing protection plans, refer to the NetBackup
Web UI Backup Administrator's Guide.
3 On the Applications tab, search and select the asset that you wish to protect and
then click Add Protection.
For example, to protect Microsoft SQL, you can select a SQL instance, a
standalone database, or an Availability Group (AG) database.
Note: If an instance-level SQL Server backup is selected, only the databases that
are online are included in the snapshot. The snapshot does not include
databases that are offline or in an erroneous state.
4 On the Choose a protection plan panel, search and select the appropriate
protection plan and then click Protect.
Verify that on the Applications tab, the Protected by column for the selected
asset displays the protection plan that you just assigned. This indicates that
the asset is now being protected by the configured protection plan.
The backup jobs should automatically get triggered as per the schedule defined
in the plan. You can monitor the backup jobs from the Activity monitor pane.
For more detailed information on how to subscribe assets to a protection plan, refer
to the NetBackup Web UI Backup Administrator's Guide.
that is using the file system and then unmount the file system and perform
restore.
■ Snapshot restore of applications on Logical Volume Manager (LVM) and Logical
Disk Manager (LDM) based storage spaces are not supported.
■ After a restore operation, update the inbound port rules for the restored instance,
to gain remote access to the instance.
■ For AWS/Azure/GCP cloud disk/volume snapshots, you must first detach the
disk from the instance and then restore the snapshot to original location.
■ (Applicable to AWS only) When you restore a host-level application snapshot,
the name of the new virtual machine that is created is the same as the name of
the host-level snapshot that corresponds to the application snapshot.
For example, when you create an application snapshot named OracleAppSnap,
NetBackup automatically creates a corresponding host-level snapshot for it
named OracleAppSnap-<number>. For example, the snapshot name may
resemble OracleAppSnap-15.
Now, when you restore the application snapshot (OracleAppSnap), the name
of the new VM is OracleAppSnap-<number> (timestamp).
Using the example cited earlier, the new VM name may resemble
OracleAppSnap-15 (restored Nov 20 2018 09:24).
Note that the VM name includes "OracleAppSnap-15", which is the name of the
host-level snapshot.
■ (Applicable to AWS only) When you restore a disk-level application snapshot
or a disk snapshot, the new disk that is created does not bear any name. The
disk name appears blank.
You have to manually assign a name to the disk to be able to identify and use
it after the restore.
■ When you restore a snapshot of a Windows instance, you can log in to the newly
restored instance using the original instance's username/password/pem file.
By default, AWS disables generating a random encrypted password after
launching the instance from AMI. You must set Ec2SetPassword to Enabled in
config.xml to generate a new password every time. For more information on
how to set the password, see the following link.
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/
ec2config-service.html#UsingConfigXML_WinAMI
■ With CloudPoint 9.0, a restore of any Amazon EC2 instance created before
June 2019 will not have a product billing code due to an AWS limitation.
■ The volume type of newly created volumes for replicated snapshots matches
the region's default volume type.
If the volume type is not specified, the following default values are used:
■ If you are performing a disk-level snapshot restore to the same location, then
verify that the original disk is attached to the instance, before you trigger a
restore.
If the existing original disk is detached from the instance, then the restore
operation might fail.
See “ Disk-level snapshot restore fails if the original disk is detached from the
instance” on page 180.
■ You can perform only one restore operation on a snapshot at any given time. If
multiple operations are submitted on the same asset, then only the first operation
is triggered and the remaining operations will fail.
This is applicable for all CloudPoint operations in general. CloudPoint does not
support running multiple jobs on the same asset simultaneously.
■ If you intend to restore multiple file systems or databases on the same instance,
then Veritas recommends that you perform these operations one after the other,
in a sequential manner.
Running multiple restore operations in parallel can lead to an inconsistency at
the instance level and the operations might fail eventually. Multiple restore jobs
that need access to any shared asset between them are not allowed. Assets
that participate in the restore job are locked and any other job requiring such
locked assets will fail.
The following types of SQL server deployments are supported:
■ SQL instances and databases, including standalone databases
You can perform snapshot and restore operations at an instance level. When
you take a snapshot of a SQL instance, the snapshot includes all the online
databases that are configured in that instance.
Beginning with NetBackup 8.3 release, you can also perform the same set of
operations at a single database level. You can take a backup of an individual
standalone SQL database that is in an online state and restore it either to the
same location or to an alternate location. You are provided with an option to
overwrite the existing database. Restore to the same location or alternate location
fails if the overwrite existing option is not selected. A disk-level snapshot restore
operation restores the database on the target host. The new database is
discovered in the next discovery cycle and automatically displayed in the UI.
■ SQL databases deployed in an Availability Group (AG)
Beginning with NetBackup 8.3 release, you can perform backup and restore
operations on SQL databases that are part of an AG. When you take a snapshot
of a database in the SQL AG, the snapshots are taken from the replica that is
configured by the SQL database administrator. You can restore a single AG
database to a SQL instance that is configured as a replica in the AG
configuration. The AG database can also be restored to a SQL instance that is
not part of any AG configuration. When restoring to an AG environment, the
database must be removed from the AG before performing the restore.
See “Restore requirements and limitations for Microsoft SQL Server” on page 166.
See “Restore requirements and limitations for Oracle” on page 167.
See “Restore requirements and limitations for MongoDB” on page 169.
This is applicable only if you are restoring the snapshot to replace the current
asset (Overwrite existing option) or restoring the snapshot to the same location
as the original asset (Original Location option).
■ A SQL Server instance disk-level restore to a new location fails if the target
host is not connected or configured.
In such a case, to complete the SQL Server snapshot restore to a new location
successfully, you must perform the restore in the following order:
■ First, perform a SQL Server disk-level snapshot restore.
Ensure that you restore the disk snapshots of all the disks that are used by
SQL Server. These are the disks on which SQL Server data is stored.
See “Recovering a SQL database to the same location” on page 172.
■ Then, after the disk-level restore is successful, perform the additional manual
steps.
See “Additional steps required after a SQL Server snapshot restore”
on page 176.
■ CloudPoint does not support discovery, snapshot, and restore operations for
SQL databases that contain leading or trailing spaces or non-printable characters.
This is because the VSS writer goes into an error state for such databases.
Refer to the following for more details:
https://support.microsoft.com/en-sg/help/2014054/backing-up-a-sql-server-database-
using-a-vss-backup-application-may-fa
■ Before you restore a SQL Availability Group (AG) database, perform the
pre-restore steps manually.
See “Steps required before restoring SQL AG databases” on page 171.
■ New location restore of a system database is not supported.
■ If the destination instance has an AG configured, restore is not supported.
■ If a database already exists at the new location destination and the overwrite existing option
is not selected, the restore job will fail.
■ If the overwrite existing option is selected for a database that is part of an AG,
the restore job will fail.
■ For a system database restore, the SQL Server version must be the same. For user
databases, restore from a higher SQL version to a lower version is not allowed.
■ The destination host where you wish to restore the snapshot must have the
same Oracle version installed as that at the source.
■ If you are restoring the snapshot to a new location, verify the following:
■ Ensure that there is no database with the same instance name running on
the target host.
■ The directories that are required to mount the application files are not already
in use on the target host.
■ Disk-level restore to a new location fails if the NetBackup plug-in for Oracle is
not configured on the target host.
In such a case, to complete the Oracle snapshot restore to a new location
successfully, you must perform the restore in the following order:
■ First, perform an Oracle disk-level snapshot restore.
Ensure that you restore the disk snapshots of all the disks that are used by
Oracle. These are the disks on which Oracle data is stored.
■ Then, after the disk-level restore is successful, perform the additional manual
steps.
See “Additional steps required after an Oracle snapshot restore” on page 168.
These manual steps are not required in case of a disk-level restore in the following
scenario:
■ You are performing a disk-level restore to the original location or an alternate
location
■ The target host is connected to the CloudPoint host
■ The CloudPoint Oracle plug-in is configured on the target host
Perform the following steps:
1 Ensure that the snapshot restore operation has completed successfully and a
new disk is created and mounted on the application host (in case of a disk-level
restore) or the application host is up and running (in case of a host-level
restore).
2 Connect to the virtual machine and then log on to the Oracle database as a
database administrator (sysdba).
3 Start the Oracle database in mount mode using the following command:
# STARTUP MOUNT
5 Open the Oracle database for normal usage using the following command:
# ALTER DATABASE OPEN
6 Add an entry for the newly created database in the Oracle listener.ora and
tnsnames.ora files.
Note: These manual steps are not required in case of a disk-level restore to the
same location.
Here, <diskname> is the name of the new disk that was created after restore,
and <mountdir> is the path where you want to mount the disk.
4 Edit the MongoDB config file /etc/mongod.conf and set the dbPath parameter
value to the <mountdir> path that you specified in the earlier step.
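For illustration, after this edit the storage section of /etc/mongod.conf might
resemble the following, assuming a hypothetical <mountdir> of /mnt/restored_data:
storage:
  dbPath: /mnt/restored_data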
5 Start the MongoDB service on the application host and verify that the service
is running.
Use the following commands:
# sudo systemctl start mongod.service
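To confirm that the service is active, a standard systemd status check can be used,
for example:
# sudo systemctl status mongod.service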
Note: In case of a disk-level restore to a new host, ensure that MongoDB is installed
on that host.
6 Log on to the MongoDB server using the MongoDB client and verify that the
database is running.
Note: If you are restoring the AG database to multiple replicas, perform the entire
restore process on the primary replica first, and then repeat the steps for each
secondary replica.
1. For the database that you want to restore, suspend data movement from the
replica.
From the SQL Server Management Studio, right-click on the database and
select Suspend Data Movement.
2. Remove the database from the AG on the replica.
From the SQL Server Management Studio, right-click on the database and
select Remove Database from Availability Group.
Confirm that the database is no longer part of the AG. Observe that the
database on the primary replica is no longer in synchronized mode, and the
status of the corresponding database on the secondary replica appears as
(Restoring...).
6 On the Recover to original location dialog box, choose the database recovery
options and then click Start recovery to trigger the recovery job.
Restore with RECOVERY Select this option if you want to perform a single restore
on the database and bring it back to a consistent and
operational state.
Restore with NORECOVERY Select this option if you intend to perform multiple
database restores from a group of backups. For
example, if you want to perform a restore using a full
backup snapshot and then restore transaction logs.
Overwrite existing database Select this option if you want the restore operation to
replace the original database.
7 You can monitor the recovery job from the Activity monitor pane.
A status code 0 indicates that the recovery job is successful. You can now
verify that the SQL database is recovered.
6 On the Recover to alternate location dialog box, choose the database recovery
options and then click Start recovery to trigger the recovery job.
The following options are available:
Restore with RECOVERY Select this option if you want to perform a single restore
on the database and bring it back to a consistent and
operational state.
Restore with NORECOVERY Select this option if you intend to perform multiple
database restores from a group of backups. For
example, if you want to perform a restore using a full
backup snapshot and then restore transaction logs.
Overwrite existing database If a database with the same name exists at the target
location, select this option if you want the restore
operation to replace that database.
7 You can monitor the recovery job from the Activity monitor pane.
A status code 0 indicates that the recovery job is successful. You can now
verify that the SQL database is recovered.
8 If recovering SQL database in restoring mode, then after the recovery operation
is complete, verify that the state of the database on the SQL host appears as
(Restoring...).
9 If applicable, you can now manually restore any transaction logs on the
recovered database.
Note: These steps are applicable only in case of a SQL Server instance snapshot
restore to a new location. They are not applicable to a SQL Server database
snapshot restore.
4 View the list of disks on the new host using the following command:
list disk
Identify the new disk that is attached due to the snapshot restore operation
and make a note of the disk number. You will use it in the next step.
5 Select the desired disk using the following command:
select disk <disknumber>
Here, <disknumber> represents the disk that you noted in the earlier step.
6 View the attributes of the selected disk using the following command:
attributes disk
The output displays a list of attributes for the disk. One of the attributes is
read-only, which you will modify in the next step.
7 Modify the read-only attribute for the selected disk using the following command:
attributes disk clear readonly
Do not close the command prompt yet; you can use the same window to perform
the remaining steps described in the next section.
2 View the list of all the shadow copies that exist on the new host. Type the
following command:
list shadows all
Identify the shadow copy that you want to use for the revert operation and
make a note of the shadow copy ID. You will use the shadow ID in the next
step.
3 Revert the volume to the desired shadow copy using the following command:
revert <shadowcopyID>
Here, <shadowcopyID> is the shadow copy ID that you noted in the earlier
step.
4 Exit the DiskShadow utility using the following command:
exit
5 In the Attach Databases dialog box, click Add and then in the Locate Database
Files dialog box, select the disk drive that contains the database and then find
and select all the .mdf and .ldf files associated with that database. Then click
OK.
The disk drive you selected should be the drive that was newly created by the
disk-level snapshot restore operation.
6 Wait for the requested operations to complete and then verify that the database
is available and is successfully discovered by NetBackup.
Note: If you are restoring the AG database to multiple replicas, perform the entire
restore process on the primary replica first, and then repeat the steps for each
secondary replica.
Workaround:
To resolve this issue, restart the Veritas CloudPoint Agent service on the
Windows instance.
Workaround:
If the restore has already failed in the environment, you may have to manually
perform a disk cleanup first and then trigger the restore job again.
Perform the following steps:
1 Log on to the instance for which the restore operation has failed.
Ensure that the user account that you use to connect has administrative
privileges on the instance.
2 Run the following command to unmount the application disk cleanly:
# sudo umount /<application_diskmount>
3 From the NetBackup UI, trigger the disk-level restore operation again.
In general, if you want to detach the original application disks from the instance,
use the following process for restore:
1. First take a disk-level snapshot of the instance.
2. After the snapshot is created successfully, manually detach the disk from the
instance.
For example, if the instance is in the AWS cloud, use the AWS Management
Console and edit the instance to detach the data disk. Ensure that you save
the changes to the instance.
3. Log on to the instance using an administrative user account and then run the
following command:
# sudo umount /<application_diskmount>
If you see a "device is busy" message, wait for some time and then try the
umount command again.
■ Under Backup, verify that the Copy tags to snapshots option is set as per
the original instance.
■ Under Deletion protection, verify that the Enable deletion protection option
is set as per the original instance.
■ If required, verify all the other parameter values and set them as per your
preference.
7 Once you have modified the desired RDS instance properties, click Continue.
8 Under Scheduling of modifications, choose an appropriate option depending
on when you wish to apply the modifications to the instance and then click
Modify DB instance.
9 Verify the RDS instance properties and ensure that the changes have taken
effect.
Chapter 7
Protecting assets with CloudPoint's agentless feature
This chapter includes the following topics:
Note: The following steps are provided as a general guideline. Refer to the operating
system or the distribution-specific documentation for detailed instructions on how
to grant password-less sudo access to a user account.
1. Perform the following steps on a host where you want to configure the agentless
feature.
2. Verify that the host user name that you provide to CloudPoint is part of the
wheel group.
Here, hostuserID is the host user name that you provide to CloudPoint.
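For illustration, the group membership can be checked, and the user added to the
wheel group if needed, with commands such as the following (a sketch; the exact
commands may vary by distribution):
# id hostuserID
# sudo usermod -aG wheel hostuserID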
3. Log out and log in again for the changes to take effect.
4. Edit the /etc/sudoers file using the visudo command:
# sudo visudo
6. In the /etc/sudoers file, edit the entries for the wheel group as follows:
■ Comment out (add a # character at the start of the line) the following line
entry:
# %wheel ALL=(ALL) ALL
■ Uncomment (remove the # character at the start of the line) the following
line entry:
%wheel ALL=(ALL) NOPASSWD: ALL
The changes should appear as follows:
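That is, with the original entry commented out and the NOPASSWD entry active:
# %wheel ALL=(ALL) ALL
%wheel ALL=(ALL) NOPASSWD: ALL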
If you do not see any prompt requesting a password, then the user account
has been granted password-less sudo access.
You can now proceed to configure the CloudPoint agentless feature.
■ You can use fixed or dynamic WMI-IN ports. If you want to configure a fixed
WMI-IN port, see
https://docs.microsoft.com/en-us/windows/win32/wmisdk/setting-up-a-fixed-port-for-wmi
■ Disable User Account Control for the user groups accessing the agentless
feature.
■ For protecting SQL applications, the user account used for connecting to the
cloud host must have the required admin privileges to access the SQL Server.
3 Click to select the host and then click Connect in the top bar.
Note: If you have not assigned any credential to the VM, a message prompts you
to assign the credentials before you can connect the VM. See the Managing
Credentials section, in the Web UI Administrator’s Guide.
Platform Managed Key (PMK) The same PMK as that of the source disk is used.
Customer Managed Key (CMK) The same CMK as that of the source disk is used.
Platform Managed Key (PMK) The same PMK as that of the source disk is used.
Note: For successful restoration, the target restore location must be inside the
scope of the key during restoration.
Platform Managed Key (PMK) The same PMK as that of the source disk is used.
For Azure Stack, you must specify the file path of the root certificates using the
ECA_TRUST_STORE_PATH parameter in the
/cloudpoint/openv/netbackup/bp.conf file on the CloudPoint server. The value
of ECA_TRUST_STORE_PATH must be set to the /cloudpoint/eca/trusted/cacerts.pem
file.
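For illustration, the resulting entry in /cloudpoint/openv/netbackup/bp.conf would
resemble the following (using the path given above):
ECA_TRUST_STORE_PATH = /cloudpoint/eca/trusted/cacerts.pem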
2. NBCA, CloudPoint, and AzureStack are configured with different ECAs: Only
the Azure Stack appliance public root certificates need to be present in the
/cloudpoint/eca/trusted/cacerts.pem file.
3. NBCA and CloudPoint are configured with CPCA, and AzureStack is configured
with ECA:
■ Use the /usr/openv/var/global/wmc/cloud/cacert.pem file available
under the data-mover container for peer and host validations.
■ Configure ECA_TRUST_STORE_PATH on the CloudPoint server.
ECA_TRUST_STORE_PATH should point to a file that contains the
NetBackup root CA certificates, so that vnetd is able to connect back
to the NetBackup servers.
■ 0 (disabled): No CRL/OCSP check is
performed during validation
■ 1 (leaf): CRL/OCSP validation is
performed only for the leaf certificate
■ 2 (chain): CRL/OCSP validation is
performed for the whole chain
Note: The cache is invalidated if any of the ECA tunables are added or modified
manually in the /cloudpoint/flexsnap.conf file.
Note: The scope of the CRL check is limited to Azure and Azure Stack only.
Section 2
CloudPoint maintenance
■ CloudPoint logs
■ Agentless logs
■ A single stream of all CloudPoint logs (vs disparate individual log files) makes
it easy to trail and monitor specific logs
■ Metadata associated with the logs allow for a federated search that speeds up
troubleshooting
■ Ability to integrate and push CloudPoint logs to a third-party tool for analytics
and automation
defined in the plug-in configuration file. For CloudPoint, these plug-in configurations
are stored in a fluentd configuration file that is located at
/cloudpoint/fluent/fluent.conf on the CloudPoint host. The fluentd daemon
reads the output plug-in definition from this configuration file to determine where to
send the CloudPoint log messages.
The following output plug-in definitions are added to the configuration file by default:
■ STDOUT
This is used to send the CloudPoint log messages to
/cloudpoint/logs/flexsnap.log.
The plug-in is defined as follows:
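The CloudPoint-specific definition is not reproduced here; a generic fluentd stdout
output plug-in definition has the following shape (shown only to illustrate the syntax;
the actual definition in /cloudpoint/fluent/fluent.conf may include additional routing
and formatting directives):
<match **>
  @type stdout
</match>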
Additionally, the CloudPoint fluentd configuration file includes plug-in definitions for
the following destinations:
■ MongoDB
■ Splunk
■ ElasticSearch
These plug-in definitions are provided as a template and are commented out in the
file. To configure an actual MongoDB, Splunk, or ElasticSearch target, you can
uncomment these definitions and replace the parameter values as required.
Note that the changes take effect immediately and are applicable only to the newer
log messages that get generated after the change. The file changes do not apply
to the older logs that were generated before the configuration file was updated.
CloudPoint logs
CloudPoint maintains the following logs that you can use to monitor CloudPoint
activity and troubleshoot issues, if any. The logs are stored at
<install_path>/cloudpoint/logs on the CloudPoint host.
Log Description
■ bpbkar, bpcd, bpclntcmd, nbcert, vnetd, vxms, and all other service logs
can be found inside the netbackup directory.
To increase logging verbosity, the bp.conf and nblog.conf files can be updated on the
CloudPoint server at /cloudpoint/openv/netbackup. See the NetBackup Logging
Reference Guide.
Changes to the bp.conf and nblog.conf files take effect when the next
backup from snapshot or restore job runs.
Log retention
The default configuration for datamover logs is as follows:
■ The maximum log retention period is 30 days. Logs older than 30 days are deleted.
■ The default configuration for high and low water marks for datamover logs is
70% and 30% of the size of the "/cloudpoint" mount point. For example, if the usable
size of the /cloudpoint folder is 30 GB, then the high water mark is 21 GB
(70%) and the low water mark is 9 GB (30%). If the logs directory
(/cloudpoint/openv/dm/) size reaches the high water mark, older logs whose
datamover containers are cleaned up and no longer running are considered
for deletion. The logs for such datamover containers are deleted until the low water
mark is reached or no logs remain for the datamover containers that are cleaned
up or no longer running.
Modifying the default configuration:
You can modify the default configuration for log retention by adding a section such as
the following to the flexsnap.conf file on the primary CloudPoint server. Open the flexsnap.conf
file from the path /cloudpoint/flexsnap.conf and add the following section:
[datamover]
high_water_mark = 50
low_water_mark = 20
log_retention_in_days = 60
In the case of CloudPoint extensions, the configuration from the primary server is
used. Once the configuration is changed on the primary CloudPoint server, the
configuration is updated on each CloudPoint extension within one hour. It is not
possible to have separate custom configurations for the primary CloudPoint server or the
CloudPoint extensions; the configuration should only be changed on the primary
CloudPoint server. Though the configuration is the same for the primary server as well as the
CloudPoint extensions, the high water mark and low water mark for log size are
calculated based on the /cloudpoint mount point on each primary server or CloudPoint
extension.
Agentless logs
Logs for agentless connections to cloud instances are present on the cloud instance
at the following locations, based on the platform:
■ Upgrade scenarios
■ Upgrading CloudPoint
■ Post-upgrade tasks
Notes:
■ Direct upgrade from CloudPoint 2.2.x to 9.1 or later is not supported.
■ Upgrading CloudPoint across OS versions is not supported. If you are using
CloudPoint on a RHEL 7.x host, then you can only migrate it to a RHEL 8.3 or
8.4 host. Then follow the upgrade paths mentioned in the above table for
upgrading CloudPoint on a RHEL 8.3 or 8.4 host.
Upgrade scenarios
The following table lists the CloudPoint upgrade scenarios.
Note: Any CloudPoint servers that are not upgraded to version 9.1 or later after
the NetBackup primary server is upgraded to 9.1 or later can cause compatibility
issues.
Only CloudPoint upgrades to version 9.1 or later: If you plan to upgrade only the
CloudPoint servers to 9.1 or later, but do not plan to upgrade NetBackup to 9.1 or
later, do the following:
■ Contact Veritas Technical Support to obtain an Emergency Engineering Binary
(EEB) to support the incompatibility between the CloudPoint and NetBackup
versions.
■ Disable the CloudPoint servers.
■ Apply the EEB patch on the NetBackup primary server and associated media
servers.
■ Upgrade the CloudPoint servers.
■ Then enable the CloudPoint servers.
■ To cancel the pending SLP operation for images that belong to a specific
lifecycle, use nbstlutil cancel -lifecycle <name>
■ After you upgrade CloudPoint, if required, you can upgrade the NetBackup
primary server. Also, you must enable the CloudPoint server from NetBackup
Web UI.
■ After upgrading, all the CloudPoint servers that you want to use for backup from
snapshot or restore from backup jobs, must be re-edited by providing a token
so that NetBackup certificates are generated in the CloudPoint server. See Edit
a CloudPoint server section, in the NetBackup Web UI Cloud Administrator's
Guide.
Upgrading CloudPoint
The following procedures describe how to upgrade your CloudPoint deployment.
During the upgrade, you replace the container that runs your current version of
CloudPoint with a newer container.
The numerical sequence in the file name represents the product version.
2 Copy the downloaded compressed image file to the computer on which you
want to deploy CloudPoint.
# docker load -i VRTScloudpoint-docker-9.1.0.0.9349.img.gz
Make a note of the loaded image name and version that appears towards the
end of the status messages on the command prompt. This represents the new
CloudPoint version that you wish to upgrade to. You will need this information
in the subsequent steps.
Note: The version displayed here is used for representation only. The actual
version will vary depending on the product release you are installing.
4 Make a note of the current CloudPoint version that is installed. You will use
the version number in the next step.
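The command used in the next step to stop the running CloudPoint containers
typically takes the following form, where <current_version> is the version noted
above (an illustrative sketch that assumes the container image is named
veritas/flexsnap-cloudpoint; confirm the exact command for your deployment):
# docker run -it --rm -v /cloudpoint:/cloudpoint -v /var/run/docker.sock:/var/run/docker.sock veritas/flexsnap-cloudpoint:<current_version> stop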
Note: This is a single command. Ensure that you enter the command without
any line breaks.
The CloudPoint containers are stopped one by one. Messages similar to the
following appear on the command line:
Wait for all the CloudPoint containers to be stopped and then proceed to the
next step.
Here, new_version represents the CloudPoint version you are upgrading to.
The -y option passes an approval for all the subsequent installation prompts
and allows the installer to proceed in a non-interactive mode.
For example, using the version number specified earlier, the command will be
as follows:
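An illustrative form of the upgrade command, using the version number from the
downloaded image and assuming the container image is named
veritas/flexsnap-cloudpoint (a sketch; confirm the exact command for your deployment):
# docker run -it --rm -v /cloudpoint:/cloudpoint -v /var/run/docker.sock:/var/run/docker.sock veritas/flexsnap-cloudpoint:9.1.0.0.9349 install -y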
Note: This is a single command. Ensure that you enter the command without
any line breaks.
7 The new CloudPoint installer detects the existing CloudPoint containers that
are running and asks for a confirmation for removing them.
Press Y to confirm the removal of the old CloudPoint containers.
The installer first loads the individual service images and then launches them
in their respective containers.
Wait for the installer to display messages similar to the following and then
proceed to the next step:
8 (Optional) Run the following command to remove the previous version images.
# docker rmi -f <imagename>:<oldimage_tagid>
Example: Veritas_CloudPoint_9.1.0.0.9349.tar.gz
2 Copy the downloaded compressed image file to the computer on which you
want to deploy CloudPoint.
3 Unzip and un-tar the image file and list the contents:
# gunzip VRTScloudpoint-podman-9.1.0.0.9349.tar.gz
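The tar archive can then be extracted, for example:
# tar -xvf VRTScloudpoint-podman-9.1.0.0.9349.tar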
flexsnap-cloudpoint-9.x.x.x.x.img
flexsnap-coordinator-9.x.x.x.x.img
flexsnap-agent-9.x.x.x.x.img
flexsnap-onhostagent-9.x.x.x.x.img
flexsnap-policy-9.x.x.x.x.img
flexsnap-scheduler-9.x.x.x.x.img
flexsnap-config-9.x.x.x.x.img
flexsnap-certauth-9.x.x.x.x.img
flexsnap-rabbitmq-9.x.x.x.x.img
flexsnap-api-gateway-9.x.x.x.x.img
flexsnap-notification-9.x.x.x.x.img
flexsnap-fluentd-9.x.x.x.x.img
flexsnap-nginx-9.x.x.x.x.img
flexsnap-idm-9.x.x.x.x.img
flexsnap-workflow-9.x.x.x.x.img
flexsnap-listener-9.x.x.x.x.img
flexsnap-datamover-9.x.x.x.x.img
flexsnap-mongodb-9.x.x.x.x.img
flexsnap-podman-api.service
flexsnap-podman-containers.service
flexsnap_preinstall.sh
dnsname
4 Run the following command to prepare the CloudPoint host for installation:
# ./flexsnap_preinstall.sh
Note: Ensure that you enter the command without any line breaks.
The CloudPoint containers are stopped one by one. Messages similar to the
following appear on the command line:
Wait for all the CloudPoint containers to be stopped and then proceed to the
next step.
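To confirm that no CloudPoint containers are still running before you continue, you can, for example, run:
# podman ps | grep veritas
The command should return no output once all the containers are stopped.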
Here, new_version represents the CloudPoint version you are upgrading to, for example, '9.1.0.0.9349'.
The -y option passes an approval for all the subsequent installation prompts
and allows the installer to proceed in a non-interactive mode.
Note: Ensure that you enter the command without any line breaks.
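For illustration only (the Podman socket path and options shown here are assumptions and may differ on your host), a Podman-based upgrade command of this form would look similar to the following:
# podman run -it --rm -v /cloudpoint:/cloudpoint -v
/run/podman/podman.sock:/run/podman/podman.sock
veritas/flexsnap-cloudpoint:9.1.0.0.9349 install -y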
7 The installer first loads the individual service images and then launches them
in their respective containers.
The output resembles the following:
8 (Optional) Run the following command to remove the previous version images.
# podman rmi -f <imagename>:<oldimage_tagid>
10 This concludes the upgrade process. Verify that your CloudPoint configuration
settings and data are preserved as is.
11 If CloudPoint is not registered with the NetBackup primary server, you must
register it now.
Refer to the NetBackup Web UI Cloud Administrator's Guide for instructions.
3 Run the following command to prepare the CloudPoint host for installation:
# ./flexsnap_preinstall.sh
Note: Ensure that you enter the command without any line breaks.
The installer first loads the individual service images and then launches them
in their respective containers.
6 (Optional) Run the following command to remove the previous version images.
# podman rmi -f <imagename>:<oldimage_tagid>
■ Run the following commands to install the required packages (lvm2, udev, and
dnsmasq) on the hosts:
# yum install -y lvm2-<version>
# yum install -y lvm2-libs-<version>
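The remaining packages listed above can be installed in the same way. For example (the dnsmasq package name is standard on RHEL 8; udev is typically provided by the systemd packages, so an explicit install may not be required):
# yum install -y dnsmasq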
■ Run the following commands to lock the Podman and Conmon versions to the
supported versions, so that they do not get updated with the yum update:
sudo yum install -y podman-2.2.1-7.module+el8.3.1+9857+68fb1526
sudo yum install -y conmon-2:2.0.20-2.module+el8.3.0+8221+97165c3f
sudo yum install -y python3-dnf-plugin-versionlock
sudo yum versionlock podman* conmon*
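To confirm that the version locks are in place, you can list them, for example:
# sudo yum versionlock list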
■ Verify that specific ports are open on the instance or physical host.
See “Verifying that specific ports are open on the instance or physical host”
on page 34.
Next, you migrate CloudPoint from the RHEL 7.x host to the newly prepared RHEL
8.3 or 8.4 host.
See “Migrate and upgrade CloudPoint on RHEL 8.3 or 8.4” on page 223.
To migrate CloudPoint
1 On the RHEL 7.x host, verify that there are no protection policy snapshots or
other operations in progress and then stop CloudPoint by running the following
command:
Note: This is a single command. Ensure that you enter the command without
any line breaks.
The CloudPoint containers are stopped one by one. Messages similar to the
following appear on the command line:
Wait for all the CloudPoint containers to be stopped and then proceed to the
next step.
2 Migrate the CloudPoint configuration data to the RHEL 8.3 or 8.4 host:
■ If you have upgraded from RHEL 7.x to RHEL 8.3 or 8.4, copy the
/cloudpoint mount point data from the RHEL 7.x system and move it to the
RHEL 8.3 or 8.4 system under the /cloudpoint folder.
■ If you have created a new system with RHEL 8.3 or 8.4:
■ Run the following command to unmount /cloudpoint from the current
host.
# umount /cloudpoint
Note: For detailed instructions to detach or attach the data disks, follow
the documentation provided by your cloud or storage vendor.
■ On the RHEL 8.3 or 8.4 host, run the following commands to create and
mount the disk:
# mkdir /cloudpoint
# mount /dev/<diskname> /cloudpoint
For vendor-specific details, see "Creating and mounting a volume to store
CloudPoint data" on page 32.
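After the disk is mounted on the new host, you can confirm that the migrated configuration data is visible, for example:
# ls -l /cloudpoint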
3 Run the following command to prepare the CloudPoint host for installation:
# ./flexsnap_preinstall.sh
Here, new_version represents the CloudPoint version you are upgrading to, for example, '9.1.0.0.9349'.
The -y option passes an approval for all the subsequent installation prompts
and allows the installer to proceed in a non-interactive mode.
Note: Ensure that you enter the command without any line breaks.
The installer first loads the individual service images and then launches them
in their respective containers.
5 (Optional) Run the following command to remove the previous version images.
# podman rmi -f <imagename>:<oldimage_tagid>
Post-upgrade tasks
You may need to perform the following tasks after a successful upgrade of the
CloudPoint server.
Post-upgrade tasks
1 Upgrade the CloudPoint agents on the Linux and Windows application hosts.
Note: If you are upgrading from CloudPoint 8.3 to 9.0 or 9.1, then you must
manually upgrade the on-host agents. If you are upgrading from CloudPoint
9.0 to 9.1, upgrading the on-host agents is optional.
■ Repeat these steps on all the Linux hosts where you wish to upgrade the
Linux-based agent.
Perform the following steps to upgrade the agent on Windows hosts:
■ Sign in to NetBackup UI and download the newer agent package.
Navigate to Cloud > CloudPoint servers > Actions > Add agent.
■ Stop the Veritas CloudPoint Agent service that is running on the host.
■ Run the newer version of the agent package file and follow the installation
wizard workflow to upgrade the on-host agent on the Windows host.
The installer detects the existing installation and upgrades the package to
the new version automatically.
■ Repeat these steps on all the Windows hosts where you wish to upgrade
the Windows-based agent.
For details on how to download the agent installation package from the
NetBackup UI, refer to the following:
See “Downloading and installing the CloudPoint agent” on page 150.
2 If you want to run backup from snapshot and restore from backup jobs after
the upgrade, you must update the NetBackup configuration so that the upgraded
CloudPoint configuration details are available to NetBackup. After upgrading,
you must re-edit all the CloudPoint servers that you want to use for backup
from snapshot or restore from backup jobs and provide a token, so that
NetBackup certificates are generated. See the Edit a CloudPoint server section
in the NetBackup Web UI Cloud Administrator's Guide.
Perform one of the following actions:
■ From the NetBackup Web UI, edit the CloudPoint server information.
■ In the Web UI, click Workloads > Cloud from the left navigation pane
and then click the CloudPoint servers tab.
■ Select the CloudPoint server that you just upgraded, and then click Edit
from the ellipsis action button on the right.
■ In the Edit CloudPoint server dialog, specify all the requested details.
■ Click Validate to validate the CloudPoint server certificate.
■ In the Token field enter the Standard Host Token.
■ Click Save to update the CloudPoint server configuration.
For more details about the tpconfig command and its options, refer to the Veritas
NetBackup Commands Reference Guide.
Chapter 12
Uninstalling CloudPoint
This chapter includes the following topics:
■ Backing up CloudPoint
■ Restoring CloudPoint
■ Ensure that you disable the CloudPoint server from NetBackup. Depending on
how you have set up your CloudPoint server, whether on-premises or in the cloud,
you can disable the CloudPoint server either from the NetBackup Web UI or from
the NetBackup Administration console (Java UI).
Refer to the NetBackup Web UI Backup Administrator’s Guide or the NetBackup
Snapshot Client Administrator’s Guide for instructions.
■ All the snapshot data and configuration data from your existing installation is
maintained in the external /cloudpoint data volume. This information is external
to the CloudPoint containers and images and is deleted after the uninstallation.
You can take a backup of all the data in the /cloudpoint volume, if desired.
See “Backing up CloudPoint” on page 234.
Backing up CloudPoint
If CloudPoint is deployed in a cloud
To back up CloudPoint when it is deployed in a cloud
1 Stop CloudPoint services.
Use the following command:
# sudo docker run -it --rm -v
/full_path_to_volume_name:/full_path_to_volume_name -v
/var/run/docker.sock:/var/run/docker.sock
veritas/flexsnap-cloudpoint:version stop
For example:
# sudo docker run -it --rm -v /cloudpoint:/cloudpoint -v
/var/run/docker.sock:/var/run/docker.sock
veritas/flexsnap-cloudpoint:8.3.0.8549 stop
Note: This is a single command. Ensure that you enter the command without
any line breaks.
2 Make sure that all CloudPoint containers are stopped. This step is important
because all activity and connections to and from CloudPoint must be stopped
to get a consistent CloudPoint backup.
Enter the following:
# sudo docker ps | grep veritas
This command should not return any actively running CloudPoint containers.
3 (Optional) If you still see any active containers, repeat step 2. If that does not
work, run the following command on each active container:
# sudo docker kill container_name
For example:
# sudo docker kill flexsnap-api
4 After all the containers are stopped, take a snapshot of the volume on which
you installed CloudPoint. Use the cloud provider's snapshot tools.
5 After the snapshot completes, restart CloudPoint services.
Use the following command:
# sudo docker run -it --rm -v
/full_path_to_volume_name:/full_path_to_volume_name -v
/var/run/docker.sock:/var/run/docker.sock
veritas/flexsnap-cloudpoint:version start
Note: This is a single command. Ensure that you enter the command without
any line breaks.
Note: This is a single command. Ensure that you enter the command without
any line breaks.
2 Make sure that all CloudPoint containers are stopped. This step is important
because all activity and connections to and from CloudPoint must be stopped
to get a consistent CloudPoint backup.
Enter the following:
# sudo docker ps | grep veritas
This command should not return any actively running CloudPoint containers.
3 (Optional) If you still see any active containers, repeat step 2. If that does not
work, run the following command on each active container:
# sudo docker kill container_name
For example:
# sudo docker kill flexsnap-api
4 Back up the folder /cloudpoint. Use any backup method you prefer.
For example:
# tar -czvf cloudpoint_dr.tar.gz /cloudpoint
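If you later need to restore from this archive, the corresponding extract command would be, for example:
# tar -xzvf cloudpoint_dr.tar.gz -C /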
Removing the CloudPoint agents
CloudPoint agents manage the CloudPoint plug-ins that discover assets and perform
snapshot operations on the host.
To uninstall the CloudPoint on-host agents
1 Connect to the host where you have installed the CloudPoint agent.
Ensure that the user account that you use to connect has administrative
privileges on the host.
2 For a Linux-based agent, do the following:
Remove the .rpm package using the following command:
# sudo yum -y remove <cloudpoint_agent_package>
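If you are not sure of the exact package name that was installed, you can list it first, for example (the grep pattern is only a suggestion):
# rpm -qa | grep -i flexsnap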
Note: To allow the uninstallation, admin users will have to click Yes on the
Windows UAC prompt. Non-admin users will have to specify admin user
credentials on the UAC prompt.
During uninstallation, the installer performs the following tasks on the CloudPoint
host:
■ Stops all the CloudPoint containers that are running
■ Removes the CloudPoint containers
■ Unloads and removes the CloudPoint images
To uninstall CloudPoint
1. Ensure that you have uninstalled the CloudPoint agents from all the hosts that
are part of the CloudPoint configuration.
See “Removing the CloudPoint agents” on page 238.
2. Verify that there are no protection policy snapshots or other operations in
progress, and then uninstall CloudPoint by running the following command on
the host:
If you are using a proxy server, then, based on the examples provided in the table
earlier, the command syntax is as follows:
# sudo docker run -it --rm -v /cloudpoint:/cloudpoint -e
VX_HTTP_PROXY="http://proxy.mycompany.com:8080/" -e
VX_HTTPS_PROXY="https://proxy.mycompany.com:8080/" -e
VX_NO_PROXY="localhost,mycompany.com,192.168.0.10:80" -v
/var/run/docker.sock:/var/run/docker.sock
veritas/flexsnap-cloudpoint:8.3.0.8549 uninstall
Note: This is a single command. Ensure that you enter the command without
any line breaks.
The installer begins to unload the relevant CloudPoint container packages from
the host. Messages similar to the following indicate the progress status:
Use the following docker command to remove the CloudPoint container images
from the host:
# sudo docker rmi <image ID>
2 If desired, remove the CloudPoint container images from the extension host.
Use the following docker command to view the docker images that are loaded
on the host and remove the CloudPoint images based on their IDs.
# sudo docker images -a
Restoring CloudPoint
You can restore CloudPoint using any of the following methods:
■ Recover CloudPoint using a snapshot you have in the cloud
■ Recover CloudPoint using a backup located on-premises
For example:
# mkdir /cloudpoint
6 Mount the attached volume to the installation directory you just created.
Use the following command:
# mount /dev/device-name
/full_path_to_cloudpoint_installation_directory
For example:
# mount /dev/xvdb /cloudpoint
7 Verify that all CloudPoint related configuration data and files are in the directory.
Enter the following command:
# ls -l /cloudpoint
9 Install CloudPoint.
Use the following command:
Note: This is a single command. Ensure that you enter the command without
any line breaks.
3 Install CloudPoint.
Use the following command:
Note: This is a single command. Ensure that you enter the command without
any line breaks.
■ Troubleshooting CloudPoint
Troubleshooting CloudPoint
Refer to the following troubleshooting scenarios:
■ CloudPoint agent fails to connect to the CloudPoint server if the agent
host is restarted abruptly.
This issue may occur if the host where the CloudPoint agent is installed is shut
down abruptly. Even after the host restarts successfully, the agent fails to
establish a connection with the CloudPoint server and goes into an offline state.
The agent log file contains the following error:
This issue occurs because the RabbitMQ connection between the agent and
the CloudPoint server does not close even in case of an abrupt shutdown of the
agent host. The CloudPoint server cannot detect the unavailability of the agent
until the agent host misses the heartbeat poll. The RabbitMQ connection remains
open until the next heartbeat cycle. If the agent host reboots before the next
heartbeat poll is triggered, the agent tries to establish a new connection with
the CloudPoint server. However, as the earlier RabbitMQ connection already
exists, the new connection attempt fails with a resource locked error.
As a result of this connection failure, the agent goes offline, which causes all
snapshot and restore operations performed on the host to fail.
Workaround:
Restart the Veritas CloudPoint Agent service on the agent host.
■ On Linux hosts, run the following command:
# sudo systemctl restart flexsnap-agent.service
■ On Windows hosts:
Restart the Veritas CloudPoint™ Agent service from the Windows Services
console.
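To confirm that the agent service is running again on a Linux host, you can check its status, for example:
# sudo systemctl status flexsnap-agent.service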
■ Execute the following command on the primary server to get the NBU UUID:
/usr/openv/netbackup/bin/admincmd/nbhostmgmt -list -host
<primary server host name> | grep "Host ID"
■ The snapshot job is successful but the backup from snapshot job fails
with the error "Certificate verification failed" if CloudPoint server's
certificate is revoked
In backup from snapshot operations, NetBackup communicates with the
CloudPoint server while taking the snapshot.
In backup operations, communication happens between the datamover container
on the CloudPoint server and the NetBackup media/primary server. The following
flags should be used to enforce the revocation status check of the certificates
of the respective servers.
■ ECA_CRL_CHECK: Enabled by default and validated during the backup
operation, whereas VIRTUALIZATION_CRL_CHECK is disabled by default
and is validated during snapshot and cloud vendor operations.
■ VIRTUALIZATION_CRL_CHECK: If this flag is enabled and the CloudPoint
machine's certificate is revoked, then the snapshot job fails.
See “Configuring security for Azure and Azure Stack” on page 192.
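As a hedged illustration only (the utility path follows the standard NetBackup layout, and the value shown is an assumption; verify the supported values for your release in the Commands Reference Guide), such configuration options are typically set on the NetBackup server with nbsetconfig:
# echo "VIRTUALIZATION_CRL_CHECK = CHAIN" | /usr/openv/netbackup/bin/nbsetconfig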
■ CloudPoint fails to establish an agentless connection to the Windows
cloud instance
Error 1: <Instance_name>: network connection timed out.
Case 1: CloudPoint server log message:
…
Workaround
To resolve this issue, try the following steps:
■ Verify that SMB port 445 is added in the Network security group and is
accessible from the CloudPoint server.
■ Verify that SMB port 445 is allowed through the cloud instance firewall.
Case 2: CloudPoint Server log message:
Workaround:
To resolve this issue, try the following steps:
■ Verify that the DCOM port (135) is added in the Network security group and
is accessible from the CloudPoint server.
■ Verify that port 135 is allowed through the cloud instance firewall.
Case 3: CloudPoint Server log message:
Error: Cannot connect to the remote host. <IP address> Access denied.
Workaround:
To resolve this issue, try the following steps:
■ Verify that the user has administrative rights.
■ Verify that UAC is disabled for the user.
■ Restart Docker
# systemctl restart docker
■ Restart CloudPoint
# docker run --rm -it
-v /var/run/docker.sock:/var/run/docker.sock
-v /cloudpoint:/cloudpoint veritas/flexsnap-cloudpoint:<version>
start