PowerFlex Software Admin Guide 4.6.x
Administration Guide
November 2024
Rev. 2.0
Notes, cautions, and warnings
NOTE: A NOTE indicates important information that helps you make better use of your product.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid
the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
© 2024 Dell Inc. or its subsidiaries. All rights reserved. Dell Technologies, Dell, and other trademarks are trademarks of Dell Inc. or its
subsidiaries. Other trademarks may be trademarks of their respective owners.
Contents
Chapter 1: Introduction................................................................................................................10
Common PowerFlex terms and associated acronyms.............................................................................................. 10
Configure RMcache ...................................................................................................................................................39
Remove SDSs...............................................................................................................................................................39
Place storage data server in maintenance mode.................................................................................................40
Exit storage data server from maintenance mode.............................................................................................. 40
Cancel entering protected maintenance mode for SDS.................................................................................... 40
Add storage devices................................................................................................................................................... 40
Add acceleration devices............................................................................................................................................41
Storage pools..................................................................................................................................................................... 42
Add storage pools........................................................................................................................................................42
Configure storage pool settings...............................................................................................................................43
Configure RMcache for the storage pool .............................................................................................................43
Using the background device scanner................................................................................................................... 44
Set media type for storage pool.............................................................................................................................. 44
Configuring I/O priorities and bandwidth use...................................................................................................... 45
Acceleration pools............................................................................................................................................................. 45
Add an acceleration pool........................................................................................................................................... 45
Rename an acceleration pool....................................................................................................................................46
Remove an acceleration pool....................................................................................................................................46
Devices................................................................................................................................................................................ 46
Activate devices.......................................................................................................................................................... 46
Clear device errors...................................................................................................................................................... 47
Remove devices........................................................................................................................................................... 47
Rename devices........................................................................................................................................................... 47
Set media type............................................................................................................................................................. 47
Set device capacity limits.......................................................................................................................................... 48
Modify device LED settings...................................................................................................................................... 48
Volumes............................................................................................................................................................................... 48
Add volumes................................................................................................................................................................. 48
Delete volumes.............................................................................................................................................................49
Overwrite volume content........................................................................................................................................ 49
Create volume snapshots.......................................................................................................................................... 49
Set volume bandwidth and IOPS limits.................................................................................................................. 50
Increase volume size...................................................................................................................................................50
Map volumes................................................................................................................................................................ 50
Unmap volumes............................................................................................................................................................ 51
Remove a snapshot consistency group ................................................................................................................. 51
Volume tree vTree migration.....................................................................................................................................51
NVMe targets.................................................................................................................................................................... 55
Add an NVMe target.................................................................................................................................................. 55
Modify an NVMe target.............................................................................................................................................56
Remove an NVMe target...........................................................................................................................................56
Hosts.................................................................................................................................................................................... 56
Add an NVMe host...................................................................................................................................................... 57
Map volumes to hosts................................................................................................................................................ 57
Unmap volumes from hosts...................................................................................................................................... 57
Remove hosts...............................................................................................................................................................58
Configure or modify approved host IP addresses............................................................................................... 58
Approve SDCs.............................................................................................................................................................. 58
Rename hosts...............................................................................................................................................................58
Modify an SDC performance profile....................................................................................................................... 59
Chapter 8: Managing file storage................................................................................................ 60
High availability.................................................................................................................................................................. 60
Multitenancy.......................................................................................................................................................................60
Create the tenant.........................................................................................................................................................61
Modify the tenant........................................................................................................................................................ 61
Delete the tenant......................................................................................................................................................... 61
Overview of configuring NAS servers.......................................................................................................................... 61
Create a NAS server for NFS (Linux or UNIX-only) file systems....................................................................62
Create NAS server for SMB (Windows-only) file systems .............................................................................. 62
Change NAS server settings.................................................................................................................................... 63
NAS server networks................................................................................................................................................. 64
NAS server naming services..................................................................................................................................... 64
NAS server sharing protocols...................................................................................................................................66
NAS server protection and events.......................................................................................................................... 67
NAS server settings.................................................................................................................................................... 67
NAS server security ................................................................................................................................................... 70
About file system storage................................................................................................................................................ 71
File-level retention.......................................................................................................................................................73
Create a file system for NFS exports..................................................................................................................... 73
Create a file system for SMB shares...................................................................................................................... 74
Modify file-level retention......................................................................................................................................... 75
Change file system settings......................................................................................................................................76
Create an SMB share..................................................................................................................................................77
Create an NFS export.................................................................................................................................................77
Create a global namespace....................................................................................................................................... 78
More about file systems............................................................................................................................................ 80
File system quotas ..................................................................................................................................................... 83
File protection.................................................................................................................................................................... 84
Create a protection policy......................................................................................................................................... 84
Create snapshot rules................................................................................................................................................ 85
Create a snapshot....................................................................................................................................................... 85
Modify a snapshot.......................................................................................................................................................86
Delete a snapshot........................................................................................................................................................86
Assign a protection policy to a file system............................................................................................................86
Unassign a protection policy.....................................................................................................................................86
Modify a protection policy.........................................................................................................................................87
Delete a protection policy..........................................................................................................................................87
Modify a snapshot rule............................................................................................................................................... 87
Delete a snapshot rule................................................................................................................................................ 87
Refresh a file system using snapshot..................................................................................................................... 87
Restore a file system from a snapshot...................................................................................................................88
Clone NAS server and file system........................................................................................................................... 88
Clone the file system.................................................................................................................................................. 88
Create a thin clone using a snapshot..................................................................................................................... 89
Overwrite volume content from a snapshot.........................................................................................................90
Set bandwidth and IOPS limits for snapshots.......................................................................................................91
Lock and unlock snapshots........................................................................................................................................91
Delete snapshots.......................................................................................................................................................... 91
Map snapshots.............................................................................................................................................................92
Unmap snapshots........................................................................................................................................................ 92
Increase the size of a snapshot............................................................................................................................... 92
Migrate a snapshot vTree......................................................................................................................................... 92
Pause snapshot vTree migration............................................................................................................................. 93
Roll back snapshot vTree migration........................................................................................................................94
Set snapshot vTree migration priority....................................................................................................................94
Snapshot policies...............................................................................................................................................................94
Create a snapshot policy........................................................................................................................................... 95
Remove snapshot policy............................................................................................................................................ 95
Modify snapshot policy.............................................................................................................................................. 95
Rename snapshot policy............................................................................................................................................ 96
Activate snapshot policy............................................................................................................................................96
Pause snapshot policy................................................................................................................................................96
Assign volumes or snapshots to a snapshot policy............................................................................................. 96
Unassign a volume from a snapshot policy........................................................................................................... 96
Remote protection............................................................................................................................................................ 97
Extract and upload certificates................................................................................................................................ 97
Journal capacity...........................................................................................................................................................98
Add a peer system...................................................................................................................................................... 99
Restart the replication cluster................................................................................................................................100
SDRs............................................................................................................................................................................. 100
Replication consistency group.................................................................................................................................101
Exporting a compliance report for all resources.................................................................................................174
Exporting a configuration report for all resources and resource groups......................................................175
Importing networks....................................................................................................................................................175
Updating passwords for system components.....................................................................................................176
Upgrading PowerFlex Manager from a local repository path.........................................................................224
Editing the upgrade settings.................................................................................................................................. 225
Chapter 19: Enabling audit logging.............................................................................................273
Define the PowerFlex events notification policy..................................................................................................... 273
Define the Ingress notification policy......................................................................................................................... 274
Change Ingress setting to emit audit messages......................................................................................................274
1
Introduction
This document provides procedures for using Dell PowerFlex Manager to administer your Dell PowerFlex system. This document
includes content from the online help for PowerFlex Manager, as well as information about other administrative procedures you
might perform outside of PowerFlex Manager.
It provides the following information:
● Initial configuration and setup
● Performing common tasks
● Expanding PowerFlex storage
● Displaying system information at a glance
● Managing block storage
● Managing file storage
● Protecting your storage environment
● Performing lifecycle operations for a resource group
● Managing resources
● Monitoring events and alerts
● Configuring system settings
● PowerFlex Manager user interface reference
● Additional administration activities
The target audience for this document includes system administrators responsible for managing PowerFlex systems.
For additional PowerFlex software documentation, go to PowerFlex software technical documentation.
See the PowerFlex glossary for more information about PowerFlex terms and acronyms.
2
Revision history
Table 2. Revisions
Date Document revision Description of changes
November 2024 2.0 Updates for release 4.6.1
May 2024 1.0 Initial release
3
Initial configuration and setup
This section includes tasks you need to perform when you first begin using PowerFlex Manager.
Initial configuration
The first time you log in to PowerFlex Manager, the Initial Configuration Wizard opens and prompts you to configure the
basic settings that are required to start using PowerFlex Manager.
Before you begin, have the following information available:
● SupportAssist configuration details.
SupportAssist refers to secure connect gateway, which is used for call home functionality and remote connectivity.
● Information about whether you intend to use a Release Certification Matrix (RCM) or Intelligent Catalog (IC).
● Information about the type of installation you want to perform, including details about your existing PowerFlex instance when
you intend to import from another PowerFlex instance.
To configure the basic settings:
1. On the Welcome page, read the instructions and click Next.
2. On the SupportAssist page, optionally enable SupportAssist and specify SupportAssist connection settings, and click Next.
3. On the Installation Type page, specify whether you want to deploy a new instance of PowerFlex or import an existing
instance, and click Next.
4. On the Summary page, verify all settings for SupportAssist and installation type. Click Finish to complete the initial
configuration.
After completing the Initial Configuration Wizard, you can get started using PowerFlex Manager from the Getting Started
page.
Enabling SupportAssist
SupportAssist is a secure support technology for the data center. SupportAssist refers to secure connect gateway, which
is used for call home functionality and remote connectivity. You can enable SupportAssist as part of the initial configuration
wizard. Alternatively, you can enable it later by adding it as a destination to a notification policy in Events and Alerts.
PowerFlex Manager provides support through integration with the secure connect gateway, ensuring better alignment with Dell
Technologies services initiatives to enhance the user experience. SupportAssist is the functionality that enables this connection.
When you enable SupportAssist, you can take advantage of the following benefits, depending on the service agreement on your
device:
● Automated issue detection - SupportAssist monitors your Dell Technologies devices and automatically detects hardware
issues, both proactively and predictively.
● Automated case creation - When an issue is detected, SupportAssist automatically opens a support case with Dell
Technologies Support.
● Automated diagnostic collection - SupportAssist automatically collects system state information from your devices and
uploads it securely to Dell Technologies. Dell Technologies Support uses this information to troubleshoot the issue.
● Proactive contact - A Dell Technologies Support agent contacts you about the support case and helps you resolve the issue.
The first time that you access the initial configuration wizard, the connection status displays as not configured.
Ensure you have the details about your SupportAssist configuration.
If you do not enable SupportAssist in the initial setup, you can add it later as a destination when adding a notification policy
through Settings > Events and Alerts > Notification Policies.
1. Click Enable SupportAssist.
2. From the Connection Type tab, there are two options:
Related information
Monitoring events and alerts
License management
Add a notification policy
If you are importing an existing PowerFlex deployment that was not managed by PowerFlex Manager, make sure you have the IP
address, username, and password for the primary and secondary MDMs. If you are importing an existing PowerFlex deployment
that was managed by PowerFlex Manager, make sure you have the IP address, username, and password for the PowerFlex
Manager virtual appliance.
1. Specify the type of installation you want to perform:
● Click I want to deploy a new instance of PowerFlex if you do not have an existing PowerFlex deployment and would
like to bypass the import step.
● Click I have a PowerFlex instance to import if you would like to import an existing PowerFlex instance that was not
managed by PowerFlex Manager.
Specify whether you are currently running PowerFlex 3.x or PowerFlex 4.x.
For a PowerFlex 3.x system, provide the following details about the existing PowerFlex instance:
○ IP addresses for the primary and secondary MDMs (separated by a comma with no spaces)
○ Admin username and password for the primary MDM
○ Operating system username and password for the primary MDM
○ LIA password
For a PowerFlex 4.x system, indicate whether the PowerFlex instance is used for Production Storage or a
Management Cluster. The Management Cluster use case applies to PowerFlex appliance and PowerFlex rack
install types that have a dedicated management cluster with PowerFlex as shared storage for the datastore hosting the
Management VMs.
Then, provide the following details about the existing PowerFlex instance:
○ IP addresses for all nodes with a primary and secondary MDM
○ System ID for the cluster
○ LIA password
● Click I have a PowerFlex instance managed by PowerFlex Manager to import if you would like to import an
existing PowerFlex instance directly from an existing PowerFlex Manager virtual appliance.
Provide the following details about the existing PowerFlex Manager virtual appliance:
○ IP address or DNS name for the virtual appliance
○ Username and password for the virtual appliance
2. Click Next to proceed.
For a full PowerFlex Manager migration, the import process backs up and restores information from the old PowerFlex Manager
virtual appliance. The migration process for the full PowerFlex Manager workflow imports all resources, templates, and services
from a 3.8 instance of PowerFlex Manager. The migration also connects the legacy PowerFlex gateway to the MDM cluster,
which enables the Block tab in the user interface to function.
The migrated environment includes the PowerFlex gateway resource. The operating system hostname and asset/service tag are
set to powerflex.
For a system that has just the PowerFlex software, there is no PowerFlex Manager information available after the migration
completes. The migrated environment does not include resources, templates, and services.
After completing the Initial Configuration Wizard, you may need to perform some additional steps. The steps vary depending
on which installation type you selected:
● If you clicked I want to deploy a new instance of PowerFlex on the Installation Type page, you now need to deploy a
new instance of PowerFlex. The Getting Started page provides more information on what to do next.
● If you clicked I have a PowerFlex instance to import on the Installation Type page, you must perform these steps:
1. On the Settings page, upload the compatibility matrix file and upload the latest PowerFlex software catalog.
This catalog only includes the components that are required for an upgrade of PowerFlex.
2. On the Resources page, select the PowerFlex entry, and perform a nondisruptive update.
You do not need a resource group (service) to perform an upgrade of the PowerFlex environment. In addition, PowerFlex
Manager does not support Add Existing Resource Group operations for a PowerFlex software migration. If you want
to be able to perform any deployments, you need a new resource group. Therefore, you must create a new template (or
clone a sample template), and deploy a new resource group from the template.
● If you clicked I have a PowerFlex instance managed by PowerFlex Manager to import on the Installation Type page,
you must perform these steps:
1. On the Settings page, upload the compatibility matrix file and upload the latest repository catalog (RCM or IC).
2. On the Resources page, select the PowerFlex entry, and perform a nondisruptive update.
3. On the Resource Groups page, perform an RCM/IC upgrade on any migrated service that must be upgraded.
The migrated resource groups are initially non-compliant, because PowerFlex Manager is running a later RCM that
includes PowerFlex 4.x. These resource groups must be upgraded to the latest RCM before they can be expanded or
managed with automation operations.
CAUTION: Check the Alerts page before performing the upgrade. Look for major and critical alerts that
are related to PowerFlex Block and File to be sure the MDM cluster is healthy before proceeding.
4. Power off the old PowerFlex Manager VM, the old PowerFlex gateway VM, and the presentation server VM.
The upgrade of the cluster causes the old PowerFlex Manager virtual appliances to stop working.
5. After validating the upgrade, decommission the old instances of PowerFlex Manager, the PowerFlex gateway, and the
presentation server.
Do not delete the old instances until you have had a chance to review the initial configuration and confirm that the old
environment was migrated successfully.
Getting started
The Getting Started page guides you through the common configurations that are required to prepare a new PowerFlex
Manager environment. A green check mark on a step indicates that you have completed the step. Only super users have access
to the Getting Started page.
The following table describes each step:
Upload Compliance File Provide compliance file location and authentication information for use within
PowerFlex Manager. The compliance file defines the specific hardware
components and software version combinations that are tested and certified
by Dell for hyperconverged infrastructure and other Dell products. This step
enables you to choose a default compliance version for compliance or add new
compliance versions.
You can also click Settings > Repositories > Compliance Versions.
NOTE: Before you make an RCM or IC the default compliance version, you
must first upload a suitable compatibility management file under Settings
> Repositories > Compatibility Management.
Define Networks If you plan to perform an advanced CSV-based deployment, this step can be
skipped.
Enter detailed information about the available networks in the environment.
This information is used later during deployments that are based on templates
and resource groups. These deployments use the network information to
configure nodes and switches to have the right network connectivity.
PowerFlex Manager uses the defined networks in templates to specify the
networks or VLANs that are configured on nodes and switches for your
resource groups.
This step is enabled immediately after you perform an initial configuration for
PowerFlex Manager.
You can also click Settings > Networking > Networks.
Discover Resources If you plan to perform an advanced CSV-based deployment, this step can be
skipped.
Grant PowerFlex Manager access to resources (nodes, switches, virtual
machine managers) in the environment by providing the management IP and
credential for the resources to be discovered.
This step is not enabled until you define your networks.
You can also click Resources > Discover Resources.
Manage Deployed Resources (Optional) If you plan to perform an advanced CSV-based deployment, this step can be
skipped.
Add an existing resource group for a cluster that is already deployed and
manage the resources within PowerFlex Manager.
This step is not enabled until you define your networks.
You can also click Lifecycle > Resource Groups > Add Existing Resource
Group.
Deploy Resources If you plan to perform an advanced CSV-based deployment, this step can be
skipped.
Create a template with requirements that must be followed during a
deployment. Templates enable you to automate the process of configuring
and deploying infrastructure and workloads. For most environments, you can
clone one of the sample templates that are provided with PowerFlex Manager
and make modifications as needed. Choose the sample template that is most
appropriate for your environment.
For example, for a hyperconverged deployment, clone one of the
hyperconverged templates.
For a two-layer deployment, clone the compute-only templates. Then clone one
of the storage templates.
If you would like to deploy PowerFlex with a CSV file, click Deploy With Installation File. See the topic on deploying a
PowerFlex cluster using a CSV topology file in the Dell PowerFlex 4.6.x Install and Upgrade Guide for more information.
To revisit the Getting Started page, click Getting Started on the help menu.
Related information
Networking
Discover a resource
Adding an existing resource group
Templates
License management
You can also create a new template. However, for most environments, you can
simply clone one of the sample templates that are provided with PowerFlex
Manager.
Deploy a new resource group Click Lifecycle > Resource Groups. On the Resource Groups page, click Deploy
New Resource Group.
You can only deploy a resource group using a published template.
Related information
Repositories
Networking
Resources
Preparing a template for a resource group deployment
Deploying a resource configuration in a resource group
Import an existing block storage configuration: Click Lifecycle > Resource Groups and click +Add Existing Resource Group.
Be sure to upload a compliance file (if necessary), define the networks, and
discover resources before adding the existing resource group.
Manage block storage On the menu bar, click Block, and choose the type of block storage components
you want to manage:
● Protection Domains
● Fault Sets
● SDSs
● Storage Pools
● Acceleration Pools
● Devices
● Volumes
● NVMe Targets
● Hosts
If you create new objects on the Block tab, you need to update the inventory
on the Resources page, and then click Update Resource Group Details on the
Lifecycle > Resource Groups page for any resource group that requires the
updates.
Related information
Repositories
Networking
Discover a resource
Clone a template
Deploying a resource configuration in a resource group
Managing block storage after PowerFlex deployment
For a file storage deployment, clone one of the following sample templates:
● PowerFlex File
● PowerFlex File - SW Only
6. Click Lifecycle > Resource Groups. On the Resource Groups page, click
Deploy New Resource Group.
For a PowerFlex file cluster, you must have a minimum of two nodes and a
maximum of 16 nodes.
For a PowerFlex file cluster, you must choose Use Compliance File Linux Image
for the OS Image and Compute Only for the PowerFlex Role. Also, select Enable
PowerFlex File.
The sample templates for file storage configuration pull in the PowerFlex
management and PowerFlex data networks from the associated PowerFlex
gateway. In addition to the PowerFlex management and PowerFlex data networks,
you need to include a NAS management network and at least one NAS data
network. One NAS data network is enough, but, for redundancy, two networks
are recommended.
NOTE: Check the network settings carefully, as they are different for standard
configurations and software-only configurations.
Import an existing file storage configuration: Click Lifecycle > Resource Groups and click +Add Existing Resource Group.
Upload a compliance file (if necessary), define the networks, and discover
resources before adding the existing resource group.
PowerFlex Manager does not support importing existing deployments for software-
only NAS environments.
Manage file storage On the menu bar, click File. Then, choose the type of file components you want to
manage:
● NAS Servers
● File Systems
● SMB Shares
If you create new objects on the File tab, you need to update the inventory on
the Resources page. Then, you need to click Update Resource Group Details on
the Lifecycle > Resource Groups page for any resource group that requires the
updates.
Related information
Deploy the PowerFlex file cluster
Managing file storage
Managing components
Once PowerFlex Manager is configured, you can use it to manage the system.
The following table describes common tasks for managing system components and what steps to take in PowerFlex Manager to
initiate each task:
Be sure to upload a compliance file (if necessary), define the networks, and
discover resources before adding the existing resource group.
Perform node expansion 1. Click Lifecycle > Resource Groups. On the Resource Groups page, select a
resource group.
2. On the Resource Group Details tab, under Add Resources, click Add Nodes.
The procedure is the same for new resource groups and existing resource groups.
Remove a node 1. Click Lifecycle > Resource Groups.
2. On the Resource Groups page, select a resource group.
3. On the Resource Group Details tab, under More Actions, click Remove
Resource.
4. Select Delete Resource for the Resource removal type.
Enter service mode (applicable for PowerFlex appliance and PowerFlex rack only):
1. Click Lifecycle > Resource Groups.
2. On the Resource Groups page, select a resource group.
3. On the Resource Group Details tab, under More Actions, click Enter Service Mode.
Exit service mode (applicable for PowerFlex appliance and PowerFlex rack only):
1. Click Lifecycle > Resource Groups.
2. On the Resource Groups page, select a resource group.
3. On the Resource Group Details tab, under More Actions, click Exit Service Mode.
Related information
Lifecycle
Resource groups
Templates
Resources
For information about which resource groups are healthy and in compliance, and
which are not, look at the Resource Groups section.
Monitor software and firmware compliance:
1. Click Lifecycle > Resource Groups.
2. On the Resource Groups page, select a resource group.
3. On the Resource Group Details page, click View Compliance Report.
Perform software and firmware remediation: From the compliance report, view the firmware or software components. Click
Update Resources to update non-compliant resources.
Generate a troubleshooting bundle 1. Click Settings and click Serviceability.
2. Click Generate Troubleshooting Bundle.
You can also generate a troubleshooting bundle from the Resource Groups page:
1. Click Lifecycle > Resource Groups.
2. On the Resource Groups page, select a resource group.
3. On the Resource Group Details page, click Generate Troubleshooting
Bundle.
Download a report that lists compliance details for all resources:
1. Click Resources.
2. Click Export Report and download a compliance report (PDF or CSV) or a configuration report (PDF).
View alerts On the menu bar, click Monitoring > Alerts.
Related information
Lifecycle
If you do not choose the correct options when you remove the resource group,
you could tear down the resource group and destroy it, or leave the servers in an
unmanaged state, and not be able to add the existing resource group.
Manually created volumes outside of PowerFlex Manager: Click Run Inventory on the Resources page to update the
inventory. Then, click Update Resource Group Details on the Lifecycle > Resource Groups page for any resource group
that requires the updated volumes.
Manually deleted volumes outside of PowerFlex Manager: Click Run Inventory on the Resources page to update the
inventory. Then, click Update Resource Group Details on the Lifecycle > Resource Groups page for any resource group
that requires updated information about the deleted volumes. PowerFlex Manager displays a message indicating that the
volumes have been removed from the resource group.
Note that vCLS datastore names are based on cluster names, so any change to a
cluster name renders the vCLS datastore unrecognizable. If you change the cluster
name manually for a deployment, you must rename the datastore hosting the vCLS
VMs to match the new cluster name and then perform an Update Resource Group
Details operation. For example:
● Existing cluster name: cluster1
● vCLS datastore name: powerflex-cluster1-ds-1
● Updated cluster name: cluster1-new
● vCLS datastore name: powerflex-cluster1-new-ds-1
Created new objects on the Block or File tab: Click Run Inventory on the Resources page to update the inventory. Then,
click Update Resource Group Details on the Lifecycle > Resource Groups page for any resource group that requires the
updates.
Related information
Lifecycle
Resource groups
Related information
Deploy a PowerFlex cluster using a CSV topology file from PowerFlex Manager
Adding new components or features by editing the existing CSV file
Where: {token} is the Keycloak token, <PATH_TO_CSV_FILE> is the path to the CSV topology file, and <IP_ADDRESS> is
the PowerFlex Manager IP address.
The command returns a JSON file containing the required cluster configuration. The installation command requires this file in
order to install resources. Save this file.
2. Add packages:
a. Copy the installation packages of the required core components (including ActiveMQ RPM) to: "/usr/local/tomcat/temp/scaleio".
b. Open the shell, and run the following command:
c. Make a note of the block-legacy-gateway ID that is returned in the first line of the command output. For example, in this
sample output, the ID is 76594f9459-mqgxj:
apt-get update
apt-get install openssh-client
scp root@<RPM_MACHINE_IPS>:<RPMS_LOCATION>* temp/scaleio
Where <RPM_MACHINE_IPS> is the IP address where the packages are saved, and <RPMS_LOCATION> is the path to
the location of the packages.
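For illustration only, here is the same copy command with hypothetical values substituted for the placeholders (the IP address
and path shown are examples, not values from your environment):
scp root@192.168.100.20:/root/powerflex-rpms/* temp/scaleio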
Where: {token} is the Keycloak token, <IP_ADDRESS> is the PowerFlex Manager IP address, and <PATH> is the path to the
JSON configuration file that was output in step 1.
4. Installation is asynchronous and consists of four stages: query, upload, install, and configure. Manually monitor and move
from one phase to the next one, using the commands queryPhaseState and moveToNextPhase. You can also monitor
the progress of each stage by viewing events in PowerFlex Manager.
The following is an example of an installation event:
COMMAND_STARTED_ON_OPERATION_INSTALL_AND_PHASE_QUERY
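The monitoring commands themselves are not reproduced here. As a rough sketch only, assuming the legacy gateway exposes
queryPhaseState and moveToNextPhase under the same im/types/.../actions/ URL pattern used in step 5 below and accepts
the Keycloak token as a bearer token (both the paths and the header are assumptions), the calls might look like this:
# Check the state of the current phase (URL path and header are assumptions)
curl -k -H "Authorization: Bearer {token}" https://<IP_ADDRESS>/im/types/ProcessPhase/actions/queryPhaseState
# Move to the next phase after the current phase completes (URL path and header are assumptions)
curl -k -X POST -H "Authorization: Bearer {token}" https://<IP_ADDRESS>/im/types/ProcessPhase/actions/moveToNextPhase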
5. After installation is complete, clear the gateway state. Use the command POST im/types/Command/instances/actions/clear.
For example:
Where: {token} is the Keycloak token, and <IP_ADDRESS> is the PowerFlex Manager IP address.
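The example command itself does not appear above. A minimal sketch of the clear call, using the endpoint named in step 5 and
assuming the Keycloak token is passed as a bearer token (the header name is an assumption), might look like this:
# Clear the gateway state after installation completes (authorization header is an assumption)
curl -k -X POST -H "Authorization: Bearer {token}" https://<IP_ADDRESS>/im/types/Command/instances/actions/clear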
The following terminology around capacity is used in the dashboard. The example shows how data savings are represented in
the dashboard for fine granularity:
● Thin provisioning ratio: If 540.4 TB of storage is provisioned to VMs but only 105.94 TB is used on the volumes, the thin
provisioning ratio is 5.10:1 (540.4 TB / 105.94 TB), or 20% in fine granularity.
● Storage efficiency: The ratio between the logical space needed (written by the operating system) and the actual physical
space that is written to the storage after data reduction (if applicable). In this example, the logical space written is 105.94 TB;
after data reduction, only 31.55 TB is actually written to the storage pool, which is a 3.4:1 reduction ratio (105.94 TB / 31.55 TB).
Resources/Inventory
This section shows the number of resources in the current inventory:
● VM managers
● Nodes
● Switches
● Protection domains
● Storage pools
● Volumes
● Hosts
Resource groups
This section displays a graphical representation of the resource groups deployed based on status. The number next to each icon
indicates the number of resource groups in a particular state. The resource groups are categorized into the following states:
You can monitor node health by viewing the status of the resource group on the Resource Groups page.
If a resource group is in a yellow (or warning) state, one or more nodes are in a warning or failed state. If a resource group is in
a red (or error) state, fewer than two nodes in the resource group are healthy (that is, not in a failed state).
To view the status of a failed node component, hover the cursor on the image of the failed component in the resource group.
Alerts
This section lists the current alerts within the system, categorized by severity level:
● Critical
● Major
● Minor
Related information
Configuring block storage
Protection domains
A protection domain is a set of storage data servers (SDSs) configured in separate logical groups. It may also contain NVMe
target components known as storage data targets (SDTs) and replication components known as storage data replicators
(SDRs). These logical groups allow you to physically and/or logically isolate specific data sets and performance capabilities
within specified protection domains and limit the effect of any specific device failure. You can add, modify, activate, inactivate,
or remove a protection domain in the PowerFlex system.
Fault sets
Fault sets are logical entities that contain a group of SDSs within a protection domain. A fault set can be defined for a set
of servers that are likely to fail together, for example, an entire rack full of servers. PowerFlex maintains mirrors of all chunks
within a fault set on SDSs that are outside of this fault set so that data availability is assured even if all the servers within one
fault set fail simultaneously.
NOTE: Acceleration settings can be configured later from the Block > Acceleration Pools page.
NOTE: You cannot enable zero padding after adding the devices.
Configure RMcache
RMcache uses RAM that is allocated for caching. Its size is limited to the amount of allocated RAM. By default, RMcache
caching is disabled.
● Enable RMcache at the storage pool level for all of the SDSs in the storage pool.
● RMcache must also be enabled at the SDS level.
For a read to be stored in the RAM of a specific SDS, the RMcache feature on that SDS must be enabled, and the relevant
storage pool and the relevant volume must both be configured to use RMcache. Caching only begins after one or more devices
are added to the SDS. The amount of RAM that you may allocate for RMcache is limited and can never be the maximum
available RAM.
Enabling RMcache at the storage pool level allows you to control the cache settings for all SDSs in the storage pool. You can
enable RAM caching for a storage pool and then disable caching on one or more SDSs individually.
Remove SDSs
Remove SDSs and devices gracefully from a system. The removal of some objects in the system can take a long time, because
removal may require data to be moved to other storage devices in the system.
If you plan to replace a device with a device containing less storage capacity, you can configure the device to a smaller capacity
than its actual capacity, in preparation for replacement. This will reduce rebuild and rebalance operations in the system later on.
The system has job queues for operations that take a long time to execute. You can view jobs by clicking the Running Storage
Jobs icon on the right side of the toolbar. Operations that are waiting in the job queue are shown as Pending. If a job in the
queue will take a long time, and you do not want to wait, you can cancel the operation using the Abort button in the Remove
command window (if you left it open), or using the Abort entering Protected Maintenance Mode command from
the More Actions menu.
CAUTION: The Remove command deletes the specified objects from the system. Use the Remove command with
caution.
1. On the menu bar, click Block > SDSs.
2. In the right pane, select the relevant SDS check box and click More Actions > Remove.
3. In the Remove SDS dialog box, click Remove.
4. Verify that the operation has finished and was successful, and click Dismiss.
Storage pools
A storage pool is a set of physical storage devices in a protection domain.
A volume is distributed over all devices residing in the same storage pool. Add, modify, or remove a storage pool in the
PowerFlex system.
5. To enable validation of the checksum value of in-flight data reads and writes, select Use Inflight Checksum.
6. By default, the Use Persistent Checksum option is selected to ensure persistent checksum data validation.
NOTE: This option is enabled only when HDD or SSD with medium granularity is selected.
Option Description
Enable Rebuild/Rebalance: By default, the rebuild/rebalance features are enabled in the system because they are essential for
system health, optimal performance, and data protection.
CAUTION: Rebuilding and rebalancing are essential parts of PowerFlex and should only be disabled temporarily,
in special circumstances. If rebuilds are disabled, redundancy will not be restored after failures. Disabling
rebalance may cause the system to become unbalanced even if no capacity is added or removed.
Enable Inflight Checksum: Inflight checksum protection mode can be used to validate data reads and writes in storage pools, in
order to protect data from data corruption.
Enable Persistent Checksum: Persistent checksum can be used to support the medium granularity layout in protecting the
storage device from data corruption. Select validate on read to validate data reads in the storage pool.
NOTE: If you want to enable or disable persistent checksum, you must first disable the background device
scanner from the storage pool.
Enable Zero Padding Policy: When zero padding policy is enabled, it ensures that every read from an area previously not written
to returns zeros. Zero padding policy is enabled by default on fine granularity-based storage pools and cannot be changed. For
medium granularity storage pools, the zero padding policy can be modified only if the storage pool is empty and has no SDS
devices added to it.
Enable Compression: For fine granularity storage pools, inline compression allows you to gain more effective capacity.
4. Click Apply.
NOTE: Zero padding must be enabled in order to set the background device scanner to data compare mode.
NOTE: High bandwidth may negatively affect system performance and should be used carefully and in extreme
cases only, for example, when there is an urgent need to check certain devices. When setting the background
device scanner bandwidth, take into account the maximum bandwidth of the devices.
4. Click Apply.
5. Verify that the operation has finished and was successful, and click Dismiss.
NOTE: These features affect system performance, and should only be configured by an advanced user.
Acceleration pools
An acceleration pool is a group of acceleration devices within a protection domain. PowerFlex only supports acceleration of fine
granularity storage pools.
Fine granularity acceleration uses NVDIMM devices that are configured for fine granularity storage pools. Configure NVDIMM
acceleration pools for fine granularity acceleration.
NOTE: Dell PowerEdge R660 and R760 servers use software-defined persistent memory (SDPM). If you are using
SDPM, select NVDIMM for the pool type.
Option Description
Add acceleration devices from all SDSs that contribute to the relevant acceleration pool: Optionally, select the Add Devices
To All SDSs check box to add acceleration devices from all SDSs in the protection domain. Otherwise, select acceleration
devices one by one.
Add devices one by one: Enter the path and name of each acceleration device, select the SDS on which the device is installed,
and then click Add Devices. Repeat for all desired acceleration devices in the acceleration pool.
6. Click Create.
7. Verify that the operation has finished and was successful, and click Dismiss.
Devices
Storage devices or acceleration devices are added to an SDS or to all SDSs in the system.
There are two types of devices: storage devices and acceleration devices.
Activate devices
Activate one or more devices that were added to a system using the Test only option for device tests.
1. On the menu bar, click Block > Devices.
2. In the list of devices, select the check boxes of the required devices, and click More Actions > Activate.
3. In the Activate Device dialog box, click Activate.
4. Verify that the operation has finished and was successful, and click Dismiss.
Remove devices
Use this procedure to remove a storage or acceleration device.
Before removing an NVDIMM or SDPM acceleration device, remove all storage devices that are being accelerated by the
persistent memory. Then, remove the NVDIMM or SDPM from its acceleration pool.
1. On the menu bar, click Block > Devices.
2. In the list of devices, find the device that you want to remove, make a note of the SDS in which it is installed, and the
storage pool or acceleration pool to which it belongs.
This information will be useful for adding the device back to the system later.
3. Select the required device, and click More Actions > Remove.
4. In the Remove Device dialog box, verify that you have selected the desired device, and click Remove.
5. Verify that the operation has finished and was successful, and click Dismiss.
A rebuild/rebalance occurs. For each device that was removed from the SDS, the corresponding cell in the Used Size
column counts down to zero, and then the device disappears from the Devices list.
Rename devices
Use this procedure to change the name of a device.
You can view the current device name by displaying the Name column in the device list. The Name column is hidden by default.
When no device name has been defined, the name is set by default to the device's path name.
The device name must adhere to the following rules:
● Contains less than 32 characters
● Contains only alphanumeric and punctuation characters
● Is unique within the object type
1. On the menu bar, click Block > Devices.
2. In the list of devices, select the required device, and click Modify > Rename.
3. In the Rename Device dialog box, enter the new name, and click Apply.
4. Verify that the operation has finished and was successful, and click Dismiss.
NOTE: The capacity assigned to the storage device must be smaller than its actual physical size.
Volumes
Define, configure and manage volumes in the PowerFlex system.
Add volumes
Use the following procedure to add volumes. Dell Technologies highly recommends giving each volume a meaningful name
associated with its operational role.
There must be at least three SDS nodes in the system and there must be sufficient capacity available for the volumes.
PowerFlex objects are assigned a unique ID that can be used to identify the object in CLI commands. The default name for each
volume object is its ID. The ID is displayed in the Volumes list or can be obtained using a CLI query. Define each volume name
according to the following rules:
● Contains less than 32 characters
● Contains only alphanumeric and punctuation characters
● Is unique within the object type
To add one or multiple volumes, perform these steps:
1. On the menu bar, click Block > Volumes.
2. Click + Create Volume.
3. In the Create Volume dialog box, configure the following items:
a. Enter the number of volumes to be created.
● If you type 1, enter a name for the volume.
● If you type a number greater than 1, enter the Volume Prefix and the Starting Number of the volume. This number
will be the first number in the series that will be appended to the volume prefix. For example, if the volume prefix
is Vol%i% and the starting number value is 100, the name of the first volume created will be Vol100, the second
volume will be Vol101, and so on.
b. Select either Thin (default) or Thick provisioning options.
Delete volumes
Remove volumes from PowerFlex.
Ensure that the volume that you are deleting is not mapped to any hosts. If it is, unmap it before deleting it. In addition, ensure
that the volume is not the source volume of any snapshot policy. You must remove the volume from the snapshot policy before
you can remove the volume.
To prevent causing a data unavailability scenario, avoid deleting volumes or snapshots while the MDM cluster is being upgraded.
CAUTION: All data is erased from a deleted volume.
NOTE: Use this command carefully, since this will overwrite data on the target volume or snapshot.
NOTE: If the destination volume is an auto snapshot, the auto snapshot must be locked before you can continue to
overwrite volume content.
1. On the menu bar, click Block > Volumes.
2. In the list of volumes, select the required volume, and click More Actions > Overwrite Content.
3. In the Overwrite Content of Volume dialog box, in the Target Volume tab, the selected volume details are displayed. Click
Next.
4. In the Select Source Volume tab, do the following:
a. Select the source volume from which to copy content.
b. Click the Time Frame button and select the interval from which to copy content. If you choose Custom, select the date
and time and click Apply.
c. Click Next.
5. In the Review tab, review the details and click Overwrite Content.
6. Verify that the operation has finished and was successful, and click Dismiss.
Map volumes
Mapping exposes the volume to the specified host, effectively creating a block device on the host. You can map a volume to one
or more hosts.
Volumes can only be mapped to one type of host: either SDC or NVMe. Ensure that you know which type of host is being
used for each volume, to avoid mixing host types.
For Linux-based devices, the scini device name may change on reboot. Dell Technologies recommends that you mount a
mapped volume to the PowerFlex unique ID, which is a persistent device name, rather than to the scini device name.
To identify the unique ID, run the command ls -l /dev/disk/by-id/.
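For example, a minimal sketch of mounting by the persistent name on a Linux host (the emc-vol identifier format and the mount point are illustrative placeholders; substitute the values shown on your host):
    ls -l /dev/disk/by-id/                                        # PowerFlex volumes typically appear as emc-vol-<systemID>-<volumeID>
    mount /dev/disk/by-id/emc-vol-<systemID>-<volumeID> /mnt/vol01   # mount by the persistent by-id path, not the scini device name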
NOTE: You cannot map a volume if the volume is an auto snapshot that is not locked.
NOTE: You cannot map the volume on the target of a peer system if it is connected to a replication consistency group.
Unmap volumes
Unmap one or more volumes from hosts.
1. On the menu bar, click Block > Volumes.
2. In the list of volumes, select the relevant volumes, and then click Mapping > Unmap volumes.
3. Select the host from which to remove mapping to the volumes.
4. Click Unmap.
5. Verify that the operation has finished and was successful, and click Dismiss.
NOTE: You cannot remove a consistency group that contains auto snapshots.
vTree migration can take a long time, depending on the size of the vTree and the system workload. During migration, the vTree
is fully available for user I/O. vTree migration is done volume block by volume block. When a single block has completed its
migration, the capacity of the block at the source becomes available, and it becomes active in the destination storage pool.
During migration, the vTree has some of its blocks active in the source storage pool, and the remaining blocks active in the
destination storage pool.
NOTE: You can speed up the migration by adjusting the volume migration I/O priority (QoS). The default favors
applications with one concurrent I/O and 10 MB/sec per device. Increasing the 10 MB/sec setting increases the migration
speed in most cases. The maximum value that can be reached is 25 MB/sec. The faster the migration, the higher the impact
might be on applications. To avoid significant impact, the value of concurrent I/O operations per second should not be
increased.
When migrating from a medium granularity storage pool to a fine granularity storage pool, volumes must be zero padded.
You can pause a vTree migration at any time, in the following ways:
● Gracefully—to allow all data blocks currently being migrated to finish before pausing.
● Forcefully—to stop the migration of all blocks currently in progress.
Once paused, you can choose to either resume the vTree migration, or to roll back the migration and have all volume blocks
returned to the original storage pool.
vTree migration might also be paused internally by the system. System pauses happen when a rebuild operation begins at either
the source or destination storage pool. If the migration is paused due to a rebuild operation, it remains paused until the rebuild
ends. If the system encounters a communication error that prevents the migration from proceeding, it pauses the migration and
periodically tries to resume it. After a configurable number of attempts to resume the migration, the migration remains paused,
and no additional retries will be attempted. You can manually resume migrations that were internally paused by the system.
Concurrent vTree migrations are allowed in the system. These migrations are prioritized by the order in which they were
invoked, or by manually assigning the migration to the head or the tail of the migration queue. You can update the priority of a
migration while it is being run. The system strives to adhere to the priority set by the user, but it is possible that volume blocks
belonging to migrations lower in priority are run before ones that are higher in priority. This can happen when a storage pool that
NOTE: vTree migration is a long process and can take days or weeks, depending on the size of the vTree.
Option Description
Add migration to the head of the migration queue    Give this vTree migration the highest priority in the migration priority queue.
Ignore destination capacity    Allow the migration to start regardless of whether there is enough capacity at the destination.
Enable compression    Compression is done by applying a compression algorithm to the data.
Convert vTree from...    Convert a thin-provisioned vTree to thick-provisioned, or vice versa, at the destination, depending on the provisioning of the source volume.
    NOTE: SDCs with a version earlier than v3.0 do not fully support converting a thick-provisioned vTree to a thin-provisioned vTree during migration; after migration, the vTree will be thin-provisioned, but the SDC will not be able to trim it. These volumes can be trimmed by unmapping and then remapping them, or by restarting the SDC. The SDC version will not affect capacity allocation, and a vTree converted from thick to thin provisioning will be reduced in size accordingly in the system.
Save current vTree provisioning state during migration    The provisioning state is returned to its original state before the migration took place.
8. Click Migrate vTree.
The vTree migration is initiated. The vTree appears in both the source and the destination storage pools.
9. At the top right of the window, click the Running Storage Jobs icon and check the progress of the migration of the vTree.
10. Verify that the operation has finished and was successful, and click Dismiss.
NOTE: This feature is available only when there is more than one vTree migration currently in the queue.
Remove a vTree
You can remove a vTree from PowerFlex, as long as it is unmapped.
Ensure that the vTree is unmapped.
1. On the menu bar, click Block > Volumes.
2. In the list of volumes, select the volume that you want to remove.
3. In the right pane, click View Details.
4. In the left pane, click the VTree tab.
5. In the left pane, from the VTree menu on the right, select Remove.
6. In the Remove VTree dialog box, click Remove VTree.
7. Verify that the operation has finished and was successful, and click Dismiss.
NVMe targets
NVMe targets (or SDT components) must be configured on the PowerFlex system side, in order to use NVMe over TCP
technology.
The NVMe target (or SDT component) is a frontend component that translates NVMe over TCP protocol into internal
PowerFlex protocols. The NVMe target provides I/O and discovery services to NVMe hosts configured on the PowerFlex
system. A minimum of two NVMe targets must be assigned to a protection domain before it can serve NVMe hosts, to provide
minimal path resiliency to hosts.
TCP ports, IP addresses, and IP address roles must be configured for each NVMe target (or SDT component). You can assign
both storage and host roles to the same target IP addresses. Alternatively, assign the storage role to one target IP address,
and add another target IP address for the host role. Both roles must be configured on each NVMe target.
● The host port listens for incoming connections from hosts, over the NVMe protocol.
● The storage port listens for connections from the MDM.
Once the NVMe targets have been configured, add hosts to PowerFlex, and then map volumes to the hosts. Connect hosts to
NVMe targets, preferably using the discovery feature.
On the operating system of the compute nodes, NVMe initiators must be configured. Network connectivity is required between
the NVMe targets and the NVMe initiators, and between NVMe targets (or SDT components) and SDSs.
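As an illustrative sketch only (the target address is a placeholder; the ports are the defaults listed below), a Linux NVMe host with the nvme-cli package installed can discover and connect to the NVMe targets:
    nvme discover -t tcp -a <SDT-host-IP> -s 8009       # query the discovery service exposed by the NVMe target
    nvme connect-all -t tcp -a <SDT-host-IP> -s 8009    # connect to all subsystems returned by discovery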
Usage: scli --add_sdt --sdt_ip <IP> [--sdt_ip_role <ROLE>] [--storage_port <PORT>] [--
nvme_port <PORT>] [--discovery_port <PORT>] [--sdt_name <NAME>] (--protection_domain_id
<ID> | --protection_domain_name <NAME>) [--fault_set_id <ID> | --fault_set_name <NAME>]
[--profile <PROFILE>] [--force_clean [--i_am_sure]]
Description: Add an SDT
Parameters:
--sdt_ip <IP> A comma separated list of IP addresses assigned
to the SDT
--sdt_ip_role <ROLE> A comma separated list of roles assigned to each
SDT IP address
Role options: storage_only, host_only, or
storage_and_host
--storage_port <PORT> Port assigned to the SDT (default: 12200)
--nvme_port <PORT> Port to be used by the NVMe hosts (default: 4420)
--discovery_port <PORT> Port to be used by the NVMe hosts for discovery
(default: 8009)
Set to 1 in order to indicate no use of
discovery port
--sdt_name <NAME> A name to be assigned to the SDT
--protection_domain_id <ID> Protection Domain ID
--protection_domain_name <NAME> Protection Domain name
--fault_set_id <ID> Fault Set ID
--fault_set_name <NAME> Fault Set name
--profile <PROFILE> Set the performance profile from the following
options: compact | high_performance
The default is high_performance
--force_clean Clean a previous SDT configuration
--i_am_sure Preemptive approval
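For example, the following invocation (the IP addresses, SDT name, and protection domain name are illustrative placeholders) adds an SDT with both roles on its two IP addresses, using the documented default ports:
    scli --add_sdt --sdt_ip 192.168.10.11,192.168.20.11 \
         --sdt_ip_role storage_and_host,storage_and_host \
         --sdt_name sdt01 --protection_domain_name pd01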
Hosts
Hosts are entities that consume PowerFlex storage for application usage. There are two methods of consuming PowerFlex block
storage: using the SDC kernel driver, or using NVMe over TCP connectivity. Therefore, a host is either an SDC or an NVMe
host.
Once a host is configured, volumes can be mapped to it. In either case, hosts must be mapped to volumes before they can consume storage.
Remove hosts
Remove hosts from PowerFlex.
1. On the menu bar, click Block > Hosts.
2. In the list of hosts, select the relevant host, and click Remove.
3. In the Remove Host dialog box, ensure that you have selected the correct host for removal.
NOTE: You can only remove hosts with unmapped volumes. For SDCs, the host must be disconnected from the
PowerFlex cluster before it can be removed.
4. Click Remove.
5. Verify that the operation has finished and was successful, and click Dismiss.
Approve SDCs
When the restricted host (SDC) mode is set to GUID restriction, approve SDCs before mapping them to volumes.
For more information about the restricted host feature, see the Dell PowerFlex 4.6.x Technical Overview or Dell PowerFlex 4.6.x
Security Configuration Guide.
1. On the menu bar, click Block > Hosts.
2. In the list of hosts, select one or more hosts and click Modify > Approve.
3. In the Approve host dialog box, verify that the hosts listed are the ones that you want to approve, and click Approve.
4. Verify that the operation has finished and was successful, and click Dismiss.
Rename hosts
Use this procedure to rename hosts.
The host name must adhere to the following rules:
● Contains less than 32 characters
● Contains only alphanumeric and punctuation characters
● Is unique within the object type
1. On the menu bar, click Block > Hosts.
2. In the list of hosts, select the relevant host, and click Modify > Rename.
3. In the Rename Host dialog box, enter the new name, and click Apply.
4. Verify that the operation has finished and was successful, and click Dismiss.
Related information
Configuring file storage
High availability
New NAS servers are automatically assigned on a round-robin basis across the available nodes.
The primary node acts as a marker to indicate the node that the NAS server should be running on, based on this algorithm. Once
provisioned, the primary node for a NAS server never changes. The backup node is the node on which the NAS server is backed up for
fault tolerance purposes; the NAS server is moved to this node during a failover event. The backup node is chosen during NAS server
creation by automatic load balancing logic.
Multitenancy
NAS servers can be used to enforce multitenancy. This is useful when hosting multiple tenants on a single system, such as for
service providers.
Since each NAS server has its own independent configuration, it can be tailored to the requirements of each tenant without
impacting the other NAS servers on the same system. Also, each NAS server is logically separated from each other, and clients
that have access to one NAS server do not inherently have access to the file systems on the other NAS servers. File systems
are assigned to a NAS server upon creation and cannot be moved between NAS servers.
If customers want to put their NAS server in a private namespace, they must select a tenant from the drop-down field to
indicate that the NAS server will be a member of the selected tenant.
NOTE: Enabling multitenancy on existing NAS servers requires removing the file interfaces before associating the
NAS servers with a tenant. This may cause loss of access for clients connected to these NAS servers.
● If a customer does not select a tenant while creating the NAS server, the created NAS server resides in the default
namespace.
● Create a tenant with a unique set of VLAN IDs (exclusive to this tenant; the VLAN IDs should not be used elsewhere in
PowerFlex Manager).
● The NAS server clone action creates a new cloned NAS server which will be attached to the same tenant as the original NAS
server.
● File interface details must be configured explicitly after creating the clone, because this configuration is not performed
during the NAS clone creation process.
● Modify NAS Server with Tenant workflow:
○ The NAS server object is a private data object for a particular tenant; shifting the object between multiple tenants may not be
advisable for security reasons.
○ Two types of modify workflow are currently supported:
■ Default tenant to a specific tenant
■ Specific tenant to a default tenant
NOTE: Both these operations are possible only if the NAS server is not associated with any file interface. There
may be loss of access for connected NAS clients while file interfaces are dissociated from or associated with the NAS
server in an existing system.
Wizard Screen    Description
Details    Select a protection domain, and enter a NAS server name, description, and network information. Starting with this release, you can configure a file interface using IPv6 addressing. This support extends to both Production and Backup roles. IPv6 is just for front-end client connectivity access.
    NOTE: You cannot reuse VLANs that are being used for the management and storage networks.
User Mapping    Select automatic user mapping, or enable the default account for both a Windows and Linux user.
Summary    Review the content and select Back to go back and make any corrections.
3. Select Create NAS Server to create the NAS server.
The Status window opens, and you are redirected to the NAS Servers page once the server is listed on the page.
Once you have created the NAS server for NFS, you can continue to configure the server settings.
If you enabled Secure NFS, you must continue to configure Kerberos.
Select the NAS server to continue to configure, or edit the NAS server settings.
NOTE: You cannot reuse VLANs that are being used for the management and storage networks.
● If you are configuring a stand-alone NAS server, obtain the workgroup and NetBIOS name. Then define what to use for the
stand-alone local administrator of the SMB server account.
● If you are joining the NAS server to the Active Directory (AD), ensure that NTP is configured on your storage system. Then
obtain the SMB computer name (used to access SMB shares), Windows domain name, and the username and password of a
domain administrator or user who has a sufficient domain access level to join the AD.
1. Select File > NAS Servers.
2. Select + Create NAS Server, and enter the information in the wizard.
Wizard Screen    Description
Details    Enter a NAS server name, description, and network details.
Sharing Protocol    Select Sharing Protocol
    Select SMB.
NOTE: If you select SMB and an NFS protocol, you automatically enable the NAS server to support
multiprotocol. Multiprotocol configuration is not described in this help system.
Windows Server Settings
Select Standalone to create a stand-alone SMB server or Join to the Active Directory Domain to
create a domain member SMB server.
If you join the NAS server to the AD, optionally select Advanced to change the default NetBIOS name and
organizational unit.
DNS
If you selected Join to the Active Directory Domain, it is mandatory to add a DNS server.
Optionally, enable DNS if you want to use a DNS server for your stand-alone SMB server.
User Mapping
Keep the default Enable automatic mapping for unmapped Windows accounts/users, to support
joining the active directory domain. Automatic mapping is required when joining the active directory
domain.
Summary Review the content and select Back to go back and make any corrections.
3. Select Create NAS Server.
The Status window opens, and you are redirected to the NAS Servers page once the server is added.
Once you have created the NAS server for SMB, you can continue to configure the server settings, or create file systems.
Select the NAS server to continue to configure or edit the NAS server settings.
Option Description
Modify    Modify the NAS server name, description, or Tenant (optional).
File interfaces
Presents the NAS server file interfaces.
You can add more interfaces, and define which will be the preferred interface to use with the NAS server. PowerFlex assigns a
preferred interface by default, but you can set which interface to use first for production and backup, and for IPv4 and IPv6.
Select Ping and enter an IP address or host name to test the connectivity from the NAS server to an external resource.
Select the interface to modify or delete it. All properties of the file interface can be modified.
DNS
DNS is required for Secure NFS.
You cannot disable DNS for:
● NAS servers that support multiprotocol file sharing.
● NAS servers that support SMB file sharing and that are joined to an Active Directory (AD).
NOTE: If you use NFS secure with a custom realm, you must upload a keytab file.
You can also configure LDAP with SSL (LDAP Secure) and can enforce the use of a Certificate Authority certificate for
authentication.
Local files
Local files can be used instead of, or in addition to, DNS, LDAP, and NIS directory services.
To use local files, configuration information must be provided through the files listed in PowerFlex Manager. If you have not
created your own files ahead of time, use the download arrows to download the template for the type of file you need to
provide, and then upload the edited version.
To use local files for NFS or FTP access, the passwd file must include an encrypted password for the users. This password is
used for FTP access only. The passwd file uses the same format and syntax as a standard Unix system, so you can leverage
this to generate the local passwd file. On a Unix system, use useradd to add a new user and passwd to set the password for
that user. Then, copy the hashed password from the /etc/shadow file, add it to the second field in the /etc/passwd file,
and upload the /etc/passwd file to the NAS server.
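A minimal sketch of this on a Unix host, assuming a hypothetical user named ftpuser1:
    useradd ftpuser1                 # add the user
    passwd ftpuser1                  # set the password that will be used for FTP access
    grep '^ftpuser1:' /etc/shadow    # the second field of this entry is the hashed password
    # Copy that hash into the second field of the ftpuser1 entry in the passwd file,
    # then upload the edited passwd file to the NAS server through PowerFlex Manager.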
SMB server
This section contains options for configuring a Windows server.
If you are configuring SMB with Kerberos security, you must select to Join to the Active Directory Domain.
If you select to Join to the Active Directory Domain, you must have added a DNS server. You can add a DNS server from the
Naming Services card.
If the Windows Server Type is set to Join to the Active Directory Domain, then Enable automatic mapping for
unmapped Windows accounts/users must be selected in the User Mapping tab.
NFS server
This section contains options for configuring an NFS, or NFS secure server for Linux or UNIX support.
Specify a Linux or UNIX credential cache retention period.    In the Credential cache retention field, enter a time period (in minutes) for which access credentials are retained in the cache. The default value is 15 minutes.
    NOTE: This option can lead to better performance, because it reuses the UNIX credential from the cache instead of building it for each request.
You can configure Secure NFS when you create or modify a multiprotocol NAS server or one that supports Unix-only shares.
Secure NFS provides Kerberos-based user authentication, which can provide network data integrity and network data privacy.
There are two methods for configuring Kerberos for secure NFS:
● Use the Kerberos realm (Windows realm) associated with the SMB domain configured on the NAS server, if any. If you
configure secure NFS using this method, SMB support cannot be deleted from the NAS server while secure NFS is enabled
and configured to use the Windows realm.
This method of configuring secure NFS requires fewer steps than configuring a custom realm.
● Configure a custom realm to point to any type of Kerberos realm (AD, MIT, Heimdal). If you configure secure NFS using this
method, you must upload the keytab file to the NAS server being defined.
FTP
FTP or Secure FTP can only be configured after a NAS server has been created.
Passive mode FTP is not supported.
FTP access can be authenticated using the same methods as NFS or SMB. Once authentication is complete, access is the same
as SMB or NFS for security and permission purposes. The method of authentication that is used depends on the format of the
username:
● If the format is domain@user or domain\user, SMB authentication is used. SMB authentication uses the Windows
domain controller.
User mapping
If you are configuring a NAS server to support both types of protocols, SMB and NFS, you must configure the user mapping.
When configured for both types of protocol, the user mapping requires that the NAS server is joined with an AD domain. You
can configure the SMB server with AD from the SMB Server card.
If the Windows Server Type is set to Join to the Active Directory Domain, then you must select Enable automatic
mapping for unmapped Windows accounts/users.
Kerberos
Kerberos is a distributed authentication service designed to provide strong authentication with secret-key cryptography. It
works on the basis of "tickets" that allow nodes communicating over a non-secure network to prove their identity in a secure
manner. When configured to act as a secure NFS server, the NAS server uses the RPCSEC_GSS security framework and
Kerberos authentication protocol to verify users and services.
● Using Kerberos with NFS requires that DNS and a UDS are configured for the NAS server and that all members of the
Kerberos realm are registered in the DNS server.
● For authentication, Kerberos can be configured with either a custom realm or with Active Directory (AD).
● The storage system must be configured with an NTP server. Kerberos relies on the correct time synchronization between
the KDC, servers, and the client network.
File-level retention
File-level retention protects files from modification or deletion until a specified retention period.
Protecting a file system using file-level retention enables you to create a permanent and unalterable set of files and directories.
File-level retention ensures data integrity and accessibility, simplifies archiving procedures for administrators, and improves
storage management flexibility.
There are two levels of file-level retention:
● Enterprise: Protects data from changes that are made by users and storage administrators using SMB, NFS, and FTP. An
administrator can delete a file-level retention enterprise file system which contains locked files.
● Compliance: Protects data from changes that are made by users and storage administrators using SMB, NFS, and FTP.
An administrator cannot delete a file-level retention compliance file system which contains locked files. File-level retention
compliance complies with SEC rule 17a-4(f).
File-level retention enterprise is intended for self-regulated archiving while file-level retention compliance is intended to assist
those companies that must comply with regulations such as SEC rule 17a-4(f). The main difference is that file-level retention
compliance performs write verification to ensure the quality and accuracy of the storage media recording process and prevents
any modification or deletion of protected files by administrators.
The following restrictions apply:
● File-level retention is available on PowerFlex Manager 4.6 and later.
● Once a file system is created, file-level retention cannot be enabled, disabled, or changed to another type.
● File-level retention compliance does not support restoring from a snapshot.
● When refreshing using a snapshot, both file systems must be at the same file-level retention level.
● A cloned file system has the same file-system retention level as the source and cannot be modified.
● GNS creation is not allowed for a file-system retention enabled file system.
File-level retention is configured during file system creation.
This is because the metadata space is reserved from the usable file system capacity.
6. Configure security, access permissions, and host access for the system.
Option Description
Minimum Security    Select Sys to allow users with non-secure NFS or Secure NFS to mount an NFS export on the file system. If you are not configuring Secure NFS, you must select this option.
    If you are creating a file system with Secure NFS, then you can choose from the following options:
    ● Kerberos to allow any type of Kerberos security for authentication (krb5/krb5i/krb5p).
    ● Kerberos with Integrity to allow both Kerberos with integrity and Kerberos with encryption security for user authentication (krb5i/krb5p).
    ● Kerberos with Encryption to allow only Kerberos with encryption security for user authentication (krb5p).
Default Access    The default access that is applied to the hosts unless the hosts are configured with a different access permission.
Add Host    Enter hosts individually, or add hosts by uploading a properly formatted CSV file. You can download the CSV file first to obtain a template.
Option Description
Local path    The path to the file system storage resource on the storage system. This path specifies the unique location of the share on the storage system.
    ● Each NFS share must have a unique local path. PowerFlex automatically assigns this path to the initial export created within a new file system. The local path name is based on the file system name.
    ● Before you can create more exports within an NFS file system, create a directory to share from a Linux/UNIX host that is connected to the file system. Then you can create an export from PowerFlex Manager and set access permissions accordingly.
Export path    The path used by the host to connect to the export. PowerFlex creates the export path based on the IP address of the host, and the name of the export. Hosts use either the file name or the export path to mount or map to the export from a network host.
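As an illustration (the server address, export name, and mount point below are placeholders, not values from this guide), a Linux host typically mounts an NFS export with a standard mount command:
    mount -t nfs4 192.168.30.50:/fs01 /mnt/fs01    # requires the NFS client package (nfs-utils/nfs-common) on the host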
7. Optionally, add a protection policy to the file system.
If you are adding a protection policy to the file system, the policy must have been created before creating the file system.
Only snapshots are supported for protection of file systems. Replication is not supported for file systems.
8. Review the summary and click Create File System.
The file system is added to the File System tab. If you created an export simultaneously, then the export displays in the
NFS export tab.
Option Description
Select NAS Server    Select a NAS server enabled for SMB.
    This is because the metadata space is reserved from the usable file system capacity.
SMB Share    Optionally, configure the initial SMB share. You can add shares to the file system after the initial file system configuration.
Protection Policy    Optionally, provide a protection policy for the file system.
    NOTE: PowerFlex supports snapshots for file storage protection. Replication protection is not supported for file systems. If a protection policy is set for both replication and snapshot protections, PowerFlex implements the snapshot policy on the file system, and ignores the replication policy for the file system.
Option Description
Select File System    Select a file system that has been enabled for SMB.
Select a snapshot of the file system    Optionally, select one of the file system snapshots on which to create the share.
    Only snapshots are supported for file system protection policies. Replication is not supported for file systems.
SMB share details    Enter a name, and local path for the share. When entering the local path:
    ● You can create multiple shares with the same local path on a single SMB file system. In these cases, you can specify different host-side access controls for different users, but the shares within the file system have access to common content.
    ● A directory must exist before you can create shares on it. If you want the SMB shares within the same file system to access different content, you must first create a directory on the Windows host that is mapped to the file system. Then, you can create corresponding shares using PowerFlex Manager. You can also create and manage SMB shares from the Microsoft Management Console.
    PowerFlex Manager also creates the SMB share path, which the host uses to connect to the share. The export path is the IP address of the file system, and the name of the share. Hosts use either the file name or the share path to mount or map to the share from a network host.
3. Click Next.
Once you create a share, you can modify the share from PowerFlex Manager or using the Microsoft Management Console.
To modify the share from PowerFlex Manager, select the share from the list on the SMB Share page, and click Modify.
Option Description
NameSpace for NFS    For NFS only. The namespace provides access over the NFS (NFSv4) protocol only.
NameSpace for SMB    For SMB only. The namespace provides access over the SMB protocol only.
NameSpace for both NFS and SMB    For SMB and NFS. The namespace provides access over both NFS and SMB protocols.
3. Click Next.
4. Choose a NAS server on which to create the GNS and click Next.
A NAS server can host multiple namespaces.
Option Description
Create new Filesystem (Recommended)    Create a dedicated file system to host the GNS.
Select from available General Type Filesystems    Select an existing file system. Do not select the existing NFS file system if the file system root has already been exported.
6. Specify GNS details for the namespace:
Option Description
Name of the server    The name allows remote hosts to connect to the Global Namespace over the network.
Description    Optional description for the namespace.
Local Path    Local path relative to the NAS server. This path is the local path to the storage resource or any existing subfolder of the storage resource that is shared over the network. The path is relative to the
7. Review the Summary page and choose one of the following options to create the GNS.
● Run in the background
● Add to the Job List to schedule later
The root shares and exports are automatically created on the file system.
NOTE: These shares and exports cannot be deleted without deleting the namespace.
Option Description
Local path    A path name relative to the namespace root (without a forward slash or trailing slash). Remote hosts use this path to connect to the target file system.
Description    (Optional) Description of the link.
Client Cache Timeout (Seconds)    Client cache timeout is the amount of time that clients cache namespace root referrals. A referral is an ordered list of targets that a client system receives from a namespace server when the user accesses a namespace root or folder with targets in the namespace.
Add Target UNC (Universal Naming Convention) Path    Select the target UNC path from the available exports or shares, or add the target UNC path manually.
Oplocks Enabled    (Enabled by default) Opportunistic file locks (oplocks, also known as Level 1 oplocks) enable SMB clients to buffer file data locally before sending it to a server. SMB clients
Read/Write, allow Root    Hosts have permission to read and write to the storage resource or share, and to grant or revoke access permissions (for example, permission to read, modify, and execute specific files and directories) for other login accounts that access the storage. The root of the NFS client has root access to the share.
    NOTE: Unless the hosts are part of a supported cluster configuration, avoid granting Read/Write access to more than one host.
    NOTE: VMware ESXi hosts must have Read/Write, allow Root access in order to mount an NFS datastore using NFSv4 with NFS Owner:root authentication.
Read-only, allow Root (NFS Exports)    Hosts have permission to view the contents of the share, but not to write to it. The root of the NFS client has root access to the share.
Protocol Encryption Enables SMB encryption of the network traffic through the share. SMB encryption is
supported by SMB 3.0 clients and above. By default, access is denied if an SMB 2 client
attempts to access a share with protocol encryption enabled.
You can control this by configuring the RejectUnencryptedAccess registry key on the
NAS Server. 1 (default) rejects non-encrypted access and 0 allows clients that do not
support encryption to access the file system without encryption.
Access-Based Enumeration Filters the list of available files and directories on the share to include only those to
which the requesting user has read access.
NOTE: Administrators can always list all files.
Branch Cache Enabled Copies content from the share and caches it at branch offices. This allows client
computers at branch offices to access the content locally rather than over the WAN.
Branch Cache is managed from Microsoft hosts.
NFS export and SMB share names must be unique at the NAS server level
per protocol. However, you can specify the same name for SMB shares and
NFS exports.
Local path The path to the file system storage resource on the storage system. This
path specifies the unique location of the share on the storage system.
SMB shares
● An SMB file system allows you to create multiple shares with the same
local path. In these cases, you can specify different host-side access
controls for different users, but the shares within the file system will all
access common content.
● A directory must exist before you can create shares on it. Therefore,
if you want the SMB shares within the same file system to access
different content, you must first create a directory on the Windows host
that is mapped to the file system. Then, you can create corresponding
shares using PowerFlex Manager. You can also create and manage SMB shares
from the Microsoft Management Console.
NFS exports
● Each NFS export must have a unique local path. PowerFlex
automatically assigns this path to the initial export created within a new
file system. The local path name is based on the file system name.
● Before you can create additional exports within an NFS file system,
you must create a directory to share from a Linux/UNIX host that
is connected to the file system. Then, you can create a share from
PowerFlex Manager and set access permissions accordingly.
SMB share path or export path The path used by the host to connect to the share or export.
PowerFlex Manager creates the export path based on the IP address of the
file system, and the name of the export or share. Hosts use either the file
name or the export path to mount or map to the export or share from a
network host.
Quotas are supported on SMB, NFS, FTP, NDMP, and multiprotocol file systems.
You can set the following types of quotas for a file system.
NOTE: If you change the limits for a tree quota, the changes take effect
immediately without disrupting file system operations.
User quota on a quota tree Limits the amount of storage that is consumed by an individual user storing data on
the quota tree.
Quota limits
To track space consumption without setting limits, set Soft Limit and Hard Limit to 0, which indicates no limit.
File protection
PowerFlex file storage uses thin clones and snapshots to protect file system data, and clones to repurpose NAS servers.
Create a snapshot
Creating a snapshot saves the state of the file system and all files and data within it at a particular point in time. You can use
snapshots to restore the entire file system to a previous state.
Before creating a snapshot, consider:
● Snapshots are not full copies of the original data. Do not rely on snapshots for mirrors, disaster recovery, or high-availability
tools. Because snapshots are partially derived from the real-time data of the file systems, they can become inaccessible if
the storage resource becomes inaccessible.
● Although snapshots are space efficient, they consume overall system storage capacity. Ensure that the system has enough
capacity to accommodate snapshots.
● When configuring snapshots, review the snapshot retention policy that is associated with the storage resource. You may
want to change the retention policy in the associated rules or manually set a different retention policy, depending on the
purpose of the snapshot.
● Manual snapshots that are created with PowerFlex Manager are retained for one week after creation (unless configured
otherwise).
● If the maximum number of snapshots is reached, no more can be created. In this case, to enable creation of new snapshots,
you are required to delete existing snapshots.
1. Click File > File Systems.
2. Select the check box of the relevant file system to select it and click Protection > Create Snapshot.
3. In the Create Snapshot of File System panel, enter a unique name for the snapshot, and set the Local Retention Policy.
NOTE: The retention period is set to one week by default. You can set a different retention period or select No
Automatic Deletion for indefinite retention.
Modify a snapshot
Use the following procedure to modify a snapshot.
1. Log in to PowerFlex Manager.
2. Click File > File Systems.
3. Select the file systems from the list, click View Details > Snapshots and select the snapshot to modify.
4. Click Modify.
5. Click Apply.
Delete a snapshot
Use the following procedure to delete a snapshot.
1. Log in to PowerFlex Manager.
2. Click File > File Systems.
3. Select the file systems from the list, click View Details > Snapshots and select the snapshot to delete.
4. Click Delete.
5. Click OK on the warning screen.
5. Click Restore.
NOTE: You can also restore the file system by selecting the file system snapshot from the Snapshots view. Click File
> File Systems, and select the file systems from the list, click View Details, and click More Actions > Restore from
Snapshot.
6. Select the cloned NAS server and click View Details > Network > File Interface > +Add.
7. Provide the interface information and click Add.
8. For routes to external services, select the cloned NAS server and click View Details > Network > Routes to External
Services > +Add.
9. Provide the information and click Add.
NOTE: Events publishing can only be enabled when events are configured on the selected NAS server.
5. Click Clone.
The cloned file system is now added to the file system list.
6. If the file system is enabled with a global namespace, activate a cloned namespace after it is cloned:
a. Click File > Global Namespace and select the cloned namespace.
b. Click Activate.
c. Provide the unique global namespace and click Apply.
Snapshots
A snapshot is a copy of a volume at a specific point in time. With snapshots, you can overwrite the contents of the volume, map
the snapshot to a host, and set bandwidth and IOPS limits.
Create snapshots
PowerFlex lets you create instantaneous snapshots of one or more volumes.
The Use secure snapshots option prohibits deletion of the snapshots until the defined expiration period has elapsed.
When you create a snapshot of more than one volume, PowerFlex generates a consistency group by default. The snapshots
under the consistency group are taken simultaneously for all listed volumes, thereby ensuring their consistency. You can view
the consistency group by clicking View Details in the right pane and then clicking the Snapshots Consistency Group tab in
the left pane.
NOTE: The consistency group is for convenience purposes only. No protection measures are in place to preserve the
consistency group. You can delete members from the group.
1. On the menu bar, click Protection > Snapshots.
2. In the list, select the relevant volumes, and click More > Create Snapshot.
3. In the Create snapshot of volume dialog box, enter the name of the snapshot. You can accept the default name, or create
a snapshot name according to the following rules:
● Contains less than 32 characters
● Contains only alphanumeric and punctuation characters
● Is unique within the object type
4. Optionally, configure the following parameters:
● To set read-only permission for the snapshot, select the Read Only check box.
● To prevent deletion of the snapshot during the expiration period, select the Use secure snapshot check box, enter the
Expiration Time, and select the time unit type.
5. Click Create Snapshot.
6. Verify that the operation has finished and was successful, and click Dismiss.
NOTE: Use this command very carefully, since this will overwrite data on the target volume or snapshot.
NOTE: If the destination volume is an auto snapshot, the auto snapshot must be locked before you can continue to
overwrite volume content.
1. On the menu bar, click Protection > Snapshots.
2. In the list of snapshots, select the snapshot to be overwritten, and then click More Actions > Overwrite Content.
3. Click Next.
4. In the Select Source Volume tab, do the following:
a. Select the source volume from which to copy content.
b. Click Time Frame, and then select the interval from which to copy content. If you choose Custom, select the date and
time and click Apply.
c. Click Next.
5. In the Review tab, review the details and then click Overwrite Content.
6. Verify that the operation has finished and was successful, and click Dismiss.
Delete snapshots
Remove snapshots of volumes from PowerFlex.
Ensure that the snapshot that you are removing is not mapped to any hosts. If the snapshot is mapped, unmap it before
removing it. In addition, ensure that the snapshot is not the source volume of any snapshot policy. You must remove the volume
from the snapshot policy before you can remove the snapshot.
To prevent causing a data unavailability scenario, avoid deleting volumes or snapshots while the MDM cluster is being upgraded.
CAUTION: Removing a snapshot erases all the data in the corresponding snapshot.
Map snapshots
Mapping exposes the snapshot to the specified host, effectively creating a block device on the host. You can map a snapshot to
one or more hosts.
For Linux-based devices, the scini device name may change on reboot. Dell Technologies recommends that you mount a
mapped volume to the /dev/disk/by-id unique ID, which is a persistent device name, rather than to the scini device
name.
To identify the unique ID, run the command ls -l /dev/disk/by-id/.
You can also identify the unique ID using VMware. In the VMware management interface, the device is called EMC Fibre
Channel Disk, followed by an ID number starting with the prefix eui.
NOTE: You cannot map a volume if the volume is an auto snapshot that is not locked, and you cannot map the volume on
the target of a peer system if it is connected to an RCG.
1. On the menu bar, click Protection > Snapshots.
2. In the list of snapshots, select one or more snapshots, and then click Mapping > Map.
3. In the Map Volume dialog box, select one or more hosts to which you want to map the snapshots.
4. Click Map, and click Apply.
5. Verify that the operation has finished and was successful, and then click Dismiss.
Unmap snapshots
Unmap one or more snapshot volumes from hosts.
1. On the menu bar, click Protection > Snapshots.
2. In the list of snapshots, select the relevant snapshots, and click Mapping > Unmap.
3. Select the host from which to remove mapping to the snapshots.
4. Click Unmap, and click Apply.
5. Verify that the operation has finished and was successful, and click Dismiss.
NOTE: vTree migration is a long process and can take days or weeks, depending on the size of the vTree.
Option Description
Add migration to the head of the migration queue    Give this vTree migration the highest priority in the migration priority queue.
Ignore destination capacity    Allow the migration to start regardless of whether there is enough capacity at the destination.
Enable compression    A compression algorithm is applied to the data.
Convert vTree from...    Convert a thin-provisioned vTree to thick-provisioned, or a thick-provisioned vTree to thin-provisioned at the destination, depending on the provisioning of the source volume.
    NOTE: SDCs with a version earlier than v3.0 do not fully support converting a thick-provisioned vTree to a thin-provisioned vTree during migration; after migration, the vTree will be thin-provisioned, but the SDC will not be able to trim it. These volumes can be trimmed by unmapping and then remapping them, or by restarting the SDC. The SDC version will not affect capacity allocation, and a vTree converted from thick to thin provisioning will be reduced in size accordingly in the system.
Save current vTree provisioning state during migration    The provisioning state is returned to its original state before the migration took place.
8. Click Migrate vTree.
The vTree migration is initiated. The vTree appears in both the source and the destination storage pools.
9. At the top right of the page, click the Running Storage Jobs icon, and check the progress of the migration.
10. Verify that the operation has finished and was successful, and click Dismiss.
Snapshot policies
Snapshot policies enable you to define policies for the number of snapshots that PowerFlex takes of one or more defined
volumes at a given time.
Snapshots are taken according to the defined rules. You can define the time interval between two rounds of snapshots, as
well as the number of snapshots to retain, in a multi-level structure. For example, take snapshots every x minutes/hours/days/
weeks. You can define a maximum of six levels, with the first level having the most frequent snapshots.
For example:
Rule: Take snapshots every 60 minutes
Retention Levels:
● 24 snapshots
● 7 snapshots
● 4 snapshots
After defining the parameters, select the source volume to add to the snapshot policy. You can add multiple source volumes to
a snapshot policy, but only a single policy per source volume is allowed. Only one volume per vTree may be used as a source
volume of a policy (any policy).
When you remove the source volume from the policy, you must choose how to handle snapshots. Snapshots created by the
policy are referred to as auto snapshots; an auto snapshot is indicated by the snapshot policy displayed for the snapshot.
● If the source volume has auto snapshots, you cannot unassign the source volume from the snapshot policy. You can remove
auto snapshots from Snapshots.
where <FILE_PATH> is the location where the certificate will be saved, and the file name. For example: /opt/
source_sys.crt
3. Copy the certificate file to the target system.
where <PATH_TO_LOCAL_COPY_OF_TARGET_CERT> is the copy of the target system's certificate that you copied to
the source system.
7. On the target system, add the source system's certificate:
where <PATH_TO_LOCAL_COPY_OF_SOURCE_CERT> is the copy of the source system's certificate that you copied to
the target system.
Journal capacity
You should consider several factors when allocating journal capacity.
Journal capacity is defined as a percentage of the total storage capacity in the storage pool and must equal at least 28 GB per
SDR. In general, journal capacity should be at least 5% of replicated usable capacity in the protection domain, including volumes
used as source and targets. It is important to assign enough storage capacity for the replication journal.
The amount of capacity needed for the journal is based on the following factors:
● Minimal requirements—108 GB multiplied by the number of SDR sessions. The number of SDR sessions is equal to the
number of SDRs plus one. The extra SDR session is to ensure that a new session can be allocated for an SDR during a
system upgrade.
● The capacity needed to sustain an outage—application WAN bandwidth multiplied by the planned WAN outage. In general,
journal capacity in the protection domain should be at least 5% of the application pool. If the application has a heavy I/O
load, larger capacity should be used. Similarly, if a long outage is expected, a larger capacity should be allocated. If there
are replicated volumes in more than one storage pool in the protection domain, this calculation should be repeated for
each storage pool, and the allocated journal capacity in the protection domain must at least equal the sum of the size per
application pool.
Use the following steps to calculate exactly how much journal capacity to allocate:
1. Select the storage pools from which to allocate the journal capacity. The journal is shared between all of the replicated RCGs
in the protection domain. Journal capacity should be allocated from storage pools that are as fast as (or faster than) the storage pool
of the fastest replicated application in the protection domain. It should use the same drive technology and about the same
drive count and distribution in nodes.
2. Consider the minimal requirements needed (28 GB multiplied by the number of SDR sessions) and the capacity needed to
sustain an outage. Journal capacity will be at least the maximum of these two factors.
3. Take into account the expected outage time. The minimal outage allowance is one hour, but at least three hours are
recommended.
4. Calculate the journal capacity needed per application: maximal application throughput x maximum outage interval.
5. Since journal capacity is defined as a percentage of storage pool capacity, calculate the percentage of capacity based on the
previously calculated needs.
For example:
● An application generates 1 GB/s of writes.
● The maximal supported outage is 3 hours (3 hours x 3600 seconds = 10800 seconds).
● The journal capacity needed for this application is 1 GB/s x 10800 s = ~10.547 TB.
● Since the journal capacity is expressed as a percentage of the storage pool capacity, divide the 10.547 TB by the size of the
storage pool, which is 200 TB: 100 x 10.547 TB/200 TB = 5.27%. Round this up to 6%.
● Repeat this for each application being replicated.
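The same arithmetic as the example above can be expressed as a short shell calculation (the values repeat the example; bc must be installed on the host where you run it):
    APP_WRITE_GBPS=1                                # application write throughput, GB/s
    OUTAGE_SECONDS=$((3 * 3600))                    # planned WAN outage, seconds
    POOL_TB=200                                     # storage pool capacity, TB
    JOURNAL_TB=$(echo "$APP_WRITE_GBPS * $OUTAGE_SECONDS / 1024" | bc -l)   # ~10.547 TB
    PERCENT=$(echo "100 * $JOURNAL_TB / $POOL_TB" | bc -l)                  # ~5.27%, round up to 6%
    echo "Journal capacity: $JOURNAL_TB TB (~$PERCENT% of the pool)"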
When a protection domain has several storage pools and several replicated applications, the journal capacity should be
calculated as in the example above, and the capacity can be divided among all the storage pools (provided they are fast
enough). For higher availability, the journal capacity should be allocated from multiple storage pools.
NOTE: When storage pool capacity is critical, capacity cannot be allocated for new volumes or for expanding existing
volumes. This behavior must be taken into account when planning the capacity available for journal usage. The volume
SDRs
Storage Data Replicators (SDRs) are responsible for processing all I/Os of replication volumes.
All application I/Os of replicated volumes are processed by the source SDRs. At the source, application I/Os are sent by the
SDC to the SDR. The I/Os are sent to the target SDRs and stored in their journals. The target SDRs’ journals apply the I/Os to
the target volumes. A minimum of two SDRs are deployed at both the source and target systems to maintain high availability. If
one SDR fails, the MDM directs the SDC to send the I/Os to an available SDR.
Add SDR
Add an additional SDR to an existing PowerFlex system, or add back a new SDR if a previously existing SDR was removed. A
minimum of two SDRs are required on each replication system. Each SDR must be configured with one or more IP addresses and
roles.
The SDR communicates with several components, including: SDC (application), SDS (storage) and remote SDR (external).
When an IP address is added to an SDR, the role or roles of the IP address must be defined. The IP address role determines the
component with which that IP address communicates. For example, the application role means that the associated IP address is
used for SDR-SDC communication. By default, all the roles are selected for an IP address.
SDR components must be deployed as resources before you can add them using this procedure.
1. On the menu bar, click Protection > SDRs.
2. Click Add SDR.
3. In the Add SDR dialog box, enter the connection information of the SDR:
a. Enter the SDR name.
b. If necessary, modify the SDR port number.
c. Select the relevant protection domain.
d. Enter the IP address of the SDR.
e. Select one or more roles. By default, all roles are selected.
f. If the SDR has more than one IP address, click Add IP to add more IP addresses and their roles.
g. Click Add SDR to initiate a connection with the peer system.
4. Verify that the operation has finished and was successful, and click Dismiss.
After failover    Reverse/restore, Remove    N/A - data is not replicated    By default, access to the volume is allowed through the original target (system B). It is possible to enable access through the original source (system A).
NOTE: It is recommended to enter the minimum amount of time the feature allows, which is 15 seconds.
Perform a failover
If the system is not healthy, you can fail over the source role to the target system. When the source is compromised, the
host stops sending I/Os to the source volume, replication is stopped, and the target system takes on the source role.
The host on the target then starts sending I/Os to the volume. The target takes on the role of source, and the source takes on the
role of target.
Before performing a failover, stop the application and unmount the file systems at the source (if the source is available). Target
volumes can only be mapped after performing a failover.
There are two options when choosing to fail over an RCG:
● Switchover—This option is a complete synchronization and failover between the source and the target. Application I/Os are stopped at the source, and then the source and target volumes are synchronized. The access mode of the target volumes is changed for the target host, roles are switched, and finally, the access mode of the new source volumes is changed to read/write.
● Latest PIT—The system prevents any writes to the source volumes.
1. On the menu bar, click Protection > RCGs.
2. Click the relevant RCG check box, and then click More Actions > Failover.
3. In the Failover RCG dialog box, select one of the following options: Switchover (Sync & Failover) or Latest PIT: (date
& time).
4. Click Apply Failover.
5. In the RCG Sync & Failover dialog box, click Proceed.
6. Verify that the operation has finished and was successful, and click Dismiss.
7. From the upper right, click the Running Jobs icon and check the progress of the failover.
Reverse replication
When the RCG is in failover or switchover mode, you can reverse or restore the replication. Reversing replication changes the
direction, so that the original target becomes the source. All data at the original source is overwritten by the data at the target
side. This option can be selected from either the source or the target system.
This option is available when the RCG is in failover mode, or when the target system is not available. It is recommended to take a snapshot of the original source, for backup purposes, before reversing the replication.
1. On the menu bar, click Protection > RCGs.
2. Click the relevant RCG check box, and click More Actions > Reverse.
Restore replication
When the replication consistency group is in failover or switchover mode, you can reverse or restore the replication. Restoring
replication maintains the replication direction from the original source and overwrites all data at the target side. This option can be selected from either the source or the target system.
This option is available when an RCG is in failover mode, or when the target system is not available. Dell Technologies
recommends taking a snapshot of the original destination for backup purposes, before restoring replication.
1. On the menu bar, click Protection > RCGs.
2. Click the relevant RCG check box, and click More Actions > Restore.
3. In the Restore Replication RCG dialog box, click Apply.
4. Verify that the operation has finished and was successful, and click Dismiss.
Test Failover
Test a failover using the latest snapshot copies of the source and target systems before performing an actual failover.
Replication is still running and is in a healthy state.
1. On the menu bar, click Protection > RCGs.
2. Click the relevant RCG check box, and then click More Actions > Test Failover.
3. In the RCG Test Failover dialog box, click Start Test Failover.
4. In the RCG Test Failover using target volumes dialog box, click Proceed.
5. Verify that the operation has finished and was successful, and click Dismiss.
Freeze an RCG
The freeze command stops the writing of data from the target journal to the target volume. This option is used while creating a
snapshot or copy of the replicated volume.
1. On the menu bar, click Protection > RCGs.
2. Click the relevant RCG check box, and click More Actions > Freeze Apply.
3. Click Freeze Apply.
4. Verify that the operation has finished and was successful, and click Dismiss.
Unfreeze an RCG
The unfreeze apply command resumes data transfer from the target journal to the target volume.
1. On the menu bar, click Protection > RCGs.
2. Click the relevant RCG check box, and click More Actions > Unfreeze Apply.
3. Click Unfreeze Apply.
4. Verify that the operation has finished and was successful, and click Dismiss.
Delete an RCG
Delete an RCG.
If you no longer require replication of the pairs in an RCG, you can delete it.
1. On the menu bar, click Protection > RCGs.
2. Click the relevant RCG check box, and click More Actions > Delete.
3. In the Delete RCG dialog box, verify that you have selected the desired RCG, and click Delete.
4. Verify that the operation has finished and was successful, and click Dismiss.
The Resource Groups page displays the resource groups and their states in both the Tile and List views.
To switch views, click the Tile View icon or the List View icon.
To view the resource groups based on a particular resource group state, select an option from the Filter By drop-down list.
Alternatively, in the Graphical view, click the graphic in a particular state.
In the Tile view, each square tile represents a resource group and has the status of the resource group at the bottom of
the graphic. The state icon on the graphic indicates the state of the resource group. The components in blue indicate the
component types that are in the deployment. The components that are in gray indicate the component types that are not in the
resource group.
In the List view, the following information displays:
● Status—Status of the resource group.
● Name—Name of the resource group.
● Deployed By—Name of the user who deployed the resource group.
● Deployed On—Date and time when the resource group is deployed.
● Components—Components used in the resource group.
Click the resource group in the List view or Tile view to view the following information about the resource group in the right
pane:
● Resource group name and description to identify the resource group.
● Name of the user who deployed the resource group.
● Date and time when the resource group is deployed.
● Name of the reference template that is used in the resource group.
● Number of resources that are in the resource group for deployment, based on component type (cluster or node).
Click View Details to view more details about the resource group. You can also generate troubleshooting bundles from the
resource group details page.
Click the resource group name in the List view to open the Resource Group Details page.
Click Update Resources to update the firmware of all nodes in the resource group that are not compliant.
Related information
Configuring block storage
Deploying and provisioning
Basic tasks
This section provides basic tasks for resource group management.
d. Indicate Who should have access to the resource group deployed from this template by selecting one of the
following options:
● To restrict access to super users, select Only PowerFlex SuperUser.
● To grant access to super users and some specific lifecycle administrators and drive replacers, select the PowerFlex
SuperUser and specific LifecycleAdmin and DriveReplacer option, and perform the following steps:
i. Click Add User(s) to add one or more LifecycleAdmin or DriveReplacer users to the list displayed.
ii. Select which users will have access to this resource group.
iii. To delete a user from the list, select the user and click Remove User(s).
iv. After adding the users, select or clear the check box next to the users to grant or block access.
● To grant access to super users and all lifecycle administrators and drive replacers, select PowerFlex SuperUser and
all LifecycleAdmin and DriveReplacer.
3. Click Next.
4. On the screens that follow the Deployment Settings page, configure the settings, as needed for your deployment.
5. Click Next.
6. On the Schedule Deployment page, select one of the following options and click Next:
● Deploy Now—Select this option to deploy the resource group immediately.
● Deploy Later—Select this option and enter the date and time to deploy the resource group.
7. Review the Summary page.
The Summary page gives you a preview of what the resource group will look like after the deployment.
8. Click Finish when you are ready to begin the deployment. If you want to edit the resource group, click Back.
PowerFlex Manager displays a message indicating that a new job is being created for this resource group.
PowerFlex Manager creates a configuration file called /etc/sysctl.d/powerflex-so.conf for a storage-only resource group
and a configuration file called /etc/sysctl.d/powerflex-svm.conf for a hyperconverged resource group. PowerFlex Manager
maintains default values in a conf file for resource group deployment settings. If you have an old sysctl.conf file from a
previous release, it will override these settings. If you need to customize these settings, you can do it in sysctl.conf.
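Because an older /etc/sysctl.conf takes precedence over the PowerFlex Manager defaults, it can be useful to check whether any keys overlap. The following is a minimal, illustrative check, run on the node and assuming the file names shown above; it only reports overlapping keys and changes nothing.

import os

def sysctl_keys(path):
    """Return the sysctl keys defined in a sysctl-style file (missing file: empty set)."""
    keys = set()
    if not os.path.exists(path):
        return keys
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith(("#", ";")) and "=" in line:
                keys.add(line.split("=", 1)[0].strip())
    return keys

# powerflex-so.conf applies to storage-only nodes; use powerflex-svm.conf for hyperconverged SVMs.
defaults = sysctl_keys("/etc/sysctl.d/powerflex-so.conf")
legacy = sysctl_keys("/etc/sysctl.conf")
for key in sorted(defaults & legacy):
    print(f"{key} is overridden by /etc/sysctl.conf")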
Related information
Lifecycle
Viewing resource group details
Adding components to a resource group
Build and publish a template
Component types
Removing a resource group
5. To specify the compliance version to use for compliance, select the version from the Firmware and Software Compliance
list or select Use PowerFlex Manager appliance default catalog.
You cannot specify a minimal compliance version when you add an existing resource group, since a minimal compliance version includes only server firmware updates. The compliance version for an existing resource group must include the full set of compliance
update capabilities. PowerFlex Manager does not show any minimal compliance versions in the Firmware and Software
Compliance list.
NOTE: Changing the compliance version might update the firmware level on nodes for this resource group. Firmware on
shared devices is maintained by the global default firmware repository.
6. Specify the resource group permissions under Who should have access to the resource group deployed from this
template? by performing one of the following actions:
● To restrict access to super users, select Only PowerFlex SuperUser.
● To grant access to super users and some specific lifecycle administrators and drive replacers, select the PowerFlex
SuperUser and specific LifecycleAdmin and DriveReplacer option, and perform the following steps:
a. Click Add User(s) to add one or more LifecycleAdmin or DriveReplacer users to the list.
b. Select which users will have access to this resource group.
c. To delete a user from the list, select the user and click Remove User(s).
d. After adding the users, select or clear the check box next to the users to grant or block access.
● To grant access to super users and all lifecycle administrators and drive replacers, select PowerFlex SuperUser and all
LifecycleAdmin and DriveReplacer.
7. Click Next.
8. Choose one of the following network automation types:
● Full Network Automation
● Partial Network Automation
When you choose Partial Network Automation, PowerFlex Manager skips the switch configuration step, which is normally
performed for a resource group with Full Network Automation. Partial network automation allows you to work with
unsupported switches. However, it also requires more manual configuration before a deployment can proceed successfully.
If you choose to use partial network automation, you give up the error handling and network automation features that are
available with a full network configuration that includes supported switches.
In the Number of Instances box, provide the number of component instances that you want to include in the template.
9. On the Cluster Information page, enter a name for the cluster component in the Component Name field.
10. Select values for the cluster settings:
For a hyperconverged or compute-only resource group, select values for these cluster settings:
a. Target Virtual Machine Manager—Select the vCenter name where the cluster is available.
b. Data Center Name—Select the data center name where the cluster is available.
NOTE: Ensure that the selected vCenter has unique names for clusters if there are multiple clusters in the vCenter.
17. To import many general-purpose VLANs from vCenter, perform these steps:
a. Click Import Networks on the Network Mapping page.
PowerFlex Manager displays the Import Networks wizard. In the Import Networks wizard, PowerFlex Manager lists
the port groups that are defined on the vCenter as Available Networks. You can see the port groups and the VLAN IDs.
b. Optionally, search for a VLAN name or VLAN ID.
Related information
Getting started
Lifecycle
Support for full and partial network automation
Migrating vCLS VMs to shared storage
Related information
Upgrading a PowerFlex gateway
10. If you encounter any errors while performing firmware or software updates, you can view the PowerFlex Manager logs for
the resource group. On the Resource Group Details page, click Generate Troubleshooting Bundle.
This action creates a compressed file that contains:
● PowerFlex Manager application logs
● SupportAssist logs
● PowerFlex gateway logs
● iDRAC life-cycle logs
● Dell PowerSwitch switch logs
● Cisco Nexus switch logs
● VMware ESXi logs
● CloudLink Center logs
The logs are for the current resource group only.
Alternatively, you can access the logs from a VMware console, or by using SSH to log in to PowerFlex Manager.
5. Specify the type of maintenance that you want to perform by selecting one of the following options:
● Instant Maintenance Mode enables you to perform short-term maintenance that lasts less than 30 minutes. PowerFlex
Manager does not migrate the data.
● Protected Maintenance Mode enables you to perform maintenance that requires longer than 30 minutes in a safe
and protected manner. When you use protected maintenance mode, PowerFlex makes a temporary copy of the data
so that the cluster is fully protected from data loss. Protected maintenance mode applies only to hyperconverged and
storage-only resource groups.
6. Click Finish.
PowerFlex Manager displays a yellow warning banner at the top of the Resource Groups page. The Service Mode icon
displays for the Deployment State and Overall Resource Group Health, and for the Resource Health for the selected
nodes.
7. When you are ready to leave service mode, click More Actions > Exit Service Mode.
Replacing a drive
You can replace a failed drive in a deployed node. PowerFlex Manager supports the replacement of SSD and NVMe drives for
PowerFlex storage-only nodes and hyperconverged nodes.
Ensure that you have a replacement drive and can access the node.
This capability is available on the PowerFlex appliance and PowerFlex rack offerings only.
PowerFlex Manager supports drive replacement for:
● Nodes that have NVDIMM compression enabled.
● SSD for HBA330 controllers.
The color for the selected drive changes to blue, and the table below the hardware image is updated. The table shows details
about the selected drive. To help you pick the correct drive, PowerFlex Manager provides the iDRAC name for the drive, the
PowerFlex drive name, and the serial number.
8. Optionally, click Launch iDRAC GUI to see iDRAC details about the selected drive before proceeding.
When you launch iDRAC, the iDRAC user interface opens in a different tab. Log in and go to the drive details.
Related information
Viewing PowerFlex system details
3. For a hyperconverged or compute-only resource group, select a storage pool in the Storage Pool drop-down.
The list of storage pools available for selection is filtered to list the storage pools in the selected protection domain.
4. Review the Destination Datastores. The destination datastores are the two heartbeat datastores that PowerFlex Manager
creates automatically when you migrate the vCLS VMs to shared storage.
PowerFlex Manager also creates two resource group volumes and maps these volumes to the destination datastores.
Only two datastores and resource group volumes are created. If you already have existing datastores, PowerFlex Manager
adds the new datastores to the list. If you already have datastores with the same names in the same cluster, PowerFlex
Manager does not create new ones, but simply uses the ones that exist already.
Related information
Adding an existing resource group
Viewing PowerFlex system details
If a node has an NSX-T or NSX-V configuration, PowerFlex Manager removes the Add Resources button under Resource
Actions.
If the PowerFlex gateway used in the resource group is being updated on the Resources page, PowerFlex Manager also
removes the Add Resources button.
If the PowerFlex gateway used in the resource group is being updated on the Resources page, PowerFlex Manager does not
allow you to add a node.
Related information
Component types
Deploying a resource group
3. Click Save.
g. When you are ready to add the selected volumes, click Add.
h. After you have selected the volumes that you want to add, define a template for datastore names in the Datastore
Name Template field and click Next.
The template must include a variable that allows PowerFlex Manager to produce a unique datastore name.
If you want to create multiple volumes that share a common naming pattern:
e. In the Datastore Name Template field, define a template for datastore names.
The template must include a variable that allows PowerFlex Manager to produce a unique datastore name.
f. In the Storage Pool drop-down, choose the storage pool where the volume will reside.
g. Select the Enable Compression check box to take advantage of the PowerFlex NVDIMM compression feature.
h. In the Volume Size (GB) field, select the size in GB. The minimum size is 8 GB and the value you specify must be
divisible by eight.
i. In the Volume Type field, select thick or thin.
A thick volume provides a larger amount of storage in advance, whereas a thin volume provides on-demand storage and
faster setup and startup times.
If you enable compression on a hyperconverged or storage-only resource group with the granularity of the storage pool
set to fine, the only option for Volume Type is thin. This is the case regardless of whether you deploy a compressed or
non-compressed volume.
4. Optionally, click Add volume again to add another volume section. Then, provide the required information for that section.
5. Click Next once you have included information about all of the volumes you want to add.
6. On the Summary screen, review the volume details to be sure that everything looks correct.
If you added existing volumes, you can click View Volumes to review the list of volumes previously selected.
7. Click Finish.
The resource group moves to the In Progress state and the new volume icons appear on the Resource Group Details
page. You may see multiple volume components while the add operation is still in progress. Once the operation is complete,
you will see just one volume component with the count updated.
After the deployment completes successfully, you can click View Volumes in the Storage list on the Resource Group
Details page to search for volumes that are part of the resource group.
Related information
Resize a volume
Viewing and selecting volumes
Resize a volume
After adding volumes to a resource group, you can resize the volumes.
For a storage-only resource group, you can increase the volume size. For a VMware ESXi compute-only resource group, you can
increase the size of the datastore that is associated with the volume. For a hyperconverged resource group, you can increase
the size of both the volume and the datastore.
If you resize a volume in a storage-only resource group, you must update the datastore size in the corresponding VMware ESXi
compute-only resource group. The datastore size cannot exceed the size of the volume.
1. On the Resource Groups page, click the volume component and choose Volume Actions > Resize.
2. Choose the volume that you want to resize:
a. Click Select Volume.
b. Enter a volume or datastore name search string in the Search Text box.
c. Optionally, apply additional search criteria by specifying values for the Size, Type, Compression, and Storage filters.
d. Click Search.
PowerFlex Manager updates the results to show only those volumes that satisfy the search criteria. If the search returns more than 50 volumes, you must refine the search criteria so that no more than 50 volumes are returned.
e. Select the row for the volume you want to resize.
f. Click Apply.
3. Update the sizing information:
a. In the New Volume Size (GB) field, specify a value that is greater than the current volume size.
b. Optionally, select Resize Datastore to increase the size of the datastore.
If you are resizing a volume for a storage-only resource group, enter a value in the New Volume Size (GB) field. Specify a
value that is greater than the current volume size. Values must be in multiples of eight, or an error occurs.
If you are resizing a volume for a compute-only resource group, review the Volume Size (GB) field to see if the volume size
is greater than Current Datastore Size (GB). If it is, PowerFlex Manager expands the datastore size.
4. Click Save.
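Because volume sizes must be in multiples of eight, it can help to round a requested size up before entering it. A minimal illustrative helper, not part of PowerFlex Manager:

def round_up_to_multiple_of_eight_gb(requested_gb: int) -> int:
    """Round a requested volume size up to the nearest multiple of 8 GB (minimum 8 GB)."""
    return max(8, ((requested_gb + 7) // 8) * 8)

print(round_up_to_multiple_of_eight_gb(100))  # 104
print(round_up_to_multiple_of_eight_gb(96))   # 96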
Related information
Adding volumes to a resource group
5. Select one or more nodes in the Available Nodes list and click the arrow icon (>>) to move these items to the Selected
Nodes list.
The Available Nodes list shows the nodes in the resource group, along with the name, IP address, MDM role, fault set (if
applicable), and operating system for each node. Only CentOS nodes are selectable. The check boxes for
SLES nodes are disabled.
Not all nodes in the deployment will be displayed unless all the nodes have CentOS as the operating system type. If you
migrate from 3.8.x to 4.6.x, you will likely have only CentOS nodes in the resource group. In this case, on the very first
attempt at operating system migration, all the nodes will be CentOS. If you choose to add a SLES node before attempting
the operating system replacement, the resource group will have a mix of operating systems on the nodes (CentOS and SLES
nodes). Nonetheless, the SLES nodes will not be displayed in the Replace Node OS wizard.
NOTE: The only scenario where SLES nodes will be displayed along with CentOS nodes is when the operating system
migration operation fails on a previous attempt. To allow you to retry the operation on the failed nodes, all nodes will be
displayed, irrespective of the operating system type.
PowerFlex Manager shows all the SDSs in a storage-only resource group and all the SVMs for a hyperconverged resource
group. For a hyperconverged resource group, the operating system field shows the operating system for the SVM.
You can optionally add a filter to control which nodes appear in the list.
NOTE: Changing the firmware repository might update the firmware level on nodes for this resource group. The
global default firmware repository maintains firmware on shared devices.
6. Specify the permissions for this resource group under Who should have access to the resource group deployed from
this template? by performing one of the following actions:
● To restrict access to super users, select Only PowerFlex SuperUser.
● To grant access to super users and some specific lifecycle administrators and drive replacers, select the PowerFlex
SuperUser and specific LifecycleAdmin and DriveReplacer option, and perform the following steps:
a. Click Add User(s) to add one or more LifecycleAdmin or DriveReplacer users to the list displayed.
b. Select which users will have access to this resource group.
c. To delete a user from the list, select the user and click Remove User(s).
d. After adding the users, select or clear the check box next to the users to grant or block access.
● To grant access to super users and all lifecycle administrators and drive replacers, select PowerFlex SuperUser and all
LifecycleAdmin and DriveReplacer.
7. Click Save.
CloudLink View the following information about the CloudLink Centers participating in the resource
group:
● Health
● Hostname
● Management IP
● Machine Group
● Keystore
When a resource group is deployed, if CloudLink has clustered CloudLink Centers,
PowerFlex Manager shows all CloudLink Centers on the Resource Group Details page.
PowerFlex Manager ensures that the resource group deployment succeeds as long as at
least one CloudLink Center in the cluster is working.
Add resources
Add Resources allows you to add nodes, volumes, and networks to a resource group.
More actions
Under More Actions, you can perform the following tasks:
Resource actions
Under Resource Actions, you can perform the following tasks:
● Deployed By—Displays the name of the user who deployed the resource group.
● Deployed On—Displays the date and time when the resource group is deployed.
● Reference Template—Displays the name of the reference template that is used in the resource group.
NOTE: For existing resource groups, the name displays as User Generated Template and not a template name from
the inventory.
Recent activity
Recent Activity displays the component deployment status and information about the currently deployed resource group.
You can also click the Port View tab to display port view details.
If you want to see the current firmware or software repository that is in use, look at Target Version. To change the compliance
version, click Change Target.
If you select a minimal compliance version for a resource group, PowerFlex Manager puts the resource group in lifecycle mode
and restricts the actions that can be performed. In lifecycle mode, the resource group supports monitoring, service mode, and
compliance upgrade operations only. All other resource group operations are blocked.
c. When you are ready to add the selected volumes, click Add.
6. If you are selecting a volume for a resize operation, you can select the volume that you want to resize and click Apply.
Related information
Adding volumes to a resource group
If the node has an NSX-T or NSX-V configuration, you can remove the deployment information for a resource group, but not
delete the resource group entirely. PowerFlex Manager also does not allow you to delete a resource group if the PowerFlex
gateway used in the resource group is currently being updated on the Resources page.
Standard users can delete only the resource groups that they have deployed.
To remove a resource group, perform the following steps:
1. On the menu bar, click Lifecycle > Resource Groups.
2. Select the resource group.
3. On the Resource Group Details page, under More Actions, click Remove Resource Group.
4. In the Remove Resource Group dialog box, select the Resource group removal type:
● Delete Resource Group makes configuration changes to the nodes, switch ports, virtual machine managers, and
PowerFlex to unconfigure those components. Also, it returns the components to the available inventory.
● Remove Resource Group removes deployment information, but does not make any configuration changes to the nodes,
switch ports, virtual machine managers, and PowerFlex. Also, it returns the components to the available inventory.
5. If you choose Remove Resource Group, perform the following steps:
a. To keep the nodes in the inventory, select Leave nodes in PowerFlex Manager inventory and set state to and select
the state:
● Managed
● Unmanaged
● Reserved
b. To remove the nodes, select Remove nodes from the PowerFlex Manager inventory.
c. Click Remove.
6. If you choose Delete Resource Group, perform the following steps:
a. Select Delete Clusters(s) and Remove from vCenter to delete and remove the clusters from vCenter.
b. Select Remove Protection Domain and Storage Pools from PowerFlex to remove the protection domain and
storage pools that are created during the resource group deployment.
If you select this option, you must select the target PowerFlex gateway. The PowerFlex gateway is not removed.
PowerFlex Manager removes only the protection domain and storage pools that are part of the resource group. If
multiple resource groups are sharing a protection domain, you might not want to delete the protection domain.
For a compression enabled resource group, PowerFlex Manager deletes the acceleration pool and the DAX devices when
you delete the resource group.
c. Select Delete Machine Group and remove from CloudLink Center to clean up the related components in CloudLink
Center.
d. If you are certain that you want to proceed, type DELETE RESOURCE GROUP.
e. Click Delete.
Related information
Deploying a resource group
Related information
Deploying and provisioning
Basic tasks
This section provides basic tasks for template management.
Clone a template
The Clone feature allows you to copy an existing template into a new template. A cloned template contains the components
that existed in the original template. You can edit it to add additional components or modify the cloned components.
For most environments, you can clone one of the sample templates that are provided with PowerFlex Manager and edit as
needed. Choose the sample template that is most appropriate for your environment. If you already created a template using the
Add a template option, skip this task.
3. In the Clone Template dialog box, enter a template name in the Template Name box.
4. Select a template category from the Template Category list. To create a template category, select Create New Category.
5. In the Template Description box, enter a description for the template.
6. To specify the version to use for compliance, select the version from the Firmware and Software Compliance list or
choose Use PowerFlex Manager appliance default catalog.
7. Indicate Who should have access to the resource group deployed from this template by selecting one of the following
options:
● To restrict access to super users, select Only PowerFlex SuperUser.
● To grant access to super users and some specific lifecycle administrators and drive replacers, select the PowerFlex
SuperUser and specific LifecycleAdmin and DriveReplacer option, and perform the following steps:
a. Click Add User(s) to add one or more LifecycleAdmin or DriveReplacer users to the list.
b. Select which users will have access to this resource group.
c. To delete a user from the list, select the user and click Remove User(s).
d. After adding the users, select or clear the check box next to the users to grant or block access.
● To grant access to super users and all lifecycle administrators and drive replacers, select PowerFlex SuperUser and all
LifecycleAdmin and DriveReplacer.
8. Click Next.
9. On the Additional Settings page, provide new values for the Network Settings, PowerFlex Gateway Settings, and
Node Pool Settings.
NOTE: For more information about different networks available for PowerFlex file deployment software, see the
PowerFlex file network enhancement and flexibility topic in related information.
Related information
Configuring block storage
Sample templates
This topic lists the sample templates that are provided with PowerFlex Manager.
Add a template
The Create feature allows you to create a template, clone the components of an existing template into a new template, or
import a pre-existing template.
For most environments, you can clone one of the sample templates that are provided with PowerFlex Manager and edit as
needed. Choose the sample template that is most appropriate for your environment.
1. On the menu bar, click Templates.
2. On the Templates page, click Create.
3. In the Create dialog box, select one of the following options:
● Clone an existing PowerFlex Manager template
● Upload External Template
● Create a new template
If you select Clone an existing PowerFlex Manager template, select the Category and the Template to be Cloned.
The components of the selected template are copied into the new template.
For PowerFlex software file storage, ensure that you select the PowerFlex File - SW Only template.
4. Enter a Template Name.
5. From the Template Category list, select a template category. To create a category, select Create New Category from the
list.
6. Enter a Template Description (optional).
7. To specify the version to use for compliance, select the version from the Firmware and Software Compliance list or
choose Use PowerFlex Manager appliance default catalog.
8. Specify the resource group permissions for this template under Who should have access to the resource group
deployed from this template? by performing one of the following actions:
● To restrict access to super users, select Only PowerFlex SuperUser.
● To grant access to super users and some specific life-cycle administrators and drive replacers, select the PowerFlex
SuperUser and specific LifecycleAdmin and DriveReplacer option, and perform the following steps:
a. Click Add User(s) to add one or more LifecycleAdmin or DriveReplacer users to the displayed list.
b. Select which users will have access to this resource group.
c. To delete a user from the list, select the user and click Remove User(s).
d. After adding the users, select or clear the check box next to the users to grant or block access.
● To grant access to super users and all life-cycle administrators and drive replacers, select PowerFlex SuperUser and all
LifecycleAdmin and DriveReplacer.
9. Click Next.
10. In Additional Settings provide the new values for the following settings:
In the Number of Instances box, provide the number of component instances that you want to include in the template.
4. If you are adding a cluster, in the Select a Component box, choose one of the following cluster types:
● PowerFlex Cluster
● VMware Cluster
● PowerFlex File Cluster
5. If you are adding a VM, in the Select a Component box, choose one of the following VM types:
● CloudLink Center
● PowerFlex Gateway
6. Under Related Components, perform one of the following actions:
● To associate the component with all existing components, click Associate All.
● To associate the component with only selected components, click Associate Selected and then select the components
to associate.
Based on the component type, specific settings and properties appear automatically that are required and can be edited.
7. Click Save to add the component to the template builder.
8. Repeat steps 1 through 6 to add additional components.
9. After you finish adding components to your template, click Publish Template.
A template must be published to be deployed. It remains in draft state until published.
After publishing a template, you can use the template to deploy a resource group on the Resource Groups page.
Related information
Support for full and partial network automation
Deploying a resource group
Edit a template
You can edit an existing template to change its draft state to "published" for deployment or to modify its components and their
properties.
1. On the menu bar, click Templates.
2. Open a template, and click Modify Template.
3. Make changes as required to the settings for components within the template.
PowerFlex Manager facilitates sending the active RCM version to the Data Items portal. The compliance file includes the
RCM and intelligent catalog (IC). When multiple resource groups run on different compliance files besides the default
PowerFlex Manager compliance file, all the active and default RCM and IC versions are sent to the Data Items portal. If
PowerFlex Manager is not managing any resource groups, then the default RCM of PowerFlex Manager is sent to the data
items section of the Embedded SupportAssist Enabler. By default, this information is sent every Saturday at 02:00 AM UTC.
NOTE: If an RCM associated with the template is modified, a wrench icon with the text, Modified, is displayed.
However, if the update file is moved or deleted, the wrench icon with the text, Needs Attention, is displayed.
a. To edit PowerFlex cluster settings, select the PowerFlex Cluster component and click Modify. Make the necessary
changes and click Save.
b. To edit the VMware cluster settings, select the VMware Cluster component and click Modify. Make the necessary
changes, and click Save.
c. To edit node settings, select the Node component and click Modify. Make the necessary changes, and click Save.
4. Optionally, click Publish Template to make the template ready for deployment.
Related information
Component types
Related information
Cluster component settings
The second interface ports 1 and 2 are automatically replicated to the first interface. This replication applies to sample
templates as well. If you manually create a template from scratch and choose the networks for the interfaces, the second
interface's port 1 and 2 are not automatically replicated to the first interface.
3. Enter the network VLANs for each port.
a. Click Choose Networks for a port.
b. To add one or more networks to the port, select Add Networks to this Port, then click the check box for each network
you want to add from the Available Networks list. Alternatively, click the check box in the upper left corner next to the
Name label to select all the available networks.
If you want to filter the list by network type, select a Network Type, then enter a name or VLAN ID to search.
Click >> to move the selected items to the Selected Networks list on the right.
c. To mirror network settings from another port for which you have already chosen the network VLANs, select Mirror this
Port with Another Port. Then, select the other interface and port from which you want to mirror this port.
d. Click Save.
4. To view the list of nodes that match the network configuration parameters, click Validate Settings.
The list of nodes is filtered according to the target boot device and NIC type settings specified.
When you enable PowerFlex settings for the node, the Validate Settings page filters the list of nodes according to the
supported storage types (NVMe, All flash, and HDD). Within the section for each storage type, the nodes are also sorted by
health, with the healthy (green) nodes displayed first and the critical (red) nodes displayed last.
NOTE: If you select the same network on multiple interface ports or partitions, PowerFlex Manager creates a team or bond
on systems with the VMware ESXi operating system. This configuration enables redundancy.
1. On the node component page, under Network Settings, click Enabled under Static Routes.
2. Click Add New Static Route.
3. Enter the following information for the static route:
● Source Network—Select the PowerFlex data network or replication network that is the source.
If you add or remove a network for a port, the Source Network list still shows the old networks. In order to see the
changes, you must save the node settings and edit the node again.
● Destination Network—Select the PowerFlex data network or replication network that is the destination for the static
route.
● Gateway—Enter the IP address for the gateway.
Related information
Component types
Export a template
To export a template:
1. On the menu bar, click Templates.
2. Select the template that you want to export.
3. Click Export.
4. In the Export Template to ZIP File window, enter values as follows:
a. Enter a name for the template file in the File Name field.
b. If you have set an encryption password, select Use Encryption Password from Backup Setting to use that password.
To set an encryption password, clear this option.
c. If you clear Use Encryption Password from Backup Setting, two additional fields display. Enter a new password in
the Set File Encryption Password field and enter the password again to confirm it.
5. Click Export to download the file. Select a location to save the file and click OK.
Import a template
The Import Template feature allows you to import the components of an existing template and its component configurations
into a template. For example, you can create a template that defines a specific cluster and node topology and import
this template definition into another template. After importing, you can modify the component properties of the imported
components.
Editing an imported template does not affect the original template.
As an alternative to this procedure, you can also start from a new template and import an existing template as you create it. This
is a better approach to use. To do this, select Create, then choose Clone an existing PowerFlex Manager template in the wizard.
To import a template, perform the following steps:
1. On the menu bar, click Templates.
2. On the Templates page, select the template into which you want to import an existing template and click Edit in the right pane.
3. On the Template Builder page, in the right pane, click Import Template.
4. In the Import Template dialog box, select a specific template from the Select a template list and click Import.
9. Indicate Who should have access to the resource group deployed from this template by selecting one of the following
options:
● To restrict access to super users, select Only PowerFlex SuperUser.
● To grant access to super users and some specific lifecycle administrators and drive replacers, select the PowerFlex
SuperUser and specific LifecycleAdmin and DriveReplacer option, and perform the following steps:
a. Click Add User(s) to add one or more LifecycleAdmin or DriveReplacer users to the list displayed.
b. Select which users will have access to this resource group.
c. To delete a user from the list, select the user and click Remove User(s).
d. After adding the users, select or clear the check box next to the users to grant or block access.
● To grant access to super users and all lifecycle administrators and drive replacers, select PowerFlex SuperUser and all
LifecycleAdmin and DriveReplacer.
10. Click Upload and Continue.
The Additional Settings page appears.
11. On the Additional Settings page, provide new values for the Network Settings, OS Settings, Cluster Settings,
PowerFlex Gateway Settings, and Node Pool Settings.
12. Click Finish.
NOTE: For SSD/NVMe drives, upload a capacity-based license. For SED drives, upload an SED-based license.
7. Select the VM and click Edit > Continue (by default, the number of CloudLink instances is two and PowerFlex Manager
supports up to three instances).
a. Under VM Settings select the Datastore and Network from the drop-down list.
b. Under Cloudlink Settings select the following:
i. For Host Name Selection, either select Specify At Deployment Time to manually enter at deployment time or
Auto Generate to have PowerFlex Manager generate the name.
ii. Enter the vault passwords.
NOTE: Other details such as OS credentials, NTP, and secadmin credentials are auto populated.
8. Under Additional Cloudlink Settings, you can choose either or both of the following settings:
● Configure Syslog Forwarding
a. Select the check box to configure syslog forwarding.
b. For Syslog Facility, select the syslog remote server from the list.
● Configure Email Notifications
a. Select the check box to configure email alerts.
b. Specify the IP address of the email server.
c. Specify the port number for the email server. The default port is 25. Enter the port numbers in a comma-separated list, with values between 1 and 65535.
d. Specify the email address for the sender.
e. Optionally, specify the username and password.
9. Click Save.
10. Click Publish Template and click Yes to confirm.
11. In the Deploy Resource Group wizard, do the following:
a. Select the published template from the drop-down list, and enter Resource Group Name and description.
b. Select who should have the access to the resource group and click Next.
c. Provide Hostname and click Next.
d. Select Deploy Now or Schedule deployment and click Next.
Deploy a storage-only or hyperconverged resource group that includes a PowerFlex cluster that will be associated with the
PowerFlex file cluster. When you deploy the PowerFlex file cluster, the control volumes that are needed for file enablement will
be added automatically.
1. On the menu bar, click Templates.
2. On the Templates page, click Create.
3. In the Add a Template wizard, click Clone an existing PowerFlex Manager template.
4. For Category, select Sample Templates. For Template to be Cloned, select PowerFlex File or PowerFlex File - SW
Only. Click Next.
5. On the Template Information page, provide the template name, template category, template description, firmware and
software compliance, and who should have access to the resource group deployed from this template. Click Next.
6. On the Additional Settings page, enter new values for the Network Settings, PowerFlex Gateway Settings, OS
Settings, and Node Pool Settings.
For the Network Settings in a PowerFlex file cluster template, you must provide a NAS Management network and two
NAS Data networks.
For the OS Settings, you must choose Use Compliance File Linux Image.
For the PowerFlex Gateway Settings, select block-legacy-gateway.
7. Click Finish.
8. After creating the template, click Templates, select the cloned template, and click Modify Template.
9. Edit the PowerFlex cluster, PowerFlex file cluster, and node components as needed and click Save.
10. Publish the template and deploy the resource group.
NAS volumes are shown during the deployment, and then compressed to one icon with a number based on the number of nodes.
For example, you might see the number 4 for a two-node NAS compute-only deployment. Each time you expand the deployment
by adding another node, the number is incremented to show that another volume is added.
Related information
Configuring file storage
Storage Pool <n> One or more storage pools provided by the PowerFlex cluster
deployed as part of the storage-only or hyperconverged
resource group.
This setting will be empty if you have not yet deployed the
storage-only or hyperconverged resource group.
The control volumes are special NAS cluster volumes that cannot be
deleted or modified. These control volumes are hidden from view in
the management software appliance.
Related Components Choose All Components.
OS Settings: OS Image For a PowerFlex file cluster, you must choose Use Compliance File
Linux Image.
OS Settings: PowerFlex Role Choose Compute Only.
OS Settings: Enable PowerFlex File This option must be selected for NAS.
OS Settings: Switch Port Configuration Choose Port Channel (LACP Enabled) or Trunk Port. Port
Channel (LACP Enabled) is the preferred option.
Delete a template
The Delete Template option allows you to delete a template from PowerFlex Manager.
1. On the menu bar, click Templates.
2. Select the template that you want to delete. Click More Actions > Delete Template in the right pane.
3. Click Yes to confirm the deletion.
Compliance Indicates if the resource firmware and software compliance state is Compliant,
Non-compliant, Update required, or Update failed.
Compliance is determined by the firmware/software version for the selected
resource, based on the default compliance version. Click the compliance status to
view the compliance report.
Management IP Indicates the resource IP address. Click the IP address to open the Element
Manager.
Deployment Status Indicates if the resource deployment status is In Use, Not in use, Available,
Updating resource, or Pending Updates.
Click the deployment status to view resource group details.
To filter the resources that display, click the toggle filters icon on the Resources page.
On the Resources page, you can also perform the following tasks:
View detailed information about a resource: Select the resource. In the right pane, click View Details.
View a firmware and software compliance report for a resource: Select the resource. In the right pane, click the link corresponding to the Compliance field.
Update password for a resource: Select the resource and click Update Password.
Basic tasks
This section provides basic tasks for resource management.
b. Enter the management IP address (or hostname) of the resources that you want to discover in the IP/Hostname Range
field.
To discover one or more nodes by IP address, select IP Address and provide a starting and ending IP address.
To discover one or more nodes by hostname, select Hostname and identify the nodes to discover in one of the following
ways:
● Enter the fully qualified domain name (FQDN) with a domain suffix.
● Enter the FQDN without a domain suffix.
● Enter a hostname search string that includes one of the following variables:
If you use a variable, you must provide a start number and end number for the hostname search.
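As an illustration of how a hostname search string expands with a start and end number, the sketch below uses a hypothetical pattern and a placeholder token; the actual variable syntax is the one listed in the discovery wizard and is not reproduced here.

def expand_hostname_range(pattern, token, start, end):
    """Expand a hostname search string into individual hostnames.
    'token' stands in for whichever variable the wizard supports (illustrative only)."""
    return [pattern.replace(token, str(n)) for n in range(start, end + 1)]

# Hypothetical pattern and range; substitute the wizard's actual variable syntax.
print(expand_hostname_range("pfmc-node-NUM", "NUM", 1, 3))
# ['pfmc-node-1', 'pfmc-node-2', 'pfmc-node-3']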
c. Select one of the following options from the Resource State list:
Unmanaged Select this option to monitor the health status of a device and the firmware
version compliance only. The discovered resources are not available for a
firmware upgrade or deploying resource groups by PowerFlex Manager. This
option is the default for the node resource type.
If you have not yet uploaded a license, PowerFlex Manager is configured
for monitoring and alerting only. In this case, Unmanaged is the only option
available.
Reserved Select this option to monitor firmware version compliance and upgrade
firmware. The discovered resources are not available for deploying resource
groups by PowerFlex Manager.
d. To discover resources into a selected node pool instead of the global pool (default), select an existing node pool or create one from the Discover into Node Pool list. To create a node pool, click the + sign to the right of the Discover into
Node Pool box.
e. Select an existing credential or create one from the Credentials list to discover resource types. To create a credential,
click the + sign to the right of the Credentials box. PowerFlex Manager maps the credential type to the type of
resource that you are discovering. The credential types are as follows:
● Element Manager
● Node (Hardware/Software Management)
● Switch
● VM Manager
● PowerFlex Gateway
● Node (Software Management)
● PowerFlex System
The default node (Hardware/Software Management) credential type is Dell PowerEdge iDRAC Default.
f. If you want PowerFlex Manager to automatically reconfigure the iDRAC nodes it finds, select the Reconfigure
discovered nodes with new management IP and credentials check box. This option is not selected by default,
because it is faster to discover the nodes if you bypass the reconfiguration.
g. To have PowerFlex Manager automatically configure iDRAC nodes to send alerts to PowerFlex Manager, select the Auto
configure nodes to send alerts to PowerFlex Manager check box.
4. Click Next.
You might have to wait while PowerFlex Manager locates and displays all the resources that are connected to the managed
networks.
To discover multiple resources with different IP address ranges, repeat steps 2 and 3.
5. On the Discovered Resources page, select the resources from which you want to collect inventory data and click Finish.
The discovered resources are listed on the Resources page.
Related information
Getting started
Configuring block storage
Resource health status
Compliance status
b. Enter the management IP addresses of the LIA nodes in the MDM Cluster IP Address field. You must provide the IP
addresses for all the nodes in a comma-separated list. The list should include a minimum of three nodes and a maximum
of five nodes.
If you forget to add a node, the node will not be reachable after discovery. To fix this problem, you can rerun the
discovery later to provide the missing node. You can enter just the one missing node, or all the nodes again. If you enter
IP addresses for any nodes that were previously discovered, these nodes are ignored on the second run.
c. For the System ID, specify the same System ID provided when you created the MDS cluster.
d. Select an existing credential or create a new one from the Credentials list. The credential must be a PowerFlex
Management System credential. Be sure to provide the LIA password that was used for the original setup. The LIA
password is required for the mTLS configuration.
4. Click Next.
5. On the Summary page, click Finish.
After you complete the discovery, you should see a PowerFlex System resource on the Resources page. The OS
Hostname and Asset/Service tag are set to powerflex-mds. The discovery process also performs a bootstrap process
that generates the required certificates and places them on the MDS nodes. Once you have completed the steps to discover
a PowerFlex system, you must use the login certificate (not a username and password) to log in to the MDMs.
6. SSH to the two SVMs associated with the powerflex-mds system. Run scli --add_certificate --certificate_file /opt/emc/scaleio/mdm/cfg/mgmt_CA.pem.
NOTE: You can select one of the following options from the Filter By list.
○ Logical Disks
○ Physical Disks
To see which disks are self-encrypting drives (SED), look at the Security Status. The value Encryption Capable
indicates that the disk is an SED. The value Not Capable indicates that the disk is not an SED.
On the Resources page for the node, filter by "Physical Disks" to view the details of the Vault Optimized Storage
Solution (VOSS) under the PCIe Extender section.
Related information
Reconfiguring MDM roles
Migrating vCLS VMs to shared storage
Related information
Updating firmware and software
Exporting a compliance report for all resources
6. If you are updating a PowerFlex gateway, type UPDATE POWERFLEX to confirm that you are ready to proceed with the
update.
7. Click Finish.
Related information
Viewing resource group details
Viewing a compliance report for a resource
Upgrading a PowerFlex gateway
CAUTION: Check the Alerts page before performing the upgrade. Look for major and critical alerts related to
PowerFlex Block and File to be sure the MDM cluster is healthy before proceeding.
1. Choose the PowerFlex gateway from the Resources page.
You cannot upgrade more than one PowerFlex gateway at a time.
2. Click Update Resources.
3. On the Update Details page, check the Needs Attention section to see whether any of the nodes must be reconfigured
before upgrade. Select any nodes that you want to reconfigure. To select all nodes, click the box to the left of SDS Name.
4. Click Next.
5. On the Summary page, choose Allow PowerFlex Manager to perform non-disruptive updates now or Schedule
non-disruptive updates to run later.
Specify the type of update you want to perform by selecting one of the following options:
● Instant Maintenance Mode enables you to perform updates quickly. PowerFlex Manager does not migrate the data.
● Protected Maintenance Mode enables you to perform updates that require longer than 30 minutes in a safe and
protected manner.
6. If you only selected a subset of the nodes for reconfiguration, confirm the reconfiguration by typing RECONFIGURE NODES.
Otherwise, confirm the update action by typing UPDATE POWERFLEX.
7. Click Finish.
8. Go to the Resource Groups page to update any resource groups that are not in compliance with the new version of
PowerFlex.
The PowerFlex gateway upgrade process performs some health prechecks to confirm that the resource group is healthy before
the upgrade. If the resource group is not healthy, the PowerFlex gateway upgrade is not successful.
After a successful upgrade, the PowerFlex gateway should be in compliance with the new target version. However, the
nodes in the resource group may require additional maintenance. In this case, you must update any resource groups that are
noncompliant from the Resource Groups page.
When you initiate a PowerFlex gateway update, PowerFlex Manager upgrades both the Gateway RPM and the software
components that are non-compliant.
Related information
Updating a resource group with new firmware and software
Updating firmware and software
VMs
● All CloudLink Center VMs must have at least 6 GB of vRAM, 4 vCPUs, and 64 GB disk space before you upgrade.
NOTE: If a CloudLink Center VM does not have enough resources for the upgrade, a System does not meet
requirements error occurs.
● CloudLink Center cluster servers must be online and must be synchronized with each other.
Backups
Dell Technologies recommends that you back up CloudLink Center before starting the upgrade process.
CloudLink licenses
Before starting the upgrade process, check that the licenses are valid.
7. Run an inventory on the second CloudLink Center after you upgrade the first CloudLink Center to ensure that all CloudLink
Centers are on the same version.
8. Repeat steps 1 through 7 for the second node.
NOTE: This procedure only upgrades the CloudLink Center. CloudLink Agents are upgraded during the upgrade of
their respective resource groups. CloudLink Agent 7.1.x is permitted to connect to CloudLink Center 8.1.x and unlock
disks or volumes. However, you cannot perform new encryption or decryption of existing disks. This is based on default
deployments by PowerFlex Manager. If your deployment has a custom configuration, contact Dell Technologies Support
prior to the upgrade.
Prerequisites
● Administrator credentials for VMware vCenter where the CloudLink VMs are hosted.
● Secadmin credentials for the CloudLink deployment.
● CloudLink Center console password to configure for a new deployment.
● CloudLink Center 8.1.x OVA. Download from Dell Technologies support site or from the packaged compliance file.
● Hostnames (not fully qualified domain names), IP addresses, netmasks, and gateway for all CloudLink Centers.
● DNS server IP address.
19. Repeat this process for the remaining cluster members (up to four).
Related information
Resource health status
Compliance status
2. To view device details such as hostname, model name, and management IP address, or information about associated devices,
click the specific ports or devices.
3. To view information about intermediate devices in port view, ensure that the devices are discovered and available in the
inventory. Sometimes, connectivity cannot be determined for an existing resource group because the switches have not yet
been discovered. In this case, you see only the node in port view, but you do not see connectivity information. You can
correct this by going back and discovering the switches, and updating the resource group again.
PowerFlex Manager cannot discover interface cards that do not have integrated LLDP support (such as Intel X520).
4. To filter information based on the connectivity, select an option from the Display Connections list. Show All Connections
is the default option.
● Severity: Indicates the severity of the check, based on the result. The severity levels are INFO, WARNING, HIGH, or CRITICAL. If the result is PASS, the severity is INFO. If the result is FAIL, the severity depends on the type of check. PowerFlex Manager supports only CRITICAL checks.
● Details: Provides a description of the check that was run.
● Affected Resources: Gives a list of the IP addresses or unique identifiers of resources that are impacted by the check. The list of affected resources helps with troubleshooting.
Related information
Configuration checks
Importing networks
You can import a large number of general-purpose VLANs from vCenter.
This capability is available on the PowerFlex appliance and PowerFlex rack offerings only.
After importing networks, you can add them to templates or resource groups. You can also import networks when you add an
existing resource group.
1. On the menu bar, click Resources.
2. On the All Resources tab, click the VMware vCenter from which you want to import networks.
3. In the right pane, click Import Networks.
5. Click Next.
6. On the Select Credentials page, create a credential with a new password or change to a different credential.
a. Open the iDRAC, OS Password, SVM Password, or MVM Password object under the Type column to see credential
details for each node you selected on the Resources page.
The SVM Password and MVM Password sections do not appear if there is nothing to show for SVMs or MVMs.
b. To create a credential that has the new password, click the plus sign (+) under the Credentials column.
Specify the Credential Name and the User Name for which you want to change the password. Enter the new password
in the Password and Confirm Password fields.
c. To modify the credential, click the pencil icon for the nodes under the Credentials column and select a different
credential.
d. Click Save.
7. Click Finish.
8. Click Yes to confirm.
PowerFlex Manager starts a new job for the password update operation, and a separate job for the device inventory. The
node operating system, SVM, and MVM operating components are updated only if PowerFlex Manager is managing a cluster
with these components. If PowerFlex Manager is not managing a cluster with these components, these components are not
displayed and their credentials are not updated. Credential updates for iDRAC are allowed for managed and reserved nodes only.
Unmanaged nodes do not provide the option to update credentials.
c. To modify the credential, click the pencil icon for one of the nodes under the Credentials column and select a different
credential.
d. Click Save.
7. Click Finish.
8. Click Yes to confirm.
PowerFlex Manager starts a new job for the password update operation, and a separate job for the device inventory. If
PowerFlex Manager is managing a cluster for any of the selected PowerFlex gateway components, it updates the credentials for
the Gateway Admin User and Gateway OS User, as well as any related credentials, such as the LIA and lockbox credentials. If
PowerFlex Manager is not managing the cluster, it only updates the credentials for the Gateway Admin User and Gateway OS
User.
Related information
Enabling SupportAssist
Events
An event is a notification that something happened in the system. An event happens at a single point in time and has a single
timestamp. An event may be unique or be part of a series.
Each event message is associated with a severity level. The severity indicates the risk (if any) to the system, in relation to the
changes that generated the event message.
PowerFlex Manager stores up to 3 million events or events that are up to 13 months old. Once this threshold is exceeded,
PowerFlex Manager automatically purges the events to free up space. The threshold is reviewed daily.
NOTE: An event is published each day that lists the events which have been removed that day.
Event timestamps use the format YYYY-MM-DD hh:mm:ss.sss.
Viewing an event
You can view an event in PowerFlex Manager.
1. Go to Monitoring > Events.
The Events page opens which displays a list of system events. You can add one or more filters to control the list of events
displayed.
2. Select the event that you want to view.
NOTE: You can reset the default and advanced field columns by clicking the Reset Columns icon.
Alerts
An alert is a state in the system which is usually on or off. Alerts monitor serious events that require user attention or action.
When an alert is no longer relevant or is resolved, the system automatically clears the alerts with no user intervention. This
action ensures that cleared alerts are hidden from the default view so that only relevant issues are displayed to administrators.
Cleared alerts can be optionally displayed through table filtering options. Alerts can also be acknowledged, which removes the
alert from the default view. Acknowledging an alert does not indicate that the issue is resolved. You can view acknowledged alerts
through the table-filtering options.
● Major: A major level informs you that action is required soon. Example: The MDM certificate is about to expire.
● Critical: A critical level informs you that the system requires immediate attention. Example: The MDM certificate has expired.

● System Impact: A free-text description of the system impact of the alert. Example: Risk of cluster unavailability.
● Alert Details: Any extra details that are relevant to the object or incident involved. Example: Percentage of SP capacity usage.
● Associated Events: A list of events that modified the life cycle of the alert.
● Resource ID: The ID of the resource that is associated with the alert. Example: 86fb0000000000.
● Resource Name: The name of the resource that is associated with the alert. This is usually the resource that is involved with raising the event. Example: sds212.
● Resource Type: The type of resource that is associated with the alert. This is usually the resource that is involved with raising the event. Example: sdses.
● Last Updated: The UTC date and time that the alert was last updated. Example: 2022-04-28T07:58:46.16Z.
● Timestamp: The UTC date and time that the alert was initially raised. Example: 2022-04-28T07:58:46.16Z.
● Acknowledged Status: Indicates whether the alert is acknowledged or unacknowledged. Example: Y.
The total number of alerts that can be sent to the Secure Remote Services gateway from the PowerFlex Manager dispatcher
service is restricted to 200 per day. The threshold for the number of alerts for a particular event is set to three per hour. After
the first three alerts are sent, if a fourth alert is generated for the same event, it is not sent to Secure Remote Services.
However, if the alerts come from two different systems running iDRAC, the alerts are sent to the Secure Remote Services
gateway.
A new threshold alert is triggered from PowerFlex Manager and automatically sent to the Secure Remote Services gateway
when the threshold of 200 alerts per 24 hours is crossed. Similarly, a new alert is triggered from PowerFlex Manager and sent
to the Secure Remote Services gateway when the threshold of three alerts per hour for the same alert type, symptom code, or
resource is reached.
3. You can:
● Click Acknowledge to acknowledge an alert.
● Click Unacknowledge to remove an alert acknowledgment.
User management
The User Management page allows you to manage local users, LDAP users, and directory services.
Under Settings > User Management, you can find three pages:
● Local Users
● LDAP Users
● Directory Services
User roles
User roles control the activities that different types of users can perform in PowerFlex Manager.
Ensure that you configure Active Directory before assigning roles. The roles that can be assigned to local users and LDAP
users are identical. Each user can be assigned only one role. If an LDAP user is assigned directly to a user role and also to a
group role, the LDAP user is granted the permissions of both roles.
NOTE: User definitions are not imported from earlier versions of PowerFlex and must be configured again.
The following table summarizes the activities that can be performed for each user role:
● LifecycleAdmin: A LifecycleAdmin can manage the life cycle of hardware and PowerFlex systems. Permitted activities: manage lifecycle operations, resource groups, templates, deployment, and backend operations; replace drives; hardware operations; view resource groups and templates; system monitoring (events, alerts).
● ReplicationManager: The ReplicationManager is a subset of the StorageAdmin role, for work on existing systems for setup and management of replication and snapshots. Permitted activities: manage replication operations, peer systems, and RCGs; manage snapshots and snapshot policies; view storage configurations and resource details (volume, snapshot, replication views); system monitoring (events, alerts).
● SnapshotManager: SnapshotManager is a subset of StorageAdmin, working only on existing systems. This role includes all operations required to set up and manage snapshots. Permitted activities: manage snapshots and snapshot policies; view storage configurations and resource details; system monitoring (events, alerts).
Local users
You can create and manage local users within PowerFlex Manager.
Creating a user
Perform this task to create a local user and assign a role to that user.
1. On the menu bar, click Settings and click User Management.
2. Click Local Users.
3. On the Local Users page, click Create.
4. Enter a unique User Name to identify the user account.
5. Enter the First Name and Last Name of the user.
6. Enter the Email address.
7. Enter a New Password that a user enters to access PowerFlex Manager. Confirm the password in the Verify Password
field.
The password must be at least 8 characters long and contain one lowercase letter, one uppercase letter, one number, and
one special character. Passwords cannot contain a username or email address.
8. In the User Role box, select a user role. Options include:
● SuperUser
● SystemAdmin
● StorageAdmin
● LifecycleAdmin
● ReplicationManager
● SnapshotManager
● SecurityAdmin
● DriveReplacer
● Technician
● Monitor
● Support
9. Select Enable User to create the account with an Enabled status, or clear this option to create the account with a
Disabled status.
10. Click Submit and click Dismiss.
Modifying a user
Perform this task to edit a PowerFlex Manager user profile.
1. On the menu bar, click Settings and click User Management.
2. Click Local Users.
3. On the Local Users page, select the user account that you want to edit.
4. Click Modify. For security purposes, confirm your password before editing the user.
5. You can modify the following user account information from this window:
● First Name
● Last Name
● User Role
● Email
● Enable User
If you select the Enable User check box, the user can log in to PowerFlex Manager. If you clear the check box, the user
cannot log in.
Deleting a user
Perform this procedure to remove an existing local user.
1. On the menu bar, click Settings and click User Management.
2. Click Local Users.
3. On the Local Users page, select one or more user accounts to delete.
4. Click Delete.
Click Apply in the warning message to delete the user. Click Dismiss.
Directory services
You can create a directory service that PowerFlex Manager can access to authenticate users.
An Active Directory or Open LDAP user is authenticated against the specific directory domain to which a user belongs.
7. Under Service Provider, select the I have configured PowerFlex as a SP in my IdP using the metadata above
checkbox and click Next.
8. Under IdP Setup, upload the identity provider metadata so that PowerFlex can establish a connection to the identity
provider.
a. To upload the metadata as a file, select Upload File and specify the file location.
b. To retrieve the metadata from a URL, select URL and specify the following URL: https://hostname/
FederationMetadata/2007-06/FederationMetadata.xml
The URL is always the same, except for the hostname, which you must specify for your environment.
c. Click Next.
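If you use the URL option, you can optionally confirm that the metadata endpoint responds before pointing PowerFlex Manager at it. A minimal check, assuming curl is available and adfs.example.com is a placeholder for your identity provider hostname:
# Fetch the federation metadata to a local file (hostname is a placeholder).
curl -fsSL https://adfs.example.com/FederationMetadata/2007-06/FederationMetadata.xml -o FederationMetadata.xml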
9. Under Claims Mapping, review the attribute mappings that are imported from the identity provider.
a. Check the attribute mappings to ensure they are correct.
b. Click Next.
10. Under Summary, review the details and click Finish.
After you add an identity provider, PowerFlex Manager adds it to the list of identity providers on the SSO Identity Provider
(IdP) Configuration page.
In addition, PowerFlex Manager updates the login page to show a login button with the new identity provider. You can see this
button the next time you log in to PowerFlex Manager.
7. On the Specify Display Name screen, enter the name that you would like to use for the service provider and click Next.
8. On the Choose Access Control Policy screen, choose a policy and click Next.
9. On the Ready to Add Trust screen, click Next.
10. On the Finish screen, select Configure claims issuance policy for this application and click Close.
11. On the Relying Party Trusts screen, under Actions select the display name for the newly created service provider, and
click Edit Claim Issuance Policy....
12. Click Add Rule... to add the following LDAP attribute rule:
a. For the Claim rule template, select Send LDAP Attributes as Claims.
b. For the Claim rule name, enter LDAP attributes.
c. For the Attribute store, select Active Directory.
d. For the Mapping of LDAP attributes to outgoing claim types, select the following attributes:
e. Click Finish.
13. Click Add Rule... to add the following custom rule:
a. For the Claim rule template, select Send Claims Using a Custom Rule.
b. For the Claim rule name, enter Get groups.
c. For the Custom rule, paste in the following string:
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/
windowsaccountname", Issuer == "AD AUTHORITY"] => add(store = "Active Directory",
types = ("http://schemas.xmlsoap.org/claims/Group"), query = ";tokenGroups;{0}",
param = c.Value);
d. Click Finish.
14. Click Add Rule... to add another custom rule:
a. For the Claim rule template, select Send Claims Using a Custom Rule.
b. For the Claim rule name, enter Claim of groups membership.
c. For the Custom rule, paste in the following string:
d. Click Finish.
15. Click OK.
<Attribute xmlns="urn:oasis:names:tc:SAML:2.0:assertion"
Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"
NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri" FriendlyName="E-Mail
Address"/>
<Attribute xmlns="urn:oasis:names:tc:SAML:2.0:assertion"
Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname"
NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri" FriendlyName="Given
Name"/>
<Attribute xmlns="urn:oasis:names:tc:SAML:2.0:assertion"
Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname"
NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri"
FriendlyName="Surname"/>
b. Click Save.
You must now leave the Microsoft Azure portal. Go back to PowerFlex Manager and upload the Federation Metadata XML
file to PowerFlex.
Related tasks
Troubleshooting a timeout error
You must not connect to SSO from PowerFlex Manager before you assign permissions.
1. Log in to PowerFlex Manager with local administrator rights.
2. Go to Settings > User Management > Remote users/Groups.
3. Click Add.
a. Select the type.
If you select the Group type for the Group Name, enter the group-id that is taken from Microsoft Azure (not the Group
Name).
If you select the User type for the User Name, enter the username that is defined in Microsoft Azure.
b. Select the provider.
c. Select a role.
d. Click Apply.
After you add Microsoft Azure as an IdP, it is added to the list of IdPs on the SSO Identity Provider (IdP) Configuration
page in PowerFlex Manager.
In addition, PowerFlex Manager updates the login page to show a login button with Microsoft Azure. You can see this button the
next time you log in to PowerFlex Manager.
Repositories
On the Repositories page, you can load compliance files, operating system images, and compatibility management files.
You can load compliance files and specify a default version for compliance checking on the Compliance Versions tab. You
can load multiple compliance files into PowerFlex Manager with different operating system images included. The compliance
file includes operating system images for ESXi and PowerFlex. When you add a compliance file, these images are automatically
added to the OS Images tab. If necessary, you can also create your own operating system image repositories. You can also load
the compatibility management file on the Compatibility Management tab to allow PowerFlex Manager to determine which
upgrade paths are valid and which are not.
On the Compliance Versions tab, click Add to load a compliance version. Click Change to change the default compliance
version to a new file.
Use the OS Image Repositories tab to create operating system image repositories and view the following information:
You cannot perform any actions on repositories that are in use. However, you can delete repositories that are in an Available
state, but not in use and not set as a default version.
Related information
Configuring block storage
Deploying and provisioning
Compliance versions
PowerFlex Manager displays the following information about the compliance versions:
Select the compliance version to view the following information about the firmware package:
● Bundles—Displays the number of bundles available in the firmware package.
● Components—Displays the number of software components available in the firmware package.
● Created On—Displays the date when the compliance version was created.
● Last Updated—Displays the date when the compliance version was last updated.
● Resource Groups Affected—Displays the resource groups in which the firmware is used.
● Additional Components:
○ VMware ISO—Displays the date and time when the VMware ISO component was added.
○ Filename—Displays the file name of the VMware ISO component added.
2. In the Add Compliance File dialog, select one of the following options:
● Download from SupportAssist (Recommended)—Select this option to import the compliance file that contains the
firmware bundles you need.
● Download from local network path—Select this option to download the compliance file from a CIFS file share. This
option is intended for sites that do not have Internet accessibility to SupportAssist.
3. If you selected Download from SupportAssist (Recommended), click the Available Compliance Files drop-down list,
and select the file.
Before downloading a compliance file from SupportAssist, you must configure SupportAssist by enabling it in the Initial
Configuration wizard.
If you are downloading a compliance file from SupportAssist, the file type is a ZIP or TAR.GZ file.
If you are downloading a compliance file from the Dell Technologies support site, the file is named
Software_Only_Complete_4.6.x_xxx.zip.
4. If you selected Download from local network path, perform the following:
● In the File Path box, enter the location of the compliance file. Use the following format:
○ CIFS share for ZIP file: \\host\share\filename.zip
○ CIFS share for TAR.GZ file: \\host\share\filename.tar.gz
● If you are using a CIFS share, enter the User Name and Password.
● Mark the target upgrade catalog as the default version for compliance checking.
Download both the 3.6.x and the 4.6 software catalogs. PowerFlex requires the catalog for the version that the system is
currently on and the catalog for the target version.
5. Click Save.
PowerFlex Manager unpacks the compliance file as a repository that includes the firmware and software bundles that are
required to perform any updates.
If the compliance file contains an embedded ISO file for an operating system image (such as a VMware ESXi or
PowerFlex image), the download process unpacks the file and adds it to the operating system image repository.
A compliance file can be large and take some time to download and unpack. To help you monitor the progress of the
downloading and unpacking operations, PowerFlex Manager provides percentage complete information for these operations
in the State column on the Compliance Versions tab.
The Resources page displays devices that are validated against the default repository.
This capability is available on the PowerFlex appliance and PowerFlex rack offerings only.
There are two types of firmware repositories:
Service level This repository is applied only to nodes that are in service and assigned the service
level firmware repository.
Devices with firmware levels below the minimum firmware level that is listed in the
service level repository are marked as non-compliant. When a service level firmware
repository is assigned to a resource group, the firmware validation is checked only
against the service level firmware repository and the default firmware repository
checks are no longer applied to the devices associated with this resource group.
OS images
To add an OS image repository:
1. On the Repositories page, click OS Images, and then click Add.
2. In the Add OS Image Repository dialog box, enter the following:
a. In the Repository Name box, enter the name of the repository. The repository name must be unique.
b. In the Image Type box, enter the image type.
d. If you are using the CIFS share, enter the User Name and Password to access the share.
3. Click Add.
After adding an OS image, you can modify it or remove it by selecting the image, then clicking Modify or Remove.
4. Click Resynchronize.
The repository state changes to Copying.
Compatibility management
To facilitate upgrades, PowerFlex Manager uses information that is provided in the compatibility matrix file to determine which
upgrade paths are valid and which are not.
When you attempt an upgrade, PowerFlex Manager warns you if the current version of the software is incompatible with the
target version, or if any of the RCM or IC versions that are currently loaded are incompatible with the target compliance
versions. To determine which paths are valid and which are not, PowerFlex Manager uses information that is provided in the
compatibility matrix file. The compatibility matrix file maps all the known valid and invalid paths for all previous releases of the
software.
When you first install PowerFlex Manager, the software does not have the compatibility matrix file, so you must upload the
file before performing any updates. You must upload the latest compatibility matrix file to ensure that PowerFlex Manager has
access to the latest upgrade information.
You can download the file from SupportAssist or upload it from a local directory path. The file has a GPG extension and an
associated compatibility version number.
If you are uploading from a local directory path, ensure that you have access to a valid compatibility matrix file that has the GPG
extension. If you are using SupportAssist, ensure that SupportAssist has been properly configured.
1. On the menu bar, click Settings and then click Compatibility Management.
2. Click Edit Settings.
3. Click Download from Dell Technologies SupportAssist if you are using SupportAssist.
4. Click Upload from Local to use a local file. Then, click Choose File to select the GPG file.
Getting started
If you want to return to the Getting Started page, you can launch it from the Settings page.
The Getting Started page guides you through the common configurations that are required to prepare a new PowerFlex
Manager environment. A green check mark on a step indicates that you have completed the step. Only super users have access
to the Getting Started page.
Related information
Getting started
Configuring block storage
Deploying and provisioning
Networks
The Networks page displays information about networks that are defined in PowerFlex Manager, including:
● Name
● Network Type
● VLAN ID
● IP Address Setting
● Starting IP Address
● Ending IP Address
● Role
● IP Address in Use
On the Networks page, you can:
● Define or modify an existing network.
● Delete an existing network.
● Click Export All to export all the network details to a CSV file.
● Export network details for a specific network. To export the specific network details, select a network, and then click
Export Network Details.
● Click a network to see the following details in the Summary tab:
○ Name of the user who created and modified the network.
○ Date and time that the network was created and last modified.
● To sort the column by network name, click the arrow next to the column header. You can also refresh the information on
the page.
If you select a network from Networks list, the network details are displayed.
For a static network, the following information is displayed:
● Subnet Mask
● Gateway
● Primary DNS
● Secondary DNS
● DNS Suffix
● Last Updated By
● Date Last Updated
● Created By
● Date Created
● Static IP Details
For a DHCP network, the following information is displayed:
● Last Updated By
● Date Last Updated
● Created By
● Date Created
You can filter the IPs by selecting any of the following options from the View drop-down list, under the Static IP Address
Details section:
Defining a network
Adding the details of an existing network enables PowerFlex Manager to automatically configure nodes that are connected to
the network.
Ensure that the following conditions are met before you define the network:
● PowerFlex Manager can communicate with the out-of-band management network.
● PowerFlex Manager can communicate with the operating system installation network in which the appliance is deployed.
● PowerFlex Manager can communicate with the hypervisor management network.
● The DHCP node in your deployment network is fully functional, with the appropriate PXE settings to PXE boot images from
PowerFlex Manager.
To define a network, complete the following steps:
1. On the menu bar, click Settings and then click Networking.
2. Click Networks.
The Networks page opens.
3. Click Define. The Define Network page opens.
4. In the Name field, enter the name of the network. Optionally, in the Description field, enter a description for the network.
5. From the Network Type drop-down list, select one of the following network types:
● General Purpose LAN
● Hypervisor Management
● Hypervisor Migration
● Hardware Management
● PowerFlex Data
● PowerFlex Data (Client Traffic Only)
● PowerFlex Data (Server Traffic Only)
● PowerFlex Replication
● NAS File Management
● NAS File Data
● PowerFlex Management
For a PowerFlex configuration that uses a hyperconverged architecture with two data networks, you typically have two
networks that are defined with the PowerFlex data network type. The PowerFlex data network type supports both client and
server communications. The PowerFlex data network type is used with hyperconverged resource groups.
For a PowerFlex configuration that uses a two-layer architecture with four dedicated data networks, you typically have two
PowerFlex (client traffic only) VLANs and two PowerFlex data (server traffic only) VLANs. These network types are used
with storage-only and compute-only resource groups.
7. Optionally, select the Configure Static IP Address Ranges check box, and then do the following:
NOTE: The Configure Static IP Address Ranges check box is not available for all network types. For example, you
cannot configure a static IP address range for the operating system installation network type. You cannot select or clear
this check box to configure static IP address pools after a network is created.
a. In the Subnet box, enter the IP address for the subnet. The subnet is used to support static routes for data and
replication networks.
b. In the Subnet Mask box, enter the subnet mask.
c. In the Gateway box, enter the default gateway IP address for routing network traffic.
d. Optionally, in the Primary DNS and Secondary DNS fields, enter the IP addresses of primary DNS and secondary DNS.
8. Click Save.
Modifying a network
If a network is not associated with a template or resource group, you can edit the network name, the VLAN ID, or the IP address
range.
1. On the menu bar, click Settings and then click Networking.
2. Click Networks.
The Networks page opens.
3. Select the network that you want to edit and click Modify. The Modify Network page opens.
4. Edit the information in any of the following fields: Name, VLAN ID, IP Address Range.
For a PowerFlex data or replication network, you can specify a Subnet IP address for a static route configuration. The
subnet is used to support static routes for data and replication networks.
5. Click Save.
You can also view details for a network or export the network details by selecting the network, then clicking View Details or
Export Network Details.
Renaming a network
After you have added a system data network, you can edit the network name.
1. On the menu bar, click Settings and then click Networking.
2. Click System Data Networks.
The System Data Networks page opens.
3. Select the network that you want to edit and click Rename. The Rename System Data Network page opens.
4. Edit the name in the System Data Network Name field.
5. Click Apply.
Related information
Enabling SupportAssist
Maximum (authPriv): Messages are authenticated and encrypted. Authentication: Required (password-based encryption with
message digest (MD5), with at least eight characters). Privacy: Required.
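If you are configuring an SNMP V3 destination at this security level, you can optionally verify the credential set by sending a test inform from a Linux host with the net-snmp tools. This is a generic illustration outside the PowerFlex Manager procedure; the user name, passphrases, privacy protocol, and destination host are all placeholders:
# Send a test coldStart inform using SNMPv3 authPriv (all values are placeholders).
snmptrap -v 3 -Ci -l authPriv -u pfxmgr -a MD5 -A '<auth-passphrase>' -x AES -X '<priv-passphrase>' <destination-host> '' 1.3.6.1.6.3.1.1.5.1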
Related tasks
Modifying an external source
Configuring a destination
Modifying a destination
Add a notification policy
Modify a notification policy
Delete a notification policy
Configuring an audit log policy
Related tasks
Configuring an external source
Configuring a destination
Modifying a destination
Add a notification policy
Modify a notification policy
Delete a notification policy
Configuring an audit log policy
Configuring a destination
Define a location where the event and alert data that PowerFlex Manager processes should be sent.
1. Go to Settings > Events and Alerts > Notification Policies.
2. From the Destinations pane, click Add.
The Create New Destination Protocol window opens.
3. From the Destination Properties window:
a. Enter the destination name and description.
b. From the Destination Type menu, select either SNMP V2c, SNMP V3, Syslog, Email (SMTP), Webhook, or Audit
Log.
4. Click Next.
The Protocol Settings window opens.
5. Depending on the destination type, enter the following information:
Related tasks
Configuring an external source
Modifying an external source
Modifying a destination
Add a notification policy
Modify a notification policy
Delete a notification policy
Configuring an audit log policy
Modifying a destination
You can edit the information about where event and alert data that is processed by PowerFlex Manager should be sent.
1. Go to Settings > Events and Alerts > Notification Policies.
2. From the Destinations pane, click the destination whose information you want to modify.
The Edit Source window opens.
3. Edit the information and click Submit.
Related tasks
Configuring an external source
Modifying an external source
Configuring a destination
Add a notification policy
Modify a notification policy
Delete a notification policy
Configuring an audit log policy
8. Click Submit.
After adding a notification policy, you might need to perform additional configuration steps. For example, if you are setting up
a notification policy for a webhook destination that uses BigPanda, you can optionally configure BigPanda to show the severity
levels from PowerFlex Manager. To do this, you must configure the status mapping on the BigPanda site. Map the major and
minor severity values from PowerFlex Manager to the Warning status for BigPanda.
Related tasks
Configuring an external source
Modifying an external source
Configuring a destination
Modifying a destination
Modify a notification policy
Delete a notification policy
Configuring an audit log policy
Related information
Enabling SupportAssist
Related tasks
Configuring an external source
Modifying an external source
Configuring a destination
Modifying a destination
Add a notification policy
Delete a notification policy
Configuring an audit log policy
Related tasks
Configuring an external source
Modifying an external source
Configuring a destination
Modifying a destination
Add a notification policy
Modify a notification policy
Configuring an audit log policy
4. Click Enable Audit Logs to create a source of the type Audit Log.
5. Click Save.
Related tasks
Configuring an external source
Modifying an external source
Configuring a destination
Modifying a destination
Add a notification policy
Modify a notification policy
License management
You can upload a Management Data Store (MDS) license and a single production license for the whole PowerFlex system. You
can also upload other software licenses (for example, for CloudLink).
No license is required for the first 90 days of use. During this period, you are running PowerFlex in trial mode.
You can only upload an MDS license if an MDS Gateway has been discovered.
When you deploy a PowerFlex cloud-based cluster, you are provided with a free evaluation license. This license gives you
90 days to use the cloud-based storage cluster. After 90 days, the storage is preserved. However, you are not allowed to
make configuration changes to the deployment. Once the evaluation license has expired, you must purchase a subscription or
permanent license to continue using the cluster with the full range of capabilities.
PowerFlex Manager shows the start date and end date for both evaluation and subscription licenses on the License
Management page. This allows you to see when your license will expire. Evaluation licenses apply only to cloud deployments,
whereas subscription licenses apply to both cloud and on-premises deployments.
Related information
Getting started
Enabling SupportAssist
Security
On the Security page, you can upload SSL trusted certificates for connecting to Active Directory, as well as appliance SSL
certificates that ensure secure data transmission for PowerFlex Manager. You can also define credentials for the resources that
PowerFlex Manager accesses and manages.
Before you upload a trusted SSL certificate, you must obtain the certificate file. The file must contain an X.509 certificate in
PEM format. It must start with -----BEGIN CERTIFICATE----- and end with -----END CERTIFICATE-----.
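Before uploading, you can optionally confirm that the file parses as an X.509 certificate in PEM format. A minimal check, assuming OpenSSL is available; trusted_ca.pem is a placeholder file name:
# Print the subject and validity dates; an error indicates the file is not a valid PEM certificate.
openssl x509 -in trusted_ca.pem -noout -subject -dates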
1. On the menu bar, click Settings and then click Security.
2. Click SSL Trusted Certificates.
The SSL Trusted Certificates page opens.
3. Click Add.
4. Click Choose File and select the SSL certificate.
5. Provide a Name for the certificate. The name must be a single word.
6. To upload the certificate, click Save.
To delete an SSL trusted certificate, select the certificate and click Delete.
Create credentials
Perform this procedure to create credentials:
1. On the menu bar, click Settings and click Security.
2. Click Resource Credentials.
The Credentials Management page opens.
3. Click Create.
4. In the Create Credentials dialog box, from the Credential Type drop-down list, select one of the following resource types
for which you want to create the credentials:
● Node
● Switch
● vCenter
● Element Manager
● PowerFlex Gateway
● OS Admin
● OS User
● PowerFlex Management System
The OS Admin and OS User credential types apply to deployed items, not to PowerFlex Manager itself.
5. In the Credential Name field, enter the name to identify the credential.
6. Click Enable Key Pairs to enable login with SSH key pairs:
To enable key pairs for the Node or Switch credential type:
a. Import an existing key:
i. Click Import SSH Key Pair.
ii. Click Choose File and browse to the file that contains your public and private keys, and select the private key.
iii. Type a name for the key pair.
iv. Click Import.
To enable key pairs for the OS Admin or OS User credential type:
a. To create a key:
i. Click Create a new key.
ii. Click Create & Download Key Pair.
iii. Type a name for the key pair.
iv. Click Create.
The private key file (id_rsa) is downloaded to your downloads folder. Click the Download Public Key button to
download the public key file (id_rsa.pub).
b. To import an existing key:
i. Click Import existing key.
ii. Click Import SSH Key Pair.
iii. Click Choose File and browse to the file that contains your public and private keys.
iv. Type a name in the Key Pair Name field.
v. Click Import.
If you enable SSH key pairs for a Node or Switch credential and use that credential for discovery, PowerFlex Manager uses
RSA public/private key pairs to SSH into your node or switch securely, instead of using a username and password. If you
enable SSH key pairs for an OS User or OS Admin credential and use that credential for a deployment, PowerFlex Manager
uses RSA public/private key pairs for the deployment operations.
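If you prefer to generate a key pair outside PowerFlex Manager and then import it, a standard OpenSSH invocation such as the following produces the private and public key files. The file name, comment, and empty passphrase are illustrative assumptions, not documented requirements:
# Generate a 4096-bit RSA key pair for import; creates powerflex-cred-key and powerflex-cred-key.pub.
ssh-keygen -t rsa -b 4096 -f ./powerflex-cred-key -N '' -C "powerflex-manager-credential"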
For the OS Admin credential type, the User Name field is disabled because the user is assumed to be root. You must use
the root user for new deployments.
Provide two usernames for the PowerFlex gateway credential type:
● Gateway Admin User Name
● Gateway OS User Name
The Gateway admin user is the REST API administrator. The Gateway operating system user is the SSH login user. The
Gateway admin user must be the admin user, and the Gateway operating system user must be root.
9. In the Password and the Confirm Password boxes, enter the password for the credential.
NOTE: When the SSH key pair feature is enabled, the switch credential does not require the Password option.
● For VMware vCenter and element manager, in the Domain box, optionally enter the domain ID.
● For switch credentials, under Protocol, optionally click one of the following connection protocols that are used to access
the resource from remote:
○ Telnet
○ SSH
Modify credentials
Perform this procedure to modify credentials:
1. On the menu bar, click Settings and click Security.
2. Click Resource Credentials.
The Credentials Management page opens.
3. Select a credential that you want to edit, and click Modify.
4. Modify the credential information in the Modify Credentials dialog box.
5. Click Save.
Remove credentials
Perform this procedure to remove credentials:
1. On the menu bar, click Settings and then click Security.
2. Click Resource Credentials.
The Credentials Management page opens.
3. On the Credentials Management page, select the credential that you want to delete, and click Remove.
4. Click OK when the confirmation message appears.
1. To configure an SSH key pair for a software-only node, run these commands:
2. To configure an SSH key pair for a Cisco Nexus switch, run these commands to add the public key:
3. To configure an SSH key pair for an OS10 switch, run these commands to add the public key:
Note that the quotes must be used as shown in the command line above.
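The exact command listings for the three cases above are not reproduced here. As a rough, generic illustration only (not the documented PowerFlex commands), key-based SSH for a Linux software-only node is typically set up by generating a key pair and appending the public key to the node's authorized_keys file; the user, host, and file names below are placeholders:
# Generic OpenSSH sketch for a Linux node; not the documented commands for this step.
ssh-keygen -t rsa -b 4096 -f ~/.ssh/pfx_node_key -N ''
ssh-copy-id -i ~/.ssh/pfx_node_key.pub root@<node-ip>
ssh -i ~/.ssh/pfx_node_key root@<node-ip> hostname   # verify that key-based login works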
● Send to Configured SupportAssist: If you are using SupportAssist, this option is selected by default. Leave this default setting. If SupportAssist is not configured, this option is disabled.
● Download to Share: Provide a path using the following format, with credentials: CIFS:\\1.1.1.1\uploadDirectory. Enter a username and password, and click Test Connection to verify the connection to the CIFS share before generating the bundle.
● Download Locally: Download the troubleshooting bundle to a local file: \\IP Address\Any folder
7. Click Generate.
The Backup page also displays information about the status of automatically scheduled backups (enabled or disabled).
On this page, you can:
● Manually start an immediate backup
● Edit general backup settings
● Edit automatically scheduled backup settings
After performing a backup operation, you need to run a script outside of PowerFlex Manager to restore from the backup. The
user interface does not support the ability to restore from a backup.
b. Optionally, enter a username and password in the Backup Directory User Name and Backup Directory Password
boxes, if they are required to access the location you provided.
c. Click Test Connection to confirm that the backup settings you provided are correct.
d. In the Encryption Password box, enter a password that is required to open the backup file, and verify the
encryption password by entering the password in the Confirm Encryption Password box.
5. Click Backup Now.
Restoring
Restoring PowerFlex Manager returns user-created data to an earlier configuration that is saved in a backup file. To restore
from a backup, you need to run a script outside of PowerFlex Manager. The user interface does not support the ability to
restore from a backup.
Before you begin the restore procedure, you need to satisfy these prerequisites:
● The restore cluster must be running exactly the same PowerFlex version and Kubernetes version.
● The restore cluster must have exactly the same IP addresses and configuration.
The cluster configuration must be the same as the cluster configuration where the backup was taken.
○ All Kubernetes nodes must have the same IP addresses.
○ All Kubernetes nodes must have the same names.
○ All LoadBalancer IP addresses must be the same.
CAUTION: Restoring an earlier configuration restarts PowerFlex Manager and deletes data created after the
backup file to which you are restoring. Any running jobs could be terminated as well.
1. Log in to the node where the PowerFlex Manager platform (PFMP) installer was initially run.
2. Run the restore script that is included with the installer bundle:
./restore_backup.sh
To complete the execution of the restore script, you must specify whether the restore operation will be performed on an
existing cluster or a new cluster.
Here is a snippet that shows a sample run of the restore script:
/usr/local/lib/python3.8/site-packages/paramiko/transport.py:236:
CryptographyDeprecationWarning: Blowfish has been deprecated "class":
algorithms.Blowfish, Installation logs are available at <Bundle root>/PFMP_Installer/
Please enter the ssh username for the nodes specified in the
PFMP_Config.json[root]:root
Please enter the ssh password for the nodes specified in the PFMP_Config.json.
Password:
Please enter CIFS password(base64 encoded). Press enter to skip if username is not
required: UmFpZDR1cyE=
Please enter encryption password for backup file (base64 encoded): UmFpZDR1cyE=
The restore process prints out status information until the restore is complete.
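The CIFS and encryption passwords must be supplied base64-encoded, as shown in the sample prompts above. Assuming a standard Linux shell, you can produce the encoded value like this (the password shown is a placeholder):
# Encode the password without a trailing newline; paste the output at the prompt.
echo -n 'MyPassword123!' | base64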
# Run this to make it easier to run the rest of the code and set the default
namespace.
alias k="kubectl -n $(kubectl get pods -A | grep -m 1 -E 'platform|pgo|helmrepo' |
cut -d' ' -f1)"
kubectl config set-context default --namespace=$(kubectl get pods -A | grep -m 1 -E
'platform|pgo|helmrepo|docker' | cut -d' ' -f1)
2. Verify the pgo controller pod and all database pods are up and running with no errors in the logs:
# Get the PostgreSQL operator pod and PostgreSQL cluster pods to verify.
echo $(kubectl get pods -l="postgres-operator.crunchydata.com/control-plane=pgo" --no-
headers -o name && kubectl get pods -l="postgres-operator.crunchydata.com/instance"
--no-headers -o name) | xargs kubectl get -o wide
# Trigger a shutdown
k patch $(k get postgrescluster -o name) --type merge --patch '{"spec":{"shutdown":
true}}'
4. At this point, you can shut down the VMs to take snapshots and adjust resources, as needed. Follow the next steps after the
VMs are powered back on.
# Clear the shutdown flag to bring the database cluster back up
k patch $(k get postgrescluster -o name) --type merge --patch '{"spec":{"shutdown":
false}}'
To restore to the previous point in time, you can revert the VMs from the snapshots in the vSphere Client user interface.
Software upgrade
When a new version of the management software becomes available, you can upgrade to that version from the Software
Upgrade page.
You can upgrade from a .tgz file on a network share. This option provides an easy way to update the software in a dark
site. Provide a path using one of the following formats:
Lifecycle
This section provides reference information for the Resource Groups and Templates pages.
Related information
Managing components
Monitoring system health
Managing external changes
Deploying a resource group
Adding an existing resource group
Viewing a compliance report
Resource groups
This section provides reference information for the Resource Groups page.
Related information
Managing components
Managing external changes
Lifecycle mode
If you add an existing resource group that includes an unsupported configuration, PowerFlex Manager might put the resource
group in lifecycle mode. This mode limits the actions that you can perform within the resource group.
Lifecycle mode allows the resource group to perform only monitoring, service mode, and compliance upgrade operations. All
other resource group operations are blocked when the resource group is in lifecycle mode. Lifecycle mode is used to control the
operations that can be performed for configurations that have limited support.
When you add an existing resource group, PowerFlex Manager puts the resource group in lifecycle mode if the configuration you
want to import includes any of the following:
Templates
This section provides reference information for the Templates page.
Related information
Getting started
Managing components
Managing templates
On the Templates page, you can view information about templates in a list or tile view. To switch between views, click the tile
icon or list icon at the top of the Templates page. When in tile view, you can view the templates under a category by
clicking the graphic that represents the category.
Users with standard permissions can view details of only those templates for which the administrator has granted permission.
Templates - My Templates List View
The My Templates page displays the details of the templates that you have created.
After you add a template, you can add node, cluster, and VM
components on the template builder page.
Export All: Allows you to export all templates to a CSV file.
Templates
On the Templates page, the right pane displays the name of the template, icons of components in the template, and the
following details for a selected template:
Component types
Components (physical or virtual or applications) are the main building blocks of a template.
PowerFlex Manager has the following component types:
● Node (Software/hardware and software only)
● Cluster
● VM
Related information
Adding components to a resource group
Edit a template
Deploying a resource group
View template details
Node settings
This table describes the following node settings: hardware, BIOS, operating system, and network.
Partial network automation Enables you to perform switchless deployments with partial network
automation. This feature allows you to work with unsupported switches,
but requires more manual configuration before a deployment can proceed
successfully. If you choose to use partial network automation, you give up the
error handling and network automation features that are available with a full
network configuration that includes supported switches.
For a partial network deployment, the switches are not discovered,
so PowerFlex Manager does not have access to switch configuration
information. You must ensure that the switches are configured correctly,
since PowerFlex Manager does not configure the switches for you. If your
switch is not configured correctly, the deployment may fail and PowerFlex
Manager is not able to provide information about why the deployment failed.
For a partial network deployment, you must add all the interfaces and ports,
as you would when deploying with full network automation. However, you
do not need to add the operating system installation network, since PXE
is not required for partial network automation. PowerFlex Manager uses
virtual media instead for deployments with partial network automation. The
Switch Port Configuration can be set to Port Channel (LACP enabled)
or Trunk. If Port Channel (LACP enabled) is used, the LACP fallback or
LACP ungroup option must be configured on the port channels.
Drive Encryption Type Specifies the type of encryption to use when encryption is enabled. The
encryption options are:
● Software encryption
● Self-encrypting drive (SED)
Number of Instances Enter the number of instances that you want to add.
If you select more than one instance, a single component representing
multiple instances of an identically configured component is created.
Edit the component to add extra instances. If you require different
configuration settings, you can create multiple components.
Related Components Select Associate All or Associate Selected to associate all or specific
components to the new component.
Import Configuration from Reference Click this option to import an existing node configuration and use it for the
Node node component settings. On the Select Reference Node page, select the
node from which you want to import the settings and click Select.
OS Settings
Host Name Selection If you choose Specify At Deployment Time, you must type the name for
the host at deployment time.
If you choose Auto Generate, PowerFlex Manager displays the Host Name
Template field to enable you to specify a macro that includes variables that
produce a unique hostname. For details on which variables are supported, see
the context-sensitive help for the field.
If you choose Reverse DNS Lookup, PowerFlex Manager assigns the
hostname by performing a reverse DNS lookup of the host IP address at
deployment time.
OS Image Specifies the location of the operating system image install files. You can use
the image that is provided with the target compliance file, or specify your
own location, if you created additional repositories.
To deploy a compute-only or storage-only resource group with the Linux
image that is provided with a compliance file, choose Use Compliance File
Linux image. If you want to deploy a NAS cluster, you must also choose Use
Compliance File Linux image.
To deploy a storage-only resource group with Red Hat Enterprise Linux,
you must create a repository on the Settings page and specify the path
to the Red Hat Enterprise Linux image on a file share. Dell Technologies
recommends that you use one of your own images that are published from
the customer portal at Red Hat Enterprise Linux.
For Linux, you may include one node within a resource group. For ESXi, you
must include at least two nodes.
NOTE: If you select an operating system from the OS Image drop-down
menu, the field NTP Server displays. This field is optional, but it is
highly recommended that you enter an NTP server IP to ensure proper
time synchronization with your environment and PowerFlex Manager.
If time is not properly synchronized, resource group
deployment can fail.
NTP Server Specifies the IP address of the NTP server for time synchronization.
If adding more than one NTP server in the operating system section of a node
component, be sure to separate the IP addresses with commas.
Use Node For Dell PowerFlex Indicates that this node component is used for a PowerFlex deployment.
When this option is selected, the deployment installs the MDM, SDS, and
SDC components, as required for a PowerFlex deployment in a VMware
environment. The MDM and SDS components are installed on a dedicated
PowerFlex VM (SVM), and the SDC is installed directly on the ESXi host.
To deploy a PowerFlex cluster successfully, include at least three nodes in
the template. The deployment process adds an SVM for each hyperconverged
node. PowerFlex Manager uses the following logic to determine the MDM
roles for the nodes:
1. Checks the PowerFlex gateway inventory to see how many primary
MDMs, secondary MDMs, and tiebreakers are present, and the total
number of SDS components.
2. Adds the number of components being deployed to determine the overall
PowerFlex cluster size. For example, if there are three SDS components
in the PowerFlex gateway inventory, and you are deploying two more, you
will have a five node cluster after the deployment.
3. Adds a single primary MDM and determines how many secondary MDMs
and tiebreakers should be in the cluster by looking at the overall cluster
size. The configuration varies depending on the size of the cluster:
● A three-node cluster has one primary, one secondary, and one
tiebreaker.
● A five-node cluster has one primary, two secondaries, and two
tiebreakers.
4. Determines the roles for each of the new components being added, based
on the configuration that is outlined above, and the number of primary,
secondary, and tiebreakers that are already in the PowerFlex cluster.
At deployment time, PowerFlex Manager automatically sets up the DirectPath
I/O configuration on each hyperconverged node. This setup makes the
devices available for direct access by the virtual machines on the host and
also sets up the devices to run in PCI passthrough mode.
For each SDS in the cluster, the deployment adds all the available disks from
the nodes to the storage pools created.
For each compute-only or hyperconverged node, the deployment installs the
SDC VIB.
When you select this option, the teaming and failover policy for the cluster
are automatically set to Route based on IP hash. Also, the uplinks are
configured as active and active, instead of active and standby. Teaming is
configured for all port groups, except for the PowerFlex data 1 and PowerFlex
data 2 port groups.
If you select the option to Use Node For Dell PowerFlex, the Local Flash
Storage for Dell PowerFlex option is automatically selected as the Target
Boot Device under Hardware Settings.
PowerFlex Role Specifies one of the following deployment types for PowerFlex:
● Compute Only indicates that the node is only used for compute
resources.
● Storage Only indicates that the node is only used for storage resources.
Enable PowerFlex File Enables NAS capabilities on the node. If you want to enable NAS on the nodes
in a template, you need to add both the PowerFlex Cluster and PowerFlex
File Cluster components to the template.
This option is only available if you choose Use Compliance File Linux Image
as the OS Image and then choose Compute Only as the PowerFlex Role.
If Enable PowerFlex File is selected, you must ensure that the template
includes the necessary NAS File Management and NAS File Data networks.
If you do not configure these networks on the template, the template
validation fails.
Drive Encryption Type Specifies the type of encryption to use when encryption is enabled.
The options are:
● Software Encryption
● Self Encrypting Drive (SED)
Switch Port Configuration Specifies whether Cisco virtual PortChannel (vPC) or Dell Virtual Link
Trunking (VLT) is enabled or disabled for the switch port.
For hyperconverged templates, the options are:
● Port Channel turns on vPC or VLT.
● Port Channel (LACP enabled) turns on vPC or VLT with the link
aggregation control protocol enabled.
For storage-only templates, the options are:
● Port Channel (LACP enabled) turns on vPC or VLT with the link
aggregation control protocol enabled.
● Trunk Port enables a networking configuration that does not use vPC or
VLT.
For compute-only templates that use a Linux operating system image, the
options are:
● Port Channel (LACP enabled) turns on vPC or VLT with the link
aggregation control protocol enabled.
● Trunk Port enables a networking configuration that does not use vPC or
VLT.
For compute-only templates that use an ESXi operating system image, the
Switch Port Configuration setting includes these options:
● Port Channel turns on vPC or VLT.
● Port Channel (LACP enabled) turns on vPC or VLT with the link
aggregation control protocol enabled.
● Trunk Port enables a networking configuration that does not use vPC or
VLT.
Teaming And Bonding Configuration The teaming and bonding configuration options depend on the switch port
configuration selected.
For hyperconverged and compute-only templates, the following options are
available:
● If you choose Port Channel (LACP enabled) or Port Channel as the
switch port configuration, the only teaming and bonding option is Route
Based on IP hash.
● If you choose Trunk Port as the switch configuration, there are three
teaming and bonding options:
○ Route Based on originating virtual port
○ Route Based on physical NIC load
○ Route Based on Source MAC hash
For storage-only templates, the following options are available:
Transmit Hash Policy The option is available for storage-only and compute-only Linux nodes only
when the Teaming And Bonding Configuration is set to Mode 4 (IEEE 802.3ad
policy). Mode 4 is the bonding mode with LACP and supports the following
load-balancing options:
● LACP in layer 2 & 3
● LACP in layer 3 & 4
LACP in layer 3 & 4 is the default option.
Hardware Settings
Node Pool Specifies the pool from which nodes are selected for the deployment.
BIOS Settings
System Profile Select the system power and performance profile for the node.
User Accessible USB Ports Enables or disables the user-accessible USB ports.
Number of Cores per Processor Specifies the number of enabled cores per processor.
Virtualization Technology Enables the additional hardware capabilities of virtualization technology.
Logical Processor Each processor core supports up to two logical processors. If enabled, the
BIOS reports all logical processors. If disabled, the BIOS reports only one
logical processor per core.
Execute Disable Enables or disables execute disable memory protection.
Node Interleaving Enables or disables the interleaving of allocated memory across nodes.
Add New Static Route Click Add New Static Route to create a static route in a template. To
add a static route, you must first select Enabled under Static Routes. A
static route allows nodes to communicate across different networks. The
static route can also be used to support replication in a storage-only or
hyperconverged resource group.
A static route requires a Source Network and a Destination Network, and
a Gateway. The source and destination network must each be a PowerFlex
data network or replication network that has the Subnet field defined.
If you add or remove a network for one of the ports, the Source Network
drop-down list does not get updated and still shows the old networks. In
order to see the changes, save the node settings and edit the node again.
Validate Settings Click Validate Settings to determine what can be chosen for a deployment
with this template component.
After you enter the operating system installation (PXE) network information in the respective field as described in the table
above, PowerFlex Manager untags the VLANs entered for the operating system installation network on the switch node-facing ports.
For the vMotion and hypervisor networks, PowerFlex Manager tags these networks on the switch node-facing ports. For rack nodes,
PowerFlex Manager configures the VLANs on the node-facing ports (untags PXE VLANs and tags other VLANs).
If you select Import Configuration from Reference Node, PowerFlex Manager imports basic settings, BIOS settings, and
advanced RAID configurations from the reference node and enables you to edit the configuration. Some BIOS settings might
no longer apply after other BIOS settings are changed. PowerFlex Manager does not correct these setting dependencies. When
configuring advanced BIOS settings, use caution and verify that any setting you explicitly choose (rather than leaving it as
Not Applicable) is still applicable to the hardware. For example, when you disable an SD card, the settings for internal SD card
redundancy become not applicable.
You can edit any of the settings that are visible in the template, but keep in mind that many settings are hidden when you use this
option. For example, only ten of the many BIOS settings are displayed and editable through the template; however, all BIOS settings
are still configured. If you want to edit any settings that are not visible through the template feature, edit them before importing
or uploading the file.
For storage-only and hyperconverged nodes, the required SDPM or NVDIMM storage capacity is specified in the following table.
Data Center Name Select the data center name from Data Center Name list.
Cluster Name Select a cluster name from the Cluster Name list.
New Cluster Name Select a new cluster name from the New Cluster Name list.
Cluster HA Enabled Enables or disables a highly available cluster. You can either select or clear (default)
the check box.
Cluster DRS Enabled Enables or disables distributed resource scheduler (DRS). You can either select or
clear (default) the check box.
Protection Domain Name Provide a name for the protection domain. You can automatically generate the
name (recommended), or specify a new name explicitly. A protection domain is a
logical entity that contains a group of SDSs that provide backup for each other.
Each SDS belongs to only one protection domain. Each protection domain is a
unique set of SDSs. A protection domain may also contain SDTs and SDRs.
If you automatically generate the protection domain name or specify a new name
explicitly for a hyperconverged or storage-only template, the PowerFlex cluster
must have at least three nodes before you can publish the template and deploy it
as a resource group. However, if you select an existing protection domain that
is associated with another previously deployed hyperconverged or storage-only
resource group, and this protection domain has at least three nodes, PowerFlex
Manager recognizes the new template as valid if the cluster has fewer than three
nodes. You can publish the template successfully and deploy it as a resource group,
since the protection domain it uses has enough nodes.
NOTE: A compute-only resource group only requires a minimum of two nodes,
since it does not have a protection domain.
If you choose Compute Only as the PowerFlex Role for the node component, the
PowerFlex settings do not include the Protection Domain Name field.
New Protection Domain Name Specify a new name for the protection domain.
Protection Domain Name Template If you choose to generate the protection domain name automatically, PowerFlex
Manager fills this field with a default template that combines static text with
variables for several pieces of the autogenerated name. If you modify the template,
be sure to include the ${num} variable to ensure that the name is unique.
For details on the rules for defining a template, see the contextual help that
appears when you hover over the field.
If you choose Compute Only as the PowerFlex Role for the node component, the
PowerFlex settings do not include the Protection Domain Name Template field.
Acceleration Pool Name Template Define a template for generating the acceleration pool name automatically.
PowerFlex Manager fills this field with a default template that combines static
text with variables for several pieces of the autogenerated name. If you modify the
template, be sure to include the ${num} variable to ensure that the name is unique.
For details on the rules for defining a template, see the contextual help that
appears when you hover over the field.
Storage Pool Name Provide a name for the storage pool. You can automatically generate the name
(recommended), select from a list of existing storage pools for the selected
protection domain, or specify a unique storage pool name. If you choose to
automatically generate a name, you are prompted to define a storage pool name
template.
Storage pools allow the generation of different storage tiers in PowerFlex. A
storage pool is a set of physical storage devices in a protection domain. Each
storage device belongs to one (and only one) storage pool.
The number of storage pools that are created at deployment time depends on the
number of disks available.
Granularity Set the granularity for compression by selecting Fine or Medium. The granularity
setting applies to the storage pool.
Storage Pool Disk Type Allows you to select the disk type - hard drive, SSD, or NVMe.
Storage Pool Name Template Define a template for generating the storage pool name automatically. PowerFlex
Manager fills this field with a default template that combines static text with
variables for several pieces of the autogenerated name. If you modify the template,
be sure to include the ${num} variable to ensure that the name is unique.
For details on the rules for defining a template, see the contextual help that
appears when you hover over the field.
If you choose Compute Only as the PowerFlex Role for the node component, the
PowerFlex settings do not include the Storage Pool Name Template field.
Fault Set Name Specify the fault set name. You can automatically generate the name
(recommended), select from a list of existing fault sets for the selected protection
domain, or specify a unique fault set name. If you choose to automatically generate
a name, you are prompted to define a fault set name template.
Fault Set Name Template If you choose to generate the fault set name automatically, PowerFlex Manager fills
this field with a default template that combines static text with variables for several
pieces of the autogenerated name. If you modify the template, be sure to include
the ${num} variable to ensure that the name is unique.
For details on the rules for defining a template, see the contextual help that
appears when you hover over the field.
Number of Fault Sets Specifies the number of fault sets to create at deployment time. PowerFlex requires
a minimum of three fault sets for a protection domain, with at least two nodes in
each fault set.
For a new protection domain name, you must specify a number between 3 and 16.
Each fault set acts as a single fault unit. If a fault set goes down, all nodes within
the fault set go down as well.
For an existing protection domain name, you must specify a number between 1 and
16. This allows you to add more fault sets to an existing protection domain. If the
selected protection domain already has 3 fault sets, for example, you can specify a
number as low as 1, to include an additional fault set for this protection domain.
PowerFlex Manager ensures that each new deployment has only one MDM role for
each fault set. For example, if you deploy 3 fault sets, one has the primary MDM,
another has the secondary MDM, and the third has the tiebreaker. You can use
the Reconfigure MDM Roles wizard to change the MDM role assignments after
deployment.
Machine Group Name Allows you to provide a name for the machine group. The Machine Group Name is
only available if you automatically generate the protection domain name or specify a
new protection domain name explicitly. You can automatically generate the machine group
name or specify a new machine group name explicitly.
New Machine Group Name Specify a new name for the machine group.
Number of Protection Domains Determines the number of protection domains that are used for PowerFlex file
configuration data. Control volumes will be created automatically for every node
in the PowerFlex file cluster, and spread across the number of protection domains
specified for improved cluster resiliency. To add data volumes, you need to use the
tools provided on the File tab.
You can have between one and four protection domains.
Protection Domain <n> Includes a separate section for each protection domain used in the template.
Storage Pool <n> Includes a separate section for each storage pool used in the template.
Related information
Add cluster component settings to a template
VM settings
The table describes VM settings for the CloudLink Center and the PowerFlex gateway.
Table 66. Supported configurations for full and partial network automation
Template components Full network automation Partial network automation
Related information
Adding an existing resource group
Build and publish a template
Resources
This section provides reference information for the Resources page.
Related information
Deploying and provisioning
Managing components
Monitoring system health
Warning Indicates that the resource is in a state that requires corrective action, but does not
affect overall system health. For example, the firmware running on the resource is not
at the required level or not compliant.
For supported switches, a warning health status might be because the SNMP
community string is invalid or not set correctly. In this case, PowerFlex Manager is
unable to perform health monitoring for the switch.
NOTE: If the resource group has VMware ESXi and is set to maintenance mode, a
message is displayed in the View Details pane.
Critical Indicates that an issue requiring immediate attention exists in one of the following
hardware or software components in the device:
● Battery
● CPU
● Fans
● Power supply
● Storage devices
● Licensing
For supported switches, a critical health status might indicate that the power supply is
not working properly or a CPU has overheated.
NOTE: If the resource group is in a powered-off state, a message is displayed in the
View Details pane.
Related information
Discover a resource
Removing resources
Exporting a compliance report for all resources
● Discovering a resource
● Determining resource details, including the firmware version
● Applying a template to the resource
● Updating firmware
● Removing a resource from the PowerFlex Manager inventory
PowerFlex Manager keeps track of which resources it is managing. These operational states display on the Resources page, in
the Managed State column of the All Resources tab.
Managed Indicates that PowerFlex Manager manages the firmware on that node, and the node
can be used for deployments.
Reserved Indicates that PowerFlex Manager only manages the firmware on that particular
node, but that node cannot be used for deployments. You can assign a host to the
reserved state only if the host has been discovered, but is not part of a resource
group.
Compliance status
PowerFlex Manager assigns one of the following firmware statuses to the resources:
Update Required The firmware running on the resource is earlier than the minimum firmware version recommended in the default compliance version. Indicates that a firmware update is required.
Discovery overview
You can discover new resources or existing resources that are already configured within your environment. After discovery, you
can deploy resource groups on these resources from a template. Only administrator-level users can discover resources.
By default, the operational state for all discovered nodes is Unmanaged. If you want to perform firmware updates or
deployments on a discovered node, select the node and change the operational state to Managed.
If you have not yet uploaded a license, PowerFlex Manager is configured for monitoring and alerting only. In this case, all the
resources are restricted to the Unmanaged resource state, and you cannot change the state to Managed or Reserved.
For some resources such as nodes, the default credentials are prepopulated in PowerFlex Manager. If the credentials are
changed from the defaults, add the credential to PowerFlex Manager with the new login information.
Node discovery
PowerFlex Manager supports PowerFlex node discovery and allows you to onboard nodes by configuring the initial management
IP address and iDRAC credentials. To perform initial discovery and configuration, verify that the management IP address is set
on the node and that PowerFlex Manager can access the IP address through the network. When configuring IP addresses on the
nodes, verify that PowerFlex Manager can reach every final IP address in the range used for hardware management, so that
discovery of these nodes can complete.
PowerFlex Manager also allows you to use name-based searches to discover a range of nodes that were assigned IP addresses
by DHCP to iDRAC. You can search for a range of DNS hostnames or a single hostname within the Discovery Wizard. After
you perform a name-based discovery, PowerFlex Manager operations to the iDRAC continue to use name-based IP resolution,
since DHCP may assign alternate addresses.
Switch discovery
If you attempt to discover a Cisco switch with terminal color configured, the discovery fails. To discover the switch successfully,
disable the terminal color option by running configure no terminal color persist.
VM manager discovery
A VMware vCenter is discovered as a VM manager. PowerFlex Manager users with the administrator role can discover a
vCenter in PowerFlex Manager. A vCenter read-only user can discover a vCenter in PowerFlex Manager only after the following
requirements are met:
● The vCenter user who is specified in the vCenter credential is granted the
VirtualMachine.Provisioning.ReadCustSpecs and StorageProfile.View privileges.
● The permission containing these privileges is granted to that user on the root vCenter object and the Propagate to
children property is set to True.
PowerFlex Manager allows you to deploy new hyperconverged and compute-only resource groups and add existing resource
groups to a vCenter that has an Enhanced Linked Mode (ELM) configuration. ELM connects multiple vCenter servers, allowing you to view and manage the inventories of all linked vCenter servers together.
Node pools
In PowerFlex Manager, a node pool is made up of nodes that are grouped for specific use cases such as business units or
workload purposes. An administrator can specify which users can access these node pools.
Standard users can view details only for node pools for which they have permission.
From the Node Pools tab, you can view existing node pools.
Users with an administrator role can create, edit, or delete node pools.
Click a node pool from the list to view detailed information about the following tabs:
● Nodes: Displays the number of nodes that are associated with the node pool.
● Users: Displays the number of users with access rights to the node pool.
Configuration checks
When you initiate a resource group action, PowerFlex Manager performs critical configuration checks to ensure that the system
is healthy before proceeding. A configuration check failure may result in the resource group action being blocked. You can
export a PDF report at any time that shows detailed system health and configuration data. When you generate the report,
PowerFlex Manager initiates the full set of configuration checks and reports the results.
Related information
Exporting a configuration report for all resources and resource groups
Settings
This section provides reference information for the Settings page.
Backup details
PowerFlex Manager backup files include the following information:
● Activity logs
● Credentials
● Deployments
● Resource inventory and status
● Events
● Initial setup
● IP addresses
● Jobs
● Licensing
● Networks
● Templates
● Users and roles
● Resource module configuration files
● Performance metrics
Network types
You can manage various network types in PowerFlex Manager.
● General Purpose LAN—Used to access network resources for basic networking activities.
● Hypervisor Management—Used to identify the management network for a hypervisor or operating system that is deployed
on a node.
Example output:
NAME STATUS ROLES AGE VERSION
node1 Ready control-plane,etcd,master 24h v1.25.3+rke2r1
node2 Ready control-plane,etcd,master 24h v1.25.3+rke2r1
node3 Ready control-plane,etcd,master 24h v1.25.3+rke2r1
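The listing above is the node status view returned by kubectl. If you need to reproduce it, the following command, run on any PowerFlex management platform node, returns the same columns (this assumes the kubectl binary or alias is already available in your shell):
kubectl get nodes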
Related information
Perform co-resident PowerFlex controller node maintenance
Related information
Preparing a PowerFlex management node for maintenance
7. Run the following command to discover the monitor port of the application:
kubectl get services monitor-app -n powerflex -o jsonpath="{.spec.ports[0].nodePort}{\"\n\"}"
8. To view the monitor application, open a web browser to http://<pfmp node IP>:<port_from_previous_step>.
9. Wait until the PowerFlex management platform status indicator light goes green before moving on to the next node.
Post-maintenance status
1. Check the PowerFlex node status to confirm that all the PowerFlex nodes are in the ready state:
kubectl get nodes
1. When shutting down or rebooting a node that is the primary MDM (manager), it is recommended that you manually switch MDM
ownership to a different node:
a. From the PowerFlex CLI (SCLI), run:
scli --query_cluster
b. If the IP addresses of the node are included in the --query_cluster output, the faulty node has a role of either MDM
or tiebreaker, in addition to its SDS role.
If the IP address of the node is listed under the primary MDM role, a switch-over action is required.
c. Switch MDM ownership to a different node:
The node remains in the cluster. The cluster will be in degraded mode after the node is powered off, until the faulty
component is fixed or the patch operation completes, and the node is powered back on.
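The following is a minimal sketch of the switch-over command; it assumes that you identify the new primary MDM by its ID from the scli --query_cluster output (the exact option names can vary by PowerFlex version, so verify them with scli --help):
scli --switch_mdm_ownership --new_master_mdm_id <secondary MDM ID>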
d. Verify that the cluster status shows that the node is not the primary MDM anymore:
scli --query_cluster
Output similar to the following should appear, with the relevant node configuration and IP addresses for your deployment:
Cluster:
Mode: 5_node, State: Normal, Active: 5/5, Replicas: 3/3
Virtual IP Addresses: 9.20.10.100, 9.20.110.100
Primary MDM:
ID: 0x775afb2a65ef1f02
IP Addresses: 9.20.10.104, 9.20.110.104, Management IP Addresses:
10.136.215.239, Port: 9011, Virtual IP interfaces: sio_d_1, sio_d_2
7. Once rke2-server is running, make sure all PowerFlex management platform nodes are in a ready state by running kubectl get nodes.
If a message is displayed, wait a few minutes and verify again. After the nodes are indicated as ready, perform the next step.
8. Restore the PowerFlex Manager database.
a. Run the following command to set up the alias and default context:
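The exact command depends on your installation. A minimal sketch, assuming the default RKE2 paths for the kubectl binary and kubeconfig (adjust the paths if your deployment differs):
alias kubectl='/var/lib/rancher/rke2/bin/kubectl --kubeconfig=/etc/rancher/rke2/rke2.yaml'
kubectl config use-context default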
9. Run the following command to get the service port for the PowerFlex management platform monitor utility (if you do not
have it already):
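If you no longer have the port, the same service query shown earlier in this chapter returns the node port of the monitor application:
kubectl get services monitor-app -n powerflex -o jsonpath="{.spec.ports[0].nodePort}{\"\n\"}"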
10. Access the PowerFlex management platform monitor utility by pointing a browser to any management IP of a PowerFlex
management platform node with the port obtained in step 9. For example, http://<node IP>:<port>/.
NOTE: The IP cannot be the ingress or any other service IP within the cluster.
11. In the browser, click the PowerFlex management platform status in the upper-right. Wait for all entries to turn green.
12. In case the status remains red or the PowerFlex Manager user interface is not accessible after 15 to 20 minutes, contact Dell
Technical Support for further assistance.
a. Run the following command to set up an alias for kubectl and a default context view:
NOTE: The script will not run on a primary MDM. A switch-over MDM command will be run prior to running the
script on a primary MDM. The script will not be run in parallel on multiple MDMs or tiebreakers.
The following are example commands that are used during the run script on the host process:
1. Obtain an access token from the PowerFlex Manager instance. The easiest method is to create a shell script that can be
sourced to add the proper variables to the user environment.
b. Parse out the access token which is used to call the API and is valid for 5 minutes by default.
c. Parse out the refresh token which can be used to get a new JWT token if the access token has expired. It is valid for 30
minutes by default.
NOTE: The expiration time for access token is five minutes. If required, the above file can be sourced to refresh all
variables.
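As a rough illustration only, the token retrieval and parsing might look like the following shell sketch; the endpoint path and response field names here are assumptions, so confirm them against the PowerFlex Manager API documentation for your release:
# Log in and capture the JSON response (endpoint and payload format are assumptions)
RESPONSE=$(curl -k -s -X POST "https://<PowerFlex Manager IP>/rest/auth/login" \
  -H "Content-Type: application/json" \
  -d '{"username":"<user>","password":"<password>"}')
# Parse out the access token (valid for 5 minutes by default)
export ACCESS_TOKEN=$(echo "$RESPONSE" | jq -r '.access_token')
# Parse out the refresh token (valid for 30 minutes by default)
export REFRESH_TOKEN=$(echo "$RESPONSE" | jq -r '.refresh_token')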
2. Get the JSON of a system configuration, which will be the payload of the patch command (you need to manually change
liaPassword and mdmPassword from null to a string value).
a. Create and save a JSON file like the following, replacing MDM addresses, MDM user, and MDM password with the
appropriate values.
{
"mdmIps":["<MDM IP-1>","<MDM-IP2>"],
"mdmUser":"<mdm user>",
"mdmPassword":"<mdm password>”,
"securityConfiguration":
{
"allowNonSecureCommunicationWithMdm":"true",
"allowNonSecureCommunicationWithLia":"true",
"disableNonMgmtComponentsAuth":"false"
}
}
b. Insert the output of this command (with fixed passwords) into the config.json file:
NOTE: The cookiefile stores session cookies which keep the command flow directed to the same gateway pod.
or
{
"phaseStatus": "running",
"phase": "execute",
"numberOfRunningCommands": 1,
"numberOfPendingCommands": 1,
"numberOfCompletedCommands": 35,
"numberOfAbortedCommands": 0,
"numberOfFailedCommands": 0,
"failedCommands": []
}
or
{
"phaseStatus": "completed",
"phase": "validate",
"numberOfRunningCommands": 0,
"numberOfPendingCommands": 0,
"numberOfCompletedCommands": 2,
"numberOfAbortedCommands": 0,
"numberOfFailedCommands": 0,
"failedCommands": []
}
Look for:
{
"phaseStatus": "completed",
"phase": "execute",
"numberOfRunningCommands": 0,
"numberOfPendingCommands": 0,
"numberOfCompletedCommands": 37,
"numberOfAbortedCommands": 0,
"numberOfFailedCommands": 0,
"failedCommands": []
}
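The status documents above are returned by the queryPhaseState query that is shown later in this chapter. A hedged sketch of polling it with curl, reusing the session cookie file mentioned in the note above (the gateway address placeholder and authentication details are assumptions for illustration):
curl -k -s --cookie cookiefile "https://<gateway address>/im/types/ProcessPhase/actions/queryPhaseState"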
Logs
Gateway pod log locations:
● /usr/local/tomcat/logs/scaleio.log
● /usr/local/tomcat/logs/scaleio-trace.log
LIA log location: /opt/emc/scaleio/lia/logs/trc.x
NOTE: To keep the script on the node when troubleshooting or testing, use the following special switch:
1. Edit the file /usr/local/tomcat/webapps/ROOT/WEB-INF/classes/gatewayInternal.properties
2. Find the field "ospatching.delete.scripts=false"
3. Change the value to true for troubleshooting (the default is false)
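One way to make that change non-interactively from inside the gateway pod is a simple in-place substitution (a sketch; back up the file first):
sed -i 's/ospatching.delete.scripts=false/ospatching.delete.scripts=true/' \
  /usr/local/tomcat/webapps/ROOT/WEB-INF/classes/gatewayInternal.properties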
where the value of the PowerFlex component is mdm, sds, sdr, sdt, or lia
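On Linux, the collection script path typically follows the same per-component directory layout as the log locations above and the Windows example below; the exact path is an assumption, so verify it on your node:
/opt/emc/scaleio/<PowerFlex component>/diag/get_info.sh -f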
Windows
"C:\Program Files\EMC\scaleio\sdc\diag\get_info.bat" -f
The get_info syntax is explained fully in Collecting debug information using get_info.
● If the selected node is the primary MDM, use the flags -u <MDMuser> -p <MDMpassword>, instead of -f.
● If the selected node contains more than one PowerFlex component, running any script will gather logs for all components
on that node.
When the log collection process is complete, an archive file (either TGZ or ZIP) containing the logs of all PowerFlex
components in the node, is created in a temporary directory. By default, the directory is /tmp/scaleio-getinfo on
Linux hosts or C:\Windows\Temp\ScaleIO-getinfo on Windows hosts.
3. Verify that output similar to the following is returned, which shows that the process of log collection was completed
successfully:
bundle available at '/tmp/scaleio-getinfo/getInfoDump.tgz'
NOTE: The script can generate numerous lines of output. Therefore, look for this particular line in the output.
4. Retrieve the log file.
get_info.sh [OPTIONS]
Optional parameters:
-a, --all
Collect all data
-A, --analyse-diag-coll
Analyze diagnostic data collector (diag coll) data
-b[COMPONENTS], --collect-cores[=COMPONENTS]
Collect existing core dumps of the space-separated list of user-land components, COMPONENTS
(default: all user-land components)
For example, -b'mdm sds' (no space between option name and COMPONENTS)
For example, --collect-cores='mdm sds' (separate option name and COMPONENTS with a
single equal sign, "=")
-d OUT_DIR, --output-dir=OUT_DIR
Store collected bundle under directory OUT_DIR (default: <WORK_DIR>/scaleio-getinfo, see --work-
dir for the value of <WORK_DIR>)
-f, --skip-mdm-login
Skip query of PowerFlex login credentials
-k NUM, --max-cores=NUM
Collect up to NUM core files from each component (default: all core files)
Implies --collect-cores
-l, --light
Generate light bundle (not recommended)
--ldap-authentication
Log in to PowerFlex using LDAP-based authentication
-m NUM, --max-traces=NUM
Collect up to NUM PowerFlex trace files from each component (default: all files)
--management-system-ip=ADDRESS
Connect to SSO or management at ADDRESS for PowerFlex login (default: scli default)
--mdm-port=PORT
Connect to MDM using PORT for SCLI commands (default: scli default)
-n, --use-nonsecure-communication
Connect to the MDM in non-secure mode
-N, --skip-space-check
Skip free space verification
--overwrite-output-file
Overwrite output file if it already exists
-p PASSWORD, --password=PASSWORD
Use PASSWORD for PowerFlex login (default: scli default)
--p12-password=PASSWORD
Encrypt PowerFlex login PKCS#12 file using PASSWORD (default: scli default)
--p12-path=FILE
Store PowerFlex login PKCS#12 file as FILE (default: scli default)
-q, --quiet, --silent
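Putting a few of these options together, a typical invocation might look like the following sketch (option behavior as described above; adjust the output directory to a location with enough free space):
./get_info.sh --all --output-dir=/tmp/scaleio-logs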
2. On one of the cluster member nodes, execute the data collection utility as a user with superuser permissions, such as user
root.
In the following example, the utility's executable exists under /root.
# /root/pfmp_support
estimating required space
cleaning up temporary directories
collecting kubernetes data
collecting shared kubernetes data
collecting server data
collecting general hardware data
collecting network data
collecting storage data
preparing files for collection
generating bundle
cleaning up temporary directories
bundle available at '/tmp/powerflex-pfmpsupport/pfmpSupport.tgz'
# /root/pfmp_support --skip-kubernetes-shared
estimating required space
cleaning up temporary directories
collecting kubernetes data
collecting server data
collecting general hardware data
collecting network data
collecting storage data
preparing files for collection
generating bundle
cleaning up temporary directories
bundle available at '/tmp/powerflex-pfmpsupport/pfmpSupport.tgz'
4. From all cluster member nodes, provide the resulting support bundle files to Dell Technologies Support.
By default, bundle files are named /tmp/powerflex-pfmpsupport/pfmpSupport.tgz.
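One way to collect the bundles is to copy them off each cluster member node with scp, for example (a sketch; substitute each node's management IP address):
scp root@<node management IP>:/tmp/powerflex-pfmpsupport/pfmpSupport.tgz .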
# /tmp/PFMP2-4.0.0-161/PFMP_Installer/scripts/pfmp_support
estimating required space
cleaning up temporary directories
collecting kubernetes data
collecting shared kubernetes data
collecting server data
collecting general hardware data
collecting network data
collecting storage data
preparing files for collection
generating bundle
cleaning up temporary directories
bundle available at '/tmp/powerflex-pfmpsupport/pfmpSupport.tgz'
Option Description
Send to Configured SupportAssist If you are using SupportAssist, Send to Configured SupportAssist is selected by default. Leave this default setting. If SupportAssist is not configured, this option is disabled.
Download to Share Provide a path using the following format with credentials: CIFS:\\1.1.1.1\uploadDirectory. Enter a username and password, and click Test Connection to verify the connection to the CIFS share before generating the bundle.
Download Locally Download the troubleshooting bundle to a local file: \\IP Address\Any folder
7. Click Generate.
Sometimes, the troubleshooting bundle does not include log information for all the nodes. The log collection may appear to
succeed, but the log for one or more of the nodes may be missing. You may see an error message in the scaleio-trace.log file
that says Could not run get_info script. If you see this message, you may need to generate the troubleshooting
bundle again to include information for all the logs.
Save the output of the request. This output is a JSON representation of the system configuration.
3. Add the M&O login information to the JSON. The get_info script requires the login information to perform certain queries. To
provide this information, add the following key-value pairs to the JSON:
mnoUser : "<mno_username>"
mnoPassword : "<mno_password>"
mnoIp : "<mno_ip>"
For example:
{
"mnoUser" : "<mno_username>",
"mnoPassword" : "<mno_password>",
"mnoIp" : "<mno_ip>",
"snmpIp" : null
... (rest of the json)
}
5. To monitor the log collection process, run the following API command:
GET https://<mno_ip>/im/types/ProcessPhase/actions/queryPhaseState
Monitor the phaseStatus value and wait until the operation is complete.
For example:
Output example:
{"phaseStatus":"completed","phase":"query","numberOfRunningCommands":0,"numberOfPendin
gCommands":0,"numberOfCompletedCommands":6,"numberOfAbortedCommands":0,"numberOfFailed
Commands":0,"failedCommand
In this example, the logs are downloaded to the current working directory under the name "get_info.zip".
8. To clear the operation and move to idle, run the following API commands:
POST https://<mno_ip>/im/types/Command/instances/actions/clear
https://<mno_ip>/im/types/ProcessPhase/actions/moveToIdlePhase
After the log collection operation is complete, mark the operation as completed to allow the other jobs to run.
For example:
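A hedged sketch of issuing these two calls with curl, assuming both endpoints accept POST requests and reusing the session cookie file described earlier:
curl -k -s --cookie cookiefile -X POST "https://<mno_ip>/im/types/Command/instances/actions/clear"
curl -k -s --cookie cookiefile -X POST "https://<mno_ip>/im/types/ProcessPhase/actions/moveToIdlePhase"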
Example:
Linux
/opt/MegaRAID/perccli/perccli64 /call show all > <file_name>
Example:
Linux
/opt/MegaRAID/perccli/perccli64 /call show events file=<file_name>
Example:
You can retrieve the RAIDevents.txt file from your local drive.
4. Retrieve the Termlog log file:
Example:
Linux
/opt/MegaRAID/perccli/perccli64 /call show termlog file=<file_name>
Example:
You can retrieve the RAIDtermlog.txt file from your local drive.
installer-30:/opt/dell/pfmp/PFMP_Installer/scripts # ./pfmp_support
estimating required space
cleaning up temporary directories
collecting kubernetes data
collecting shared kubernetes data
collecting server data
collecting general hardware data
collecting network data
collecting storage data
preparing files for collection
generating bundle
cleaning up temporary directories
bundle available at '/tmp/powerflex-pfmpsupport/pfmpSupport.tgz'
g. Select the destination that is created in the previous step and click Submit.
or
● NVMe-based SSDs:
a. Select the devices for encryption.
For example, /dev/nvmeXn1, where X can be 0 through 24.
svm status
svm status
3. Run the following command so that CloudLink will control the SED device:
For example:
4. Run svm status again to verify that the device is now managed:
The output should be similar to the following:
Volumes:
/ unencrypted
swap unencrypted
Devices:
/dev/sdf managed (sed SZ: 1788G MOD: KPM6WRUG1T92 SPT:
Yes )
/dev/sdd managed (sed SZ: 1788G MOD: KPM6WRUG1T92 SPT:
Yes )
/dev/sdb managed (sed SZ: 3577G MOD: KPM6WVUG3T84 SPT:
Yes )
/dev/sde managed (sed SZ: 1788G MOD: KPM6WRUG1T92 SPT:
Yes )
/dev/sdc managed (sed SZ: 3577G MOD: KPM6WVUG3T84 SPT:
Yes )
/dev/sdb unencrypted (sds SN:94917674 )
/dev/sdc unencrypted (sds SN:94917675 )
/dev/sdd unencrypted (sds SN:94917676 )
/dev/sde unencrypted (sds SN:94917677 )
/dev/sdf unencrypted (sds SN:94917678 )
/dev/sdg encrypted (sds SN:94917679 /dev/mapper/
svm_sdg)
/dev/sdh encrypted (sds SN:94917680 /dev/mapper/
svm_sdh)
/dev/sdi encrypted (sds SN:94917681 /dev/mapper/
svm_sdi)
/dev/sdj encrypted (sds SN:94917682 /dev/mapper/
svm_sdj)
/dev/sdk encrypted (sds SN:94917683 /dev/mapper/
svm_sdk)
NOTE: The status of the SED devices is displayed in the output as managed, but unencrypted.
or
● PowerFlex Manager:
For more information, see Remove devices.
2. Remove the encryption through CloudLink Center GUI or using the svm erase command:
NOTE: Removing (erasing) a device from CloudLink destroys all data on the device.
For example:
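A minimal sketch of the erase command, assuming the device is identified by its block device path (confirm the exact syntax for your CloudLink agent version before running it, because the erase destroys all data on the device):
svm erase /dev/sdg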
or
Persistent discovery
The persistent discovery controller ensures that the host remains connected to the discovery service after discovery. If at
any point there is a change in the discovery information that is provided to the host, the discovery controller returns an
asynchronous event notification (AEN) and the host requests the updated Discovery log page.
Here are some examples of changes in discovery information:
● A new volume is mapped to the host from a new protection domain.
● A new storage data target (SDT) is added to the system.
● Load balancing wants to move the host connection from one storage data target to another.
When configuring NVMe hosts, ensure that every host is connected for discovery at most once per subnet (data IP address
subnet). To use this functionality, ensure that the host operating system supports the Persistent Discovery Controller, and
that the Persistent Discovery flag is set in the discovery connection. (See the NVMe over TCP host configuration for the
respective operating system.)
If you have not defined the system data network, by default, a host can reach all system data networks. Ignoring the data
networks/subnets may result in unequal load between the host initiator ports and nonoptimized I/O performance.
Limits
● One event pool can have a maximum of five CEPA server addresses.
● One event publisher can have a maximum of three event pools.
● One NAS server can have only one event publisher associated with it.
Dell Technologies recommends using VMware Storage vMotion to migrate data from a volume that is presented by SDC to one
presented by NVMe over TCP.
This migration is intended for VMDK (noncluster) customers who want to convert their SDC to NVMe over TCP.
Linux environments, VMware ESXi clusters, and RDMs are not included in this section.
Requirements
See the following VMware product documentation links for information about Storage vMotion requirements and limitations:
Storage vMotion requirements and limitations
See the following VMware product documentation links for information about requirements and limitations of VMware NVMe
Storage: Requirements and limitations of VMware NVMe storage
Requirements for VMware NVMe storage
See the following VMware KB article for more details about storage migration (Storage vMotion), with the virtual machine
powered on: Migrating virtual machines with raw device mappings
Ensure that you satisfy these requirements before you begin the migration:
● Ensure the VMware ESXi version is 7.0u3 or later.
● Ensure the PowerFlex version is 4.5.x.
Workflow
The following steps summarize the workflow that you need to follow to migrate from SDC to NVMe over TCP using Storage
vMotion:
1. Create and map a new volume, of a size equal to or greater than the current VMFS datastore, via NVMe over TCP to the same
host.
2. Scan for the newly mapped volume.
3. Create a new datastore on the NVMe over TCP volume.
4. Perform the standard data migration using Storage vMotion, which is a non-disruptive process.
6. Click COPY and place the host NQN in the copy buffer.
7. Click CANCEL.
8. Repeat the steps for all the hosts.
Create a volume
Use this procedure to create a volume.
1. From PowerFlex Manager, click Block > Volumes.
2. Click +Create Volume.
3. Enter the number of volumes and the name of the volumes.
4. Select Thick or Thin. Thin is the default.
5. Enter the required volume size in GB, specifying the size in 8 GB increments.
6. Select the NVMe storage pool and click Create.
In the first example above, 6x is the first NVMe over TCP software adapter and 6y is the second NVMe over TCP software
adapter. In the second example above, 192.168.x.x is the first data IP address and 192.168.x.y is the second data IP address,
depending on which VMNIC is enabled for which software NVMe over TCP adapter.
3. Connect to the PowerFlex system by appending "-c" to the discovery query command.
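As an illustration only, on ESXi the discovery and connection step might look like the following sketch; the adapter name and IP address are placeholders, and the exact esxcli options should be confirmed for your ESXi release:
esxcli nvme fabrics discover -a vmhba6x -i 192.168.x.x -c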
Option Description
Scan for New Storage Devices Rescan all adapters to discover new storage devices. If new devices are discovered, they appear in the device list.
Scan for New VMFS Volumes Rescan all storage devices to discover new datastores that have been added since the last scan. Any new datastores appear in the datastore list.