PowerFlex Software Admin Guide 4.5.x
Administration Guide
March 2024
Rev. 3.0
Notes, cautions, and warnings
NOTE: A NOTE indicates important information that helps you make better use of your product.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid
the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
© 2023 Dell Inc. or its subsidiaries. All rights reserved. Dell Technologies, Dell, and other trademarks are trademarks of Dell Inc. or its
subsidiaries. Other trademarks may be trademarks of their respective owners.
Contents
Chapter 1: Introduction
Storage pools
Add storage pools
Configure storage pool settings
Configure RMcache for the storage pool
Using the background device scanner
Set media type for storage pool
Configuring I/O priorities and bandwidth use
Acceleration pools
Add an acceleration pool
Rename an acceleration pool
Remove an acceleration pool
Devices
Activate devices
Clear device errors
Remove devices
Rename devices
Set media type
Set device capacity limits
Modify device LED settings
Volumes
Add volumes
Delete volumes
Overwrite volume content
Create volume snapshots
Set volume bandwidth and IOPS limits
Increase volume size
Map volumes
Unmap volumes
Remove a snapshot consistency group
Migrating vTrees
NVMe targets
Add an NVMe target
Modify an NVMe target
Remove an NVMe target
Hosts
Add an NVMe host
Map hosts
Unmap hosts
Remove hosts
Configure or modify approved host IP addresses
Approve SDCs
Rename hosts
Modify an SDC performance profile
NAS server networks
NAS server naming services
NAS server sharing protocols
NAS server protection and events
NAS server settings
NAS server security
About file system storage
Create a file system for NFS exports
Create a file system for SMB shares
Change file system settings
Create an SMB share
Create an NFS export
Create a global namespace
More about file systems
File system quotas
File protection
Create a protection policy
Create snapshot rules
Create a snapshot
Assign a protection policy to a file system
Unassign a protection policy
Modify a protection policy
Delete a protection policy
Modify a snapshot rule
Delete a snapshot rule
Refresh a file system using snapshot
Restore a file system from a snapshot
Unassign a volume from a snapshot policy
Remote protection
Extract and upload certificates
Journal capacity
Add a peer system
Restart the replication cluster
SDRs
Replication consistency group
Repositories
Compliance versions
OS images
Compatibility management
Getting started
Networking
Networks
Adding an IP verification port number
System data networks
Events and alerts
Configuring an external source
Modifying an external source
Configuring a destination
Modifying a destination
Add a notification policy
Modify a notification policy
Delete a notification policy
License management
Uploading a PowerFlex license
Uploading other software licenses
Security
Adding SSL trusted certificates
Adding appliance SSL certificates
Adding resource credentials
Serviceability
Generating a troubleshooting bundle
Back up and restore
Backup and restore using VM snapshots
Software upgrade
Upgrading PowerFlex Manager using Dell SupportAssist
Upgrading PowerFlex Manager from a local repository path
Editing the upgrade settings
Maintenance activities
Shutdown or restart a node gracefully
Running scripts on hosts via PowerFlex Manager
Overview of running scripts on hosts
Run script on host
Chapter 22: Migrating to NVMe/TCP on ESXi
Prepare the VMware ESXi node for mapping NVMe/TCP volumes
Enable the NVMe/TCP VMkernel ports
Add NVMe/TCP software storage adapter
Copy the host NQN
Add a host to PowerFlex
Create a volume
Map a volume to the host
Discover and connect the NVMe/TCP Target
Perform a rescan of the storage
Create a VMFS datastore on the NVMe/TCP volume
Migrate the data with Storage vMotion
1
Introduction
This document provides procedures for using Dell PowerFlex Manager to administer your Dell PowerFlex system.
It provides the following information:
● Initial configuration and setup
● Performing common tasks
● Displaying system information at a glance
● Managing block storage
● Managing file storage
● Protecting your storage environment
● Performing lifecycle operations for a resource group
● Managing resources
● Monitoring events and alerts
● Configuring system settings
● PowerFlex Manager user interface reference
● Additional administration activities
● Retrieving logs for PowerFlex servers
● Retrieving logs for PowerFlex components
● Additional PowerFlex logs
● Managing storage devices using CloudLink
The target audience for this document includes system administrators responsible for managing PowerFlex systems.
For additional PowerFlex software documentation, go to PowerFlex software technical documentation.
2
Revision history
Table 1. Revisions
Date Document revision Description of changes
March 2024 3.0 Updates for release 4.5.2
November 2023 2.0 Updates for release 4.5.1
September 2023 1.0 Initial release
3
Initial configuration and setup
This section includes tasks you need to perform when you first begin using PowerFlex Manager.
Initial configuration
The first time you log in to PowerFlex Manager, an Initial Configuration Wizard prompts you to configure the basic
settings that are required to start using PowerFlex Manager.
Before you begin, have the following information available:
● SupportAssist configuration details
SupportAssist refers to Secure Connect Gateway, which is used for call home functionality and remote connectivity.
● Information about whether you intend to use a Release Certification Matrix (RCM) or Intelligent Catalog (IC)
● Information about the type of installation you want to perform, including details about your existing PowerFlex instance, if
you intend to import from another PowerFlex instance
To configure the basic settings:
1. On the Welcome page, read the instructions and click Next.
2. On the SupportAssist page, optionally enable SupportAssist and specify SupportAssist connection settings, and click Next.
3. On the Installation Type page, specify whether you want to deploy a new instance of PowerFlex or import an existing
instance, and click Next.
4. On the Summary page, verify all settings for SupportAssist and installation type. Click Finish to complete the initial
configuration.
After completing the Initial Configuration Wizard, you can get started using PowerFlex Manager from the Getting Started
page.
Enabling SupportAssist
SupportAssist is a secure support technology for the data center. SupportAssist refers to Secure Connect Gateway, which
is used for call home functionality and remote connectivity. You can enable SupportAssist, as part of the initial configuration
wizard. Alternatively, you can enable it later by adding it as a destination to a notification policy in Events and Alerts.
PowerFlex Manager provides support through integration with the secure connect gateway ensuring better alignment with Dell
Technologies services initiatives to enhance the user experience. SupportAssist is the functionality that enables this connection.
When you enable SupportAssist, you can take advantage of the following benefits, depending on the service agreement on your
device:
● Automated issue detection - SupportAssist monitors your Dell Technologies devices and automatically detects hardware
issues, both proactively and predictively.
● Automated case creation - When an issue is detected, SupportAssist automatically opens a support case with Dell
Technologies Support.
● Automated diagnostic collection - SupportAssist automatically collects system state information from your devices and
uploads it securely to Dell Technologies. Dell Technologies Support uses this information to troubleshoot the issue.
● Proactive contact - A Dell Technologies Support agent contacts you about the support case and helps you resolve the issue.
The first time that you access the initial configuration wizard, the connection status displays as not configured.
Ensure you have the details about your SupportAssist configuration.
If you do not enable SupportAssist in the initial setup, you can add it later as a destination when adding a notification policy
through Settings > Events and Alerts > Notification Policies.
1. Click Enable SupportAssist.
2. On the Connection Type tab, select one of the two available connection options.
Related information
Monitoring events and alerts
License management
Add a notification policy
If you are importing an existing PowerFlex deployment that was not managed by PowerFlex Manager, make sure you have the IP
address, username, and password for the primary and secondary MDMs. If you are importing an existing PowerFlex deployment
that was managed by PowerFlex Manager, make sure you have the IP address, username, and password for the PowerFlex
Manager virtual appliance.
1. Click I want to deploy a new instance of PowerFlex if you do not have an existing PowerFlex deployment and would like
to bypass the import step.
2. Click I have a PowerFlex instance to import if you would like to import an existing PowerFlex instance that was not
managed by PowerFlex Manager.
Specify whether you are currently running PowerFlex 3.x or PowerFlex 4.x.
For a PowerFlex 3.x system, provide the following details about the existing PowerFlex instance:
● IP addresses for the primary and secondary MDMs (separated by a comma with no spaces)
● Admin username and password for the primary MDM
● Operating system username and password for the primary MDM
● LIA password
For a PowerFlex 4.x system, indicate whether the PowerFlex instance is used for Production Storage or a Management
Cluster. The Management Cluster use case is applicable for PowerFlex appliance and PowerFlex rack install types that have
a dedicated management cluster with PowerFlex as a shared storage for the datastore hosting the Management VMs.
Then, provide the following details about the existing PowerFlex instance:
● IP addresses for all nodes with a primary and secondary MDM
● System ID for the cluster
● LIA password
3. Click I have a PowerFlex instance managed by PowerFlex Manager to import if you would like to import an existing
PowerFlex instance directly from an existing PowerFlex Manager virtual appliance.
Provide the following details about the existing PowerFlex Manager virtual appliance:
● IP address or DNS name for the virtual appliance
● Username and password for the virtual appliance
4. Click Next to proceed.
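Before starting an import, it can be useful to confirm the MDM cluster details directly from the primary MDM. The following is a minimal sketch using the PowerFlex CLI (scli); it assumes CLI access to the primary MDM, and exact flag names can vary by release, so verify with scli --help:

# Log in to the MDM cluster (you are prompted for the password)
scli --login --username admin
# Display the cluster state, including the primary and secondary MDM IP addresses
scli --query_cluster
# Display general system information, including the system ID needed for a 4.x import
scli --query_all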
For a full PowerFlex Manager migration, the import process backs up and restores information from the old PowerFlex Manager
virtual appliance. The migration process for the full PowerFlex Manager workflow imports all resources, templates, and services
from a 3.8 instance of PowerFlex Manager. The migration also connects the legacy PowerFlex gateway to the MDM cluster,
which enables the Block tab in the user interface to function.
The migrated environment includes the PowerFlex gateway resource. The operating system hostname and asset/service tag are
set to powerflex.
For a software-only PowerFlex system, there is no PowerFlex Manager information available after the migration completes. The
migrated environment does not include resources, templates, and services.
If you did not migrate an existing PowerFlex environment, you can now deploy a new instance of PowerFlex.
After completing the migration wizard for a full PowerFlex Manager import, you must perform these steps:
1. On the Settings page, upload the compatibility matrix file and upload the latest repository catalog (RCM or IC).
2. On the Resources page, select the PowerFlex entry, and perform a nondisruptive update.
3. On the Resource Groups page, perform an RCM/IC upgrade on any migrated service that must be upgraded.
The migrated resource groups are initially non-compliant, because PowerFlex Manager is running a later RCM that includes
PowerFlex 4.x. These resource groups must be upgraded to the latest RCM before they can be expanded or managed with
automation operations.
CAUTION: Check the Alerts page before performing the upgrade. Look for major and critical alerts that are
related to PowerFlex Block and File to be sure the MDM cluster is healthy before proceeding.
4. Power off the old PowerFlex Manager VM, the old PowerFlex gateway VM, and the presentation server VM.
The upgrade of the cluster causes the old PowerFlex Manager virtual appliances to stop working.
5. After validating the upgrade, decommission the old instances of PowerFlex Manager, the PowerFlex gateway, and the
presentation server.
Do not delete the old instances until you have had a chance to review the initial configuration and confirm that the old
environment was migrated successfully.
After completing the migration wizard for a PowerFlex (software-only) import, you must perform these steps:
1. On the Settings page, upload the compatibility matrix file and upload the latest software-only catalog.
The software-only catalog is new in this release. This catalog only includes the components that are required for an upgrade
of PowerFlex.
2. On the Resources page, select the PowerFlex entry, and perform a nondisruptive update.
You do not need a resource group (service) to perform an upgrade of the PowerFlex environment. In addition, PowerFlex
Manager does not support Add Existing Resource Group operations for a software-only migration. If you want to be able
to perform any deployments, you need a new resource group. Therefore, you must create a new template (or clone a sample
template), and deploy a new resource group from the template.
Getting started
The Getting Started page guides you through the common configurations that are required to prepare a new PowerFlex
Manager environment. A green check mark on a step indicates that you have completed the step. Only super users have access
to the Getting Started page.
The following table describes each step:
Upload Compliance File: Provide compliance file location and authentication information for use within
PowerFlex Manager. The compliance file defines the specific hardware
components and software version combinations that are tested and certified
by Dell for hyperconverged infrastructure and other Dell products. This step
enables you to choose a default compliance version for compliance or add new
compliance versions.
You can also click Settings > Repositories > Compliance Versions.
NOTE: Before you make an RCM or IC the default compliance version, you
must first upload a suitable compatibility management file under Settings
> Repositories > Compatibility Management.
Define Networks: Enter detailed information about the available networks in the environment.
This information is used later during deployments that are based on templates
and resource groups. These deployments use the network information to
configure nodes and switches to have the right network connectivity.
PowerFlex Manager uses the defined networks in templates to specify the
networks or VLANs that are configured on nodes and switches for your
resource groups.
This step is enabled immediately after you perform an initial configuration for
PowerFlex Manager.
You can also click Settings > Networking > Networks.
If you plan to perform an advanced CSV-based deployment, this step can be
skipped.
Discover Resources: Grant PowerFlex Manager access to resources (nodes, switches, virtual
machine managers) in the environment by providing the management IP and
credential for the resources to be discovered.
This step is not enabled until you define your networks.
You can also click Resources > Discover Resources.
If you plan to perform an advanced CSV-based deployment, this step can be
skipped.
Manage Deployed Resources (Optional): Add an existing resource group for a cluster that is already deployed and
manage the resources within PowerFlex Manager.
This step is not enabled until you define your networks.
You can also click Lifecycle > Resource Groups > Add Existing Resource
Group.
If you plan to perform an advanced CSV-based deployment, this step can be
skipped.
Deploy Resources: Create a template with requirements that must be followed during a
deployment. Templates enable you to automate the process of configuring
and deploying infrastructure and workloads. For most environments, you can
clone one of the sample templates that are provided with PowerFlex Manager
and make modifications as needed. Choose the sample template that is most
appropriate for your environment.
For example, for a hyperconverged deployment, clone one of the
hyperconverged templates.
For a two-layer deployment, clone the compute-only templates. Then clone one
of the storage templates.
This step is not enabled until you define your networks.
You can also click Lifecycle > Templates.
If you plan to perform an advanced CSV-based deployment, this step can be
skipped.
If you would like to use the legacy PowerFlex Installer to provide an installation topology file, click Deploy With Installation
File. See the topic on deploying a PowerFlex cluster using a CSV topology file in the Dell PowerFlex 4.5.x Install and Upgrade
Guide for more information.
The topology report sent from PowerFlex Manager to Embedded System Enabler (ESE) must be updated to include the SWID
and installation ID. These IDs must be sent in all cases: rack, appliance, and software. If the data is missing, the
corresponding fields display empty or null values; the report file must still remain valid.
Related information
Networking
Discover a resource
Adding an existing resource group
Templates
License management
You can also create a new template. However, for most environments, you can
simply clone one of the sample templates that are provided with PowerFlex
Manager.
Deploy a new resource group: Click Lifecycle > Resource Groups. On the Resource Groups page, click Deploy
New Resource Group.
You can only deploy a resource group using a published template.
Related information
Repositories
Networking
Resources
Preparing a template for a resource group deployment
Deploying a resource configuration in a resource group
Import an existing block storage configuration: Click Lifecycle > Resource Groups and click +Add Existing Resource Group.
Be sure to upload a compliance file (if necessary), define the networks, and
discover resources before adding the existing resource group.
Managing block storage: On the menu bar, click Block, and choose the type of block storage components
you want to manage:
● Protection Domains
● Fault Sets
● SDSs
● Storage Pools
● Acceleration Pools
● Devices
● Volumes
● NVMe Targets
● Hosts
If you create new objects on the Block tab, you need to update the inventory
on the Resources page, and then click Update Resource Group Details on the
Lifecycle > Resource Groups page for any resource group that requires the
updates.
Related information
Repositories
Networking
Discover a resource
Clone a template
Deploying a resource configuration in a resource group
Managing block storage
For a file storage deployment, clone one of the following sample templates:
● PowerFlex File
● PowerFlex File - SW Only
6. Click Lifecycle > Resource Groups. On the Resource Groups page, click
Deploy New Resource Group.
For a PowerFlex file cluster, you must have a minimum of two nodes and a
maximum of 16 nodes.
For a PowerFlex file cluster, you must choose Use Compliance File Linux Image
for the OS Image and Compute Only for the PowerFlex Role. Also, select Enable
PowerFlex File.
The sample templates for file storage configuration pull in the PowerFlex
management and PowerFlex data networks from the associated PowerFlex
gateway. In addition to the PowerFlex management and PowerFlex data networks,
you need to include a NAS management network and at least one NAS data
network. One NAS data network is enough, but, for redundancy, two networks
are recommended.
NOTE: Check the network settings carefully, as they are different for standard
configurations and software-only configurations.
Import an existing file storage configuration: Click Lifecycle > Resource Groups and click +Add Existing Resource Group.
Upload a compliance file (if necessary), define the networks, and discover
resources before adding the existing resource group.
PowerFlex Manager does not support importing existing deployments for software-only NAS environments.
Manage file storage: On the menu bar, click File. Then, choose the type of file components you want to
manage:
● NAS Servers
● File Systems
● SMB Shares
● NFS Exports
● File Protection
If you create new objects on the File tab, you need to update the inventory on
the Resources page. Then, you need to click Update Resource Group Details on
the Lifecycle > Resource Groups page for any resource group that requires the
updates.
Managing components
Once PowerFlex Manager is configured, you can use it to manage the system.
The following table describes common tasks for managing system components and what steps to take in PowerFlex Manager to
initiate each task:
Perform node expansion:
1. Click Lifecycle > Resource Groups. On the Resource Groups page, select a resource group.
2. On the Resource Group Details tab, under Add Resources, click Add Nodes.
The procedure is the same for new resource groups and existing resource groups.
Remove a node:
1. Click Lifecycle > Resource Groups.
2. On the Resource Groups page, select a resource group.
3. On the Resource Group Details tab, under More Actions, click Remove Resource.
4. Select Delete Resource for the Resource removal type.
Enter service mode (applicable for PowerFlex appliance and PowerFlex rack only):
1. Click Lifecycle > Resource Groups.
2. On the Resource Groups page, select a resource group.
3. On the Resource Group Details tab, under More Actions, click Enter Service Mode.
Exit service mode (applicable for PowerFlex appliance and PowerFlex rack only):
1. Click Lifecycle > Resource Groups.
2. On the Resource Groups page, select a resource group.
3. On the Resource Group Details tab, under More Actions, click Exit Service Mode.
Replace a drive:
1. Click Lifecycle > Resource Groups.
2. On the Resource Groups page, select a resource group.
3. Under Physical Nodes, click Drive Replacement.
Reconfigure MDM roles:
1. Click Lifecycle > Resource Groups.
2. On the Resource Groups page, select a resource group.
3. On the Resource Group Details tab, click Reconfigure MDM Roles under More Actions.
You can also reconfigure MDM roles from the Resources page. Select a PowerFlex
gateway and click View Details. Then, click Reconfigure MDM Roles.
For information about which resource groups are healthy and in compliance, and
which are not, look at the Resource Groups section.
Monitor software and firmware compliance:
1. Click Lifecycle > Resource Groups.
2. On the Resource Groups page, select a resource group.
3. On the Resource Group Details page, click View Compliance Report.
Perform software and firmware remediation: From the compliance report, view the firmware or software components. Click
Update Resources to update non-compliant resources.
Generate a troubleshooting bundle:
1. Click Settings and click Serviceability.
2. Click Generate Troubleshooting Bundle.
You can also generate a troubleshooting bundle from the Resource Groups page:
1. Click Lifecycle > Resource Groups.
2. On the Resource Groups page, select a resource group.
3. On the Resource Group Details page, click Generate Troubleshooting Bundle.
Download a report that lists compliance details for all resources:
1. Click Resources.
2. Click Export Report and download a compliance report (PDF or CSV) or a configuration report (PDF).
View alerts: On the menu bar, click Monitoring > Alerts.
Related information
Lifecycle
Resources
If you do not choose the correct options when you remove the resource group,
you could tear down and destroy the resource group, or leave the servers in an
unmanaged state and be unable to add the existing resource group again.
Manually created volumes outside of PowerFlex Manager: Click Run Inventory on the Resources page to update the
inventory. Then, click Update Resource Group Details on the Lifecycle > Resource Groups page for
any resource group that requires the updated volumes.
Manually deleted volumes outside of PowerFlex Manager: Click Run Inventory on the Resources page to update the
inventory. Then, click Update Resource Group Details on the Lifecycle > Resource Groups page
for any resource group that requires updated information about deleted volumes.
PowerFlex Manager displays a message indicating that the volumes have been
removed from the resource group.
Manually added nodes outside of PowerFlex Manager: Click Run Inventory on the Resources page to update the
inventory. Then, click Update Resource Group Details on the Lifecycle > Resource Groups page for
any resource group that requires the updated nodes.
Manually removed nodes outside of PowerFlex Manager: Click Run Inventory on the Resources page to update the
inventory. Then, click Update Resource Group Details on the Resource Groups page for any resource
group that requires updated information about removed nodes. Then, manually
remove the resource on the Resources page by clicking the All Resources tab,
selecting the resource, and clicking Remove.
Renamed objects such as VMware ESXi host, volume, datastore, VDS, port group, data center, cluster, and so forth:
Click Run Inventory on the Resources page to update the inventory. Then, click
Update Resource Group Details on the Lifecycle > Resource Groups page for any resource group that requires the updates.
Note that vCLS datastore names are based on cluster names, so any change to a
cluster name renders the vCLS datastore unrecognizable. If you change the cluster
name manually for a deployment, the name of the datastore hosting the vCLS VMs needs to be updated accordingly.
Created new objects on the Block or File tab: Click Run Inventory on the Resources page to update the inventory.
Then, click Update Resource Group Details on the Lifecycle > Resource Groups page for
any resource group that requires the updates.
Related information
Lifecycle
Resource groups
Usable capacity
This section shows the total capacity, along with details about the physical, system, and free capacity.
Data savings
This section provides details about the overall savings and thin provisioning savings.
Resources/inventory
This section shows the number of resources in the current inventory:
● VM managers
● Nodes
● Switches
● Protection domains
● Storage pools
● Volumes
● Hosts
● File systems
● NAS servers
Resource groups
This section displays a graphical representation of the resource groups deployed based on status. The number next to each icon
indicates the number of resource groups in a particular state. The resource groups are categorized into the following states:
You can monitor node health by viewing the status of the resource group on the Resource Groups page.
If a resource group is in a yellow (or warning) state, it means that one or more nodes are in a warning or failed state. If a resource
group is in a red (or error) state, it indicates that the resource group has fewer than two nodes that are not in a failed state.
To view the status of a failed node component, hover the cursor on the image of the failed component in the resource group.
Alerts
This section lists the current alerts within the system, categorized by severity level:
● Critical
● Major
● Minor
Related information
Configuring block storage
Protection domains
A protection domain is a set of SDSs configured in separate logical groups. It may also contain SDTs and SDRs. These logical
groups allow you to physically and/or logically isolate specific data sets and performance capabilities within specified protection
domains and limit the effect of any specific device failure. You can add, modify, activate, inactivate, or remove a protection
domain in the PowerFlex system.
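These operations can also be performed from the PowerFlex CLI. A minimal, hedged sketch (the protection domain name pd1 is hypothetical; confirm flag names with scli --help for your release):

# Create a protection domain
scli --add_protection_domain --protection_domain_name pd1
# Inactivate it before maintenance, and activate it again afterward
scli --inactivate_protection_domain --protection_domain_name pd1
scli --activate_protection_domain --protection_domain_name pd1
# Remove it once it no longer contains SDSs or storage pools
scli --remove_protection_domain --protection_domain_name pd1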
Fault sets
Fault sets are logical entities that contain a group of SDSs within a protection domain. A fault set can be defined for a set
of servers that are likely to fail together, for example, an entire rack full of servers. PowerFlex maintains mirrors of all chunks
within a fault set on SDSs that are outside of this fault set so that data availability is assured even if all the servers within one
fault set fail simultaneously.
NOTE: Acceleration settings can be configured later from the Block > Acceleration Pools page.
NOTE: You cannot enable zero padding after adding the devices.
Remove SDSs
Remove SDSs and devices gracefully from a system. The removal of some objects in the system can take a long time, because
removal may require data to be moved to other storage devices in the system.
If you plan to replace a device with a device containing less storage capacity, you can configure the device to a smaller capacity
than its actual capacity, in preparation for replacement. This will reduce rebuild and rebalance operations in the system later on.
The system has job queues for operations that take a long time to execute. You can view jobs by clicking the Running Storage
Jobs icon on the right side of the toolbar. Operations that are waiting in the job queue are shown as Pending. If a job in the
queue will take a long time, and you do not want to wait, you can cancel the operation using the Abort button in the Remove
command window (if you left it open), or using the Abort entering Protected Maintenance Mode command from
the More Actions menu.
CAUTION: The Remove command deletes the specified objects from the system. Use the Remove command with
caution.
1. On the menu bar, click Block > SDSs.
2. In the right pane, select the relevant SDS check box and click More Actions > Remove.
3. In the Remove SDS dialog box, click Remove.
4. Verify that the operation has finished and was successful, and click Dismiss.
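The same removal can be scripted from the CLI. A minimal sketch, assuming an SDS named sds1 (hypothetical); removal is asynchronous, so query the SDS until it disappears from the system:

# Request graceful removal; data is first migrated to other SDSs in the system
scli --remove_sds --sds_name sds1
# Monitor progress; the query fails once the SDS has been fully removed
scli --query_sds --sds_name sds1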
Storage pools
A storage pool is a set of physical storage devices in a protection domain. A volume is distributed over all devices residing in the
same storage pool. Add, modify, or remove a storage pool in the PowerFlex system.
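For reference, a storage pool can also be created from the CLI. A minimal sketch (the names pd1 and sp1 are hypothetical, and the media type flag may vary by release):

# Create an SSD storage pool in protection domain pd1
scli --add_storage_pool --protection_domain_name pd1 --storage_pool_name sp1 --media_type SSD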
5. To enable validation of the checksum value of in-flight data reads and writes, select Use Inflight Checksum.
6. By default, Use Persistent Checksum is selected to ensure persistent checksum data validation.
NOTE: This option is enabled only when HDD or SSD with medium granularity is selected.
Enable Rebuild/Rebalance: By default, the rebuild/rebalance features are enabled in the system because they are essential for
system health, optimal performance, and data protection.
CAUTION: Rebuilding and rebalancing are essential parts of PowerFlex and should only
be disabled temporarily, in special circumstances. If rebuilds are disabled, redundancy
will not be restored after failures. Disabling rebalance may cause the system to become
unbalanced even if no capacity is added or removed.
Enable Inflight Checksum: Inflight checksum protection mode can be used to validate data reads and writes in storage pools, in
order to protect data from data corruption.
Enable Persistent Checksum: Persistent checksum can be used to support the medium granularity layout in protecting the storage
device from data corruption. Select validate on read to validate data reads in the storage pool.
NOTE: If you want to enable or disable persistent checksum, you must first disable the
background device scanner from the storage pool.
Enable Zero Padding Policy: Use the zero-padded policy when the storage pool data layout is fine granularity. The zero-padded
policy ensures that every read from an area previously not written to returns zeros.
Enable Compression: For fine granularity storage pools, inline compression allows you to gain more effective capacity.
4. Click Apply.
NOTE: These features affect system performance, and should only be configured by an advanced user.
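As noted above, the background device scanner must be disabled before persistent checksum can be enabled or disabled. A hedged CLI sketch of that ordering (pool names are hypothetical; command names are assumptions based on common scli usage, so confirm with scli --help):

# Disable the background device scanner on the storage pool first
scli --disable_background_device_scanner --protection_domain_name pd1 --storage_pool_name sp1
# ...change the persistent checksum setting in PowerFlex Manager...
# Re-enable the scanner afterward, here in device-only mode
scli --enable_background_device_scanner --protection_domain_name pd1 --storage_pool_name sp1 --scanner_mode device_only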
Acceleration pools
An acceleration pool is a group of acceleration devices within a protection domain. PowerFlex only supports acceleration of fine
granularity storage pools.
Fine granularity acceleration uses NVDIMM devices configured for fine granularity storage pools. Configure NVDIMM
acceleration pools for fine granularity acceleration.
Add acceleration devices from all SDSs that contribute to the relevant acceleration pool: Optionally, select the
Add Devices To All SDSs check box to add acceleration devices from all SDSs in the protection domain. Otherwise,
select acceleration devices one by one.
Add devices one by one: Enter the path and name of each acceleration device, select the SDS on which the
device is installed, and then click Add Devices. Repeat for all desired acceleration
devices in the acceleration pool.
6. Click Create.
Devices
Devices are added to an SDS or to all SDSs in the system. There are two types of devices:
storage devices and acceleration devices.
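A device is always added to a specific SDS and pool. A minimal CLI sketch (the SDS name, pool name, and device path are hypothetical; verify flags with scli --help):

# Add the storage device /dev/sdb on sds1 to storage pool sp1
scli --add_sds_device --sds_name sds1 --storage_pool_name sp1 --device_path /dev/sdb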
Activate devices
Activate one or more devices that were added to a system using the Test only option for device tests.
1. On the menu bar, click Block > Devices.
2. In the list of devices, select the check boxes of the required devices, and click More Actions > Activate.
3. In the Activate Device dialog box, click Activate.
4. Verify that the operation has finished and was successful, and click Dismiss.
Remove devices
Use this procedure to remove a storage or acceleration device.
Before removing an NVDIMM acceleration device, remove all storage devices that are being accelerated by the NVDIMM. Then,
remove the NVDIMM from its acceleration pool.
1. On the menu bar, click Block > Devices.
Rename devices
Use this procedure to change the name of a device.
You can view the current device name by displaying the Name column in the device list. The Name column is hidden by default.
When no device name has been defined, the name is set by default to the device's path name.
The device name must adhere to the following rules:
● Contains less than 32 characters
● Contains only alphanumeric and punctuation characters
● Is unique within the object type
1. On the menu bar, click Block > Devices.
2. In the list of devices, select the required device, and click Modify > Rename.
3. In the Rename Device dialog box, enter the new name, and click Apply.
4. Verify that the operation has finished and was successful, and click Dismiss.
NOTE: The capacity assigned to the storage device must be smaller than its actual physical size.
Volumes
Define, configure, and manage volumes in the PowerFlex system.
Add volumes
Use the following procedure to add volumes. Dell Technologies highly recommends giving each volume a meaningful name
associated with its operational role.
There must be at least three SDS nodes in the system and there must be sufficient capacity available for the volumes.
PowerFlex objects are assigned a unique ID that can be used to identify the object in CLI commands. The default name for each
volume object is its ID. The ID is displayed in the Volumes list or can be obtained using a CLI query. Define each volume name
according to the following rules:
● Contains less than 32 characters
● Contains only alphanumeric and punctuation characters
● Is unique within the object type
To add one or multiple volumes, perform these steps:
1. On the menu bar, click Block > Volumes.
2. Click + Create Volume.
3. In the Create Volume dialog box, configure the following items:
a. Enter the number of volumes to be created.
● If you type 1, enter a name for the volume.
● If you type a number greater than 1, enter the Volume Prefix and the Starting Number of the volume. This number
will be the first number in the series that will be appended to the volume prefix. For example, if the volume prefix
is Vol%i% and the starting number value is 100, the name of the first volume created will be Vol100, the second
volume will be Vol101, and so on.
b. Select either Thin (default) or Thick provisioning options.
c. Enter the volume size in GB (basic allocation granularity is 8 GB).
d. Select a storage pool.
4. Click Create.
5. Verify that the operation has finished and was successful, and click Dismiss.
To use the created volume, you must map it to at least one host. If the restricted SDC mode is enabled for the system, you must
approve SDCs prior to mapping volumes to them.
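The naming pattern above can also be scripted from the CLI. A minimal sketch that creates a series of thin volumes matching the Vol100, Vol101, ... example (the pool and domain names are hypothetical; verify flags with scli --help):

# Create five 16 GB thin volumes named Vol100 through Vol104
for i in $(seq 100 104); do
  scli --add_volume --protection_domain_name pd1 --storage_pool_name sp1 \
       --size_gb 16 --volume_name "Vol$i" --thin_provisioned
done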
Delete volumes
Remove volumes from PowerFlex.
Ensure that the volume that you are deleting is not mapped to any hosts. If it is, unmap it before deleting it. In addition, ensure
that the volume is not the source volume of any snapshot policy. You must remove the volume from the snapshot policy before
you can remove the volume.
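A hedged CLI sketch of the same checks and removal (the volume name and SDC IP are hypothetical):

# Confirm the volume's mappings before deleting it
scli --query_volume --volume_name Vol100
# Unmap the volume from each SDC it is mapped to, then remove it
scli --unmap_volume_from_sdc --volume_name Vol100 --sdc_ip 10.1.1.10
scli --remove_volume --volume_name Vol100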
Overwrite volume content
NOTE: Use this command very carefully, since it will overwrite data on the target volume or snapshot.
NOTE: If the destination volume is an auto snapshot, the auto snapshot must be locked before you can continue to
overwrite volume content.
1. On the menu bar, click Block > Volumes.
2. In the list of volumes, select the required volume, and click More Actions > Overwrite Content.
3. In the Overwrite Content of Volume dialog box, in the Target Volume tab, the selected volume details are displayed. Click
Next.
4. In the Select Source Volume tab, do the following:
a. Select the source volume from which to copy content.
b. Click the Time Frame button and select the interval from which to copy content. If you choose Custom, select the date
and time and click Apply.
c. Click Next.
5. In the Review tab, review the details and click Overwrite Content.
6. Verify that the operation has finished and was successful, and click Dismiss.
Map volumes
Mapping exposes the volume to the specified host, effectively creating a block device on the host. You can map a volume to one
or more hosts.
Volumes can only be mapped to one type of host: either SDC or NVMe. Ensure that you know which type of hosts are being
used for each volume, to avoid mixing host types.
For Linux-based devices, the scini device name may change on reboot. Dell recommends that you mount a mapped volume to
the PowerFlex unique ID, which is a persistent device name, rather than to the scini device name.
To identify the unique ID, run the command ls -l /dev/disk/by-id/.
You can also identify the unique ID using VMware. In the VMware management interface, the device is called EMC Fibre
Channel Disk, followed by an ID number starting with the prefix eui.
NOTE: You cannot map a volume if the volume is an auto snapshot that is not locked.
NOTE: You cannot map the volume on the target of a peer system if it is connected to a replication consistency group.
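A minimal sketch of mapping a volume from the CLI and then locating its persistent device path on a Linux host (the volume name and SDC IP are hypothetical, and the emc-vol prefix shown is typical for PowerFlex volumes but should be verified on your host):

# On the MDM: map the volume to an SDC host, identified by IP address
scli --map_volume_to_sdc --volume_name Vol100 --sdc_ip 10.1.1.10
# On the host: find the persistent by-id path rather than the scini device name
ls -l /dev/disk/by-id/ | grep -i emc-vol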
NOTE: You cannot remove a consistency group that contains auto snapshots.
Migrating vTrees
Migration of a volume tree (vTree) allows you to move a vTree to a different storage pool.
Migration of a vTree frees up capacity in the source storage pool. For example, you can migrate a vTree from an HDD-based
storage pool to an SSD-based storage pool, or to a storage pool with different attributes such as thin or thick.
There are several possible reasons for migrating a vTree to a different storage pool:
● To move the volumes to a different storage pool type
● To move to a different storage pool or protection domain due to multitenancy
● To decrease the capacity of a system by moving out of a specific storage pool
● To change from a thin-provisioned volume to a thick-provisioned volume, or the reverse
● To move the volumes from a medium granularity storage pool to a fine granularity storage pool
● To clear a protection domain for maintenance purposes, and then return the volumes to it
During vTree migration, you can run other tasks such as creating snapshots, deleting snapshots, and entering maintenance
mode.
NOTE: You cannot create snapshots when migrating a vTree from a medium granularity storage pool to a fine granularity
storage pool.
When a user requests a vTree migration, the MDM begins the process by estimating whether the destination storage pool has
enough capacity for a successful migration. The MDM bases the estimation on its information about the current capacity of
the vTree. If there is insufficient capacity at the destination based on that estimate, migration does not start. (An advanced
option allows you to force the migration even if there is insufficient capacity at the destination, with the intention to increase
the capacity as required during the migration.) The MDM does not reserve the estimated capacity at the destination (since the
capacity of the source volume can grow during migration and the reserved capacity does not guarantee success). The MDM
does not retain source capacity once it has been migrated, but releases it immediately.
Use the following table to understand which vTree migrations are possible, and under what specific conditions:
NOTE: vTree migration is a long process and can take days or weeks, depending on the size of the vTree.
Option Description
Add migration to the head of the migration queue: Give this vTree migration the highest priority in the migration priority queue.
Ignore destination capacity: Allow the migration to start regardless of whether there is enough capacity at the destination.
Enable compression: Compression is done by applying a compression algorithm to the data.
Convert vTree from...: Convert a thin-provisioned vTree to thick-provisioned, or vice versa, at the destination, depending on the provisioning of the source volume.
NOTE: SDCs with a version earlier than v3.0 do not fully support converting a thick-provisioned vTree to a thin-provisioned vTree during migration; after migration, the vTree will be thin-provisioned, but the SDC will not be able to trim it. These volumes can be trimmed by unmapping and then remapping them, or by restarting the SDC. The SDC version does not affect capacity allocation, and a vTree converted from thick to thin provisioning will be reduced in size accordingly in the system.
Save current vTree provisioning state during migration: The provisioning state is kept as it was before the migration took place.
8. Click Migrate vTree.
The vTree migration is initiated. The vTree appears in both the source and the destination storage pools.
9. At the top right of the window, click the Running Storage Jobs icon and check the progress of the migration of the vTree.
10. Verify that the operation has finished and was successful, and click Dismiss.
NOTE: This feature is available only when there is more than one vTree migration currently in the queue.
Remove a vTree
You can remove a vTree from PowerFlex, as long as it is unmapped.
Ensure that the vTree is unmapped.
1. On the menu bar, click Block > Volumes.
2. In the list of volumes, select the volume that you want to remove.
3. In the right pane, click View Details.
4. In the left pane, click the VTree tab.
NVMe targets
NVMe targets (or SDT components) must be configured on the PowerFlex system side, in order to use NVMe over TCP
technology.
The NVMe target (or SDT component) is a frontend component that translates NVMe over TCP protocol into internal
PowerFlex protocols. The NVMe target provides I/O and discovery services to NVMe hosts configured on the PowerFlex
system. A minimum of two NVMe targets must be assigned to a protection domain before it can serve NVMe hosts, to provide
minimal path resiliency to hosts.
TCP ports, IP addresses, and IP address roles must be configured for each NVMe target (or SDT component). You can assign
both storage and host roles to the same target IP address. Alternatively, assign the storage role to one target IP address,
and add another target IP address for the host role. Both roles must be configured on each NVMe target.
● The host port listens for incoming connections from hosts, over the NVMe protocol.
● The storage port listens for connections from the MDM.
Once the NVMe targets have been configured, add hosts to PowerFlex, and then map volumes to the hosts. Connect hosts to
NVMe targets, preferably using the discovery feature.
On the operating system of the compute nodes, NVMe initiators must be configured. Network connectivity is required between
the NVMe targets and the NVMe initiators, and between NVMe targets (or SDT components) and SDSs.
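On a Linux compute node that uses the standard nvme-cli initiator, discovery and connection might look like the following sketch; the target address and discovery port are placeholders and must match the values configured on the SDT:

nvme discover -t tcp -a 10.0.1.11 -s 8009      # list subsystems advertised by the NVMe target
nvme connect-all -t tcp -a 10.0.1.11 -s 8009   # connect to all discovered subsystems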
Usage: scli --add_sdt --sdt_ip <IP> [--sdt_ip_role <ROLE>] [--storage_port <PORT>]
    [--nvme_port <PORT>] [--discovery_port <PORT>] [--sdt_name <NAME>]
    (--protection_domain_id <ID> | --protection_domain_name <NAME>)
    [--fault_set_id <ID> | --fault_set_name <NAME>] [--profile <PROFILE>]
    [--force_clean [--i_am_sure]]
Description: Add an SDT
Parameters:
    --sdt_ip <IP>            A comma-separated list of IP addresses assigned to the SDT
    --sdt_ip_role <ROLE>     A comma-separated list of roles assigned to each SDT IP address.
                             Role options: storage_only, host_only, or storage_and_host
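A hypothetical invocation based on the usage above; the IP addresses, SDT name, and protection domain name are placeholders:

scli --add_sdt --sdt_ip 10.0.1.11,10.0.2.11 --sdt_ip_role storage_and_host,storage_and_host \
    --sdt_name sdt01 --protection_domain_name pd01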
Hosts
Hosts are entities that consume PowerFlex storage for application usage. There are two methods of consuming PowerFlex block
storage: using the SDC kernel driver, or using NVMe over TCP connectivity. Therefore, a host is either an SDC or an NVMe
host.
Once a host is configured, volumes can be mapped to it. In either case, a host can only access PowerFlex storage through the volumes mapped to it.
Map hosts
Map hosts to volumes.
Volumes can only be mapped to one type of host: either SDC or NVMe. Ensure that you know which type of hosts are being
used for each volume, to avoid mixing host types.
1. On the menu bar, click Block > Hosts.
2. In the list of hosts, select the relevant host, and click Mapping > Map.
3. In the Map Hosts to Volumes dialog box, select the volumes to be mapped to the selected hosts, and click Map.
4. Verify that the operation has finished and was successful, and click Dismiss.
Unmap hosts
Remove mapping between volumes and hosts.
1. On the menu bar, click Block > Hosts.
2. In the list of hosts, select the relevant host, and click Mapping > Unmap.
3. In the Unmap dialog box, ensure that the desired host is selected.
4. Click Unmap.
5. Verify that the operation has finished and was successful, and click Dismiss.
Remove hosts
Remove hosts from PowerFlex.
1. On the menu bar, click Block > Hosts.
2. In the list of hosts, select the relevant host, and click Remove.
3. In the Remove Host dialog box, ensure that you have selected the desired host for removal.
4. Click Remove.
5. Verify that the operation has finished and was successful, and click Dismiss.
Approve SDCs
When the system's restricted host (SDC) mode is set to GUID restriction, approve SDCs before mapping volumes to them.
1. On the menu bar, click Block > Hosts.
2. In the list of hosts, select one or more hosts and click Modify > Approve.
3. In the Approve host dialog box, verify that the hosts listed are the ones that you want to approve, and click Approve.
4. Verify that the operation has finished and was successful, and click Dismiss.
Rename hosts
The host name must adhere to the following rules:
● Contains less than 32 characters
● Contains only alphanumeric and punctuation characters
● Is unique within the object type
1. On the menu bar, click Block > Hosts.
2. In the list of hosts, select the relevant host, and click Modify > Rename.
3. In the Rename Host dialog box, enter the new name, and click Apply.
4. Verify that the operation has finished and was successful, and click Dismiss.
Related information
Configuring file storage
Wizard Screen Description
Details: Select a protection domain, and enter a NAS server name, description, and network information.
NOTE: You cannot reuse VLANs that are being used for the management and storage networks.
User Mapping: Select automatic user mapping, or enable the default account for both a Windows and Linux user.
Summary: Review the content and select Back to go back and make any corrections.
4. Select Create NAS Server to create the NAS server.
The Status window opens, and you are redirected to the NAS Servers page once the server is listed on the page.
Once you have created the NAS server for NFS, you can continue to configure the server settings.
If you enabled Secure NFS, you must continue to configure Kerberos.
Select the NAS server to continue to configure, or edit the NAS server settings.
Wizard Screen Description
Details: Enter a NAS server name, description, and network details.
Sharing Protocol: Select Sharing Protocol
Select SMB.
NOTE: If you select SMB and an NFS protocol, you automatically enable the NAS server to support multiprotocol. Multiprotocol configuration is not described in this help system.
Windows Server Settings
Select Standalone to create a stand-alone SMB server, or Join to the Active Directory Domain to create a domain member SMB server.
If you join the NAS server to the AD, optionally select Advanced to change the default NetBIOS name and organizational unit.
DNS
If you selected Join to the Active Directory Domain, you must add a DNS server.
Optionally, enable DNS if you want to use a DNS server for your stand-alone SMB server.
User Mapping
Keep the default Enable automatic mapping for unmapped Windows accounts/users; automatic mapping is required when joining the Active Directory domain.
Summary: Review the content and select Back to go back and make any corrections.
4. Select Create NAS Server.
The Status window opens, and you are redirected to the NAS Servers page once the server is added.
Once you have created the NAS server for SMB, you can continue to configure the server settings, or create file systems.
Select the NAS server to continue to configure or edit the NAS server settings.
Option Description
Modify: To modify the NAS server name or description.
Remove: To remove the NAS server from the system. This option is not available if file systems have been created on the NAS server. You must remove all file systems from the NAS server before it can be deleted.
Move NAS Server: To move the NAS server from one node to another node (when a cluster contains more than two nodes).
Swap Nodes: To swap roles between the primary and the secondary nodes for the selected NAS server.
NAS servers
Use the File > NAS Servers page to create, view, access, and modify NAS servers.
NOTE: If you plan to join the NAS server to Active Directory (AD), you must have NTP configured on your system.
You provide the following information the first time you create a NAS server. You can add or modify the settings after you have
created the server.
Option Description
Details: NAS server name and network details.
Sharing Protocol: Type of protocol:
● For Windows, select SMB.
● For UNIX, select NFSv3, or NFSv4, or both.
● For multiprotocol, pick SMB and one or more of the UNIX protocols.
Enter the Windows Server Settings, or Unix Directory Services setting, or both for multiprotocol.
See the NAS Server Sharing Protocols help for more information.
Enable DNS and enter the DNS server details. DNS is required for Secure NFS and for joining an Active Directory domain.
User Mapping: If you have enabled SMB to join the Active Directory domain, or enabled the NAS server for both SMB and NFS, then you must provide the user mapping information.
See the NAS Server Sharing Protocols help for more information.
Swap Nodes
Use this option to swap the roles between the primary and the secondary nodes for the selected NAS Server.
File interfaces
Presents the NAS server file interfaces.
You can add more interfaces, and define which will be the preferred interface to use with the NAS server. PowerFlex assigns a
preferred interface by default, but you can set which interface to use first for production and backup, for both IPv4 and IPv6.
DNS
DNS is required for Secure NFS.
You cannot disable DNS for:
● NAS servers that support multiprotocol file sharing.
● NAS servers that support SMB file sharing and that are joined to an Active Directory (AD).
You can also configure LDAP with SSL (LDAP Secure) and can enforce the use of a Certificate Authority certificate for
authentication.
Local files
Local files can be used instead of, or in addition to, DNS, LDAP, and NIS directory services.
To use local files, configuration information must be provided through the files listed in PowerFlex Manager. If you have not
created your own files ahead of time, use the download arrows to download the template for the type of file you need to
provide, and then upload the edited version.
To use local files for NFS or FTP access, the passwd file must include an encrypted password for the users. This password is
used for FTP access only. The passwd file uses the same format and syntax as a standard UNIX system, so you can leverage
a UNIX host to generate the local passwd file. On a UNIX system, use useradd to add a new user and passwd to set the
password for that user. Then, copy the hashed password from the /etc/shadow file, add it to the second field in the
/etc/passwd file, and upload the /etc/passwd file to the NAS server.
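A minimal sketch of this procedure on a Linux host, assuming a hypothetical FTP user named ftpuser1:

useradd ftpuser1        # create the user
passwd ftpuser1         # set the password that will be used for FTP access
# Copy the password hash from /etc/shadow into field 2 of the user's passwd entry
hash=$(awk -F: '$1 == "ftpuser1" {print $2}' /etc/shadow)
awk -F: -v OFS=: -v h="$hash" '$1 == "ftpuser1" {$2 = h} 1' /etc/passwd > passwd.local
# Upload passwd.local to the NAS server as the local passwd file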
SMB server
This section contains options for configuring a Windows server.
If you are configuring SMB with Kerberos security, you must select to Join to the Active Directory Domain.
If you select to Join to the Active Directory Domain, you must have added a DNS server. You can add a DNS server from the
Naming Services card.
If the Windows Server Type is set to Join to the Active Directory Domain, then Enable automatic mapping for
unmapped Windows accounts/users must be selected in the User Mapping tab.
NFS server
This section contains options for configuring an NFS, or NFS secure server for Linux or UNIX support.
Task Description
Extend the Linux or UNIX credential to enable the storage system to obtain more than 16 group GIDs: Select or clear Enable extended Unix credentials.
● If this field is selected, the NAS server uses the User ID (UID) to obtain the primary Group ID (GID) and all group GIDs to which it belongs. The NAS server obtains the GIDs from the local password file or UDS.
● If this field is cleared, the UNIX credential of the NFS request is directly extracted from the network information that is contained in the frame. This method has better performance, but it is limited to including up to only 16 group GIDs.
NOTE: With secure NFS, the UNIX credential is always built by the NAS server, so this option does not apply.
Specify a Linux or UNIX credential cache retention period: In the Credential cache retention field, enter a time period (in minutes) for which access credentials are retained in the cache. The default value is 15 minutes.
NOTE: This option can lead to better performance, because it reuses the UNIX credential from the cache instead of building it for each request.
You can configure Secure NFS when you create or modify a multiprotocol NAS server or one that supports Unix-only shares.
Secure NFS provides Kerberos-based user authentication, which can provide network data integrity and network data privacy.
There are two methods for configuring Kerberos for secure NFS:
● Use the Kerberos realm (Windows realm) associated with the SMB domain configured on the NAS server, if any. If you
configure secure NFS using this method, SMB support cannot be deleted from the NAS server while secure NFS is enabled
and configured to use the Windows realm.
This method of configuring secure NFS requires fewer steps than configuring a custom realm.
● Configure a custom realm to point to any type of Kerberos realm (AD, MIT, Heimdal). If you configure secure NFS using this
method, you must upload the keytab file to the NAS server being defined.
FTP
FTP or Secure FTP can only be configured after a NAS server has been created.
Passive mode FTP is not supported.
FTP access can be authenticated using the same methods as NFS or SMB. Once authentication is complete, access is the same
as SMB or NFS for security and permission purposes. The method of authentication that is used depends on the format of the
username:
● If the format is domain@user or domain\user, SMB authentication is used. SMB authentication uses the Windows
domain controller.
● For any other single username format, NFS authentication is used. NFS authentication uses local files, LDAP, NIS, or local
files with LDAP or NIS. To use local files for NFS or FTP access, the passwd file must include an encrypted password for the
users. This password is used for FTP access only. The passwd file uses the same format and syntax as a standard UNIX
system, so you can leverage a UNIX host to generate the local passwd file. On a UNIX system, use useradd to add a new user
and passwd to set the password for that user. Then, copy the hashed password from the /etc/shadow file, add it to the
second field in the /etc/passwd file, and upload the /etc/passwd file to the NAS server.
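The authentication path follows from the username format; a sketch using curl, where the server name, domain, and credentials are placeholders:

curl --user 'EXAMPLE\alice:Secret1' ftp://nas01.example.com/   # domain\user: SMB authentication
curl --user 'bob:Secret2' ftp://nas01.example.com/             # single username: NFS authentication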
User mapping
If you are configuring a NAS server to support both types of protocols, SMB and NFS, you must configure the user mapping.
When configured for both types of protocol, the user mapping requires that the NAS server is joined with an AD domain. You
can configure the SMB server with AD from the SMB Server card.
If the Windows Server Type is set to Join to the Active Directory Domain, then you must select Enable automatic
mapping for unmapped Windows accounts/users.
Event publishers
The events publisher specifies one to three publishing pools and enables configuration of advanced settings.
● Pre-Events Failure Policy—Determines the pre-event behavior if PowerFlex File cannot reach the CEPA Server.
○ Ignore (default)—Consider preevents acknowledged when CEPA servers are offline.
○ Deny—Deny user access when a corresponding pre-event request to CEPA servers failed.
● Post-Events Failure Policy—Determines the post-event behavior if PowerFlex File cannot reach CEPA Server.
○ Ignore—Continue and tolerate lost events.
○ Accumulate (default)—Continue and persist lost events in an internal buffer.
○ Guarantee—Persist lost events, deny file systems access when the buffer is full.
○ Deny—Deny access to file systems when CEPA servers are offline.
● Connectivity and protocol settings
○ HTTP and Port—HTTP and 12228, by default
○ Microsoft RPC and Accounts—Enabled and SMB, by default
○ Heartbeat and Timeout—10 sec and 1000 millisecond, by default
In the Event Publishers tab, you can create, modify, delete, associate, or dissociate CEPA event publishers.
Kerberos
Kerberos is a distributed authentication service designed to provide strong authentication with secret-key cryptography. It
works on the basis of "tickets" that allow nodes communicating over a non-secure network to prove their identity in a secure
manner. When configured to act as a secure NFS server, the NAS server uses the RPCSEC_GSS security framework and
Kerberos authentication protocol to verify users and services.
● Using Kerberos with NFS requires that DNS and a UDS are configured for the NAS server and that all members of the
Kerberos realm are registered in the DNS server.
● For authentication, Kerberos can be configured with either a custom realm or with Active Directory (AD).
● The storage system must be configured with an NTP server. Kerberos relies on the correct time synchronization between
the KDC, servers, and the client network.
Component Description
NAS server: A file server configured with its network interfaces and other settings, exclusively exporting the set of specified file systems through mount points called shares. Client systems connect to a NAS server on the storage system to get access to the file system shares. A NAS server can have more than one file system, but each file system can only be associated with one NAS server.
File system: A manageable container for file-based storage that is associated with the following properties:
● A specific quantity of storage.
● A particular file access protocol (SMB, NFS, or multiprotocol).
● One or more shares (through which network hosts or users can access shared files or folders).
Share or export: A mountable access point to file system storage that network users and hosts can use for file-based storage.
Windows users or Linux/UNIX hosts: A user, host, netgroup, or subnet that has access to the share and can mount or map the resource. For Windows file systems, access to the share is based on share permissions and ACLs that assign privileges.
NAS servers
NAS servers provide access to file systems. Each NAS server supports Windows (SMB) file systems, Linux/UNIX (NFS)
exports, or both. To provide isolated access to a file system, you can configure a NAS server to function as independent file
server with server-specific DNS, NIS, and other settings. The IP address of the NAS server provides part of the mount point
that users and hosts use to map to the file system storage resource, with the share name providing the rest. Each NAS server
exposes its own set of file systems through the file system share, either SMB or NFS.
Once a NAS server is running, you can create and manage file systems and shares on that NAS server.
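For example, a host mounts a share using the NAS server address and the share name; a sketch with placeholder values:

mount -t nfs 10.0.3.21:/fs01 /mnt/fs01       # NFS: <NAS server IP>:<export path>
# SMB clients map \\10.0.3.21\fs01-share in the same way from a Windows host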
NOTE: You can create a file system only if there is a NAS server running on the storage system. The types of file systems
that you can create are determined by the file sharing protocols (SMB, NFS, or multiprotocol) enabled for the NAS server.
6. Configure security, access permissions, and host access for the system.
Option Description
Local path: The path to the file system storage resource on the storage system. This path specifies the unique location of the share on the storage system.
● Each NFS share must have a unique local path. PowerFlex automatically assigns this path to the initial export created within a new file system. The local path name is based on the file system name.
● Before you can create more exports within an NFS file system, create a directory to share from a Linux/UNIX host that is connected to the file system. Then you can create an export from PowerFlex Manager and set access permissions accordingly.
Export path: The path used by the host to connect to the export. PowerFlex creates the export path based on the IP address of the host and the name of the export. Hosts use either the file name or the export path to mount or map to the export from a network host.
7. Optionally, add a protection policy to the file system.
If you are adding a protection policy to the file system, the policy must have been created before creating the file system.
Only snapshots are supported for file system protection. Replication is not supported on file systems.
8. Review the summary and click Create File System.
The file system is added to the File System tab. If you created an export simultaneously, then the export displays in the
NFS export tab.
Option Description
Select NAS Server: Select a NAS server enabled for SMB.
Advanced SMB Settings: Optionally choose from the following:
● Sync Writes Enabled
● Oplocks Enabled
● Notify on Write Enabled
● Notify on Access Enabled
File System Details: Provide the file system name, and the size of the file system.
The file system size can be from 3 GB to 256 TB.
NOTE: All thin file systems, regardless of size, have 1.5 GB reserved for metadata upon creation. For example, a 100 GB thin file system immediately shows 1.5 GB used. When the file system is mounted to a host, it shows 98.5 GB of usable capacity. This is because the metadata space is reserved from the usable file system capacity.
SMB Share: Optionally, configure the initial SMB share. You can add shares to the file system after the initial file system configuration.
Protection Policy: Optionally, provide a protection policy for the file system.
NOTE: PowerFlex supports snapshots for file storage protection. Replication protection is not supported for file systems. If a protection policy is set for both replication and snapshot protection, PowerFlex implements the snapshot policy on the file system and ignores the replication policy for the file system.
Option Description
Modify: To modify the file system name, description, or size.
More Actions: To perform one of the following operations:
● Refresh quotas.
● Remove the file system from the NAS server. This option is not available if there are NFS exports or SMB shares on the file system.
Option Description
Select File System: Select a file system that has been enabled for SMB.
Select a snapshot of the file system: Optionally, select one of the file system snapshots on which to create the share.
Only snapshots are supported for file system protection policies. Replication is not supported for file systems.
SMB share details: Enter a name, and local path for the share. When entering the local path:
● You can create multiple shares with the same local path on a single SMB file system. In these cases, you can specify different host-side access controls for different users, but the shares within the file system have access to common content.
● A directory must exist before you can create shares on it. If you want the SMB shares within the same file system to access different content, you must first create a directory on the Windows host that is mapped to the file system.
3. Click Next.
Once you create a share, you can modify the share from PowerFlex Manager or using the Microsoft Management Console.
To modify the share from PowerFlex Manager, select the share from the list on the SMB Share page, and click Modify.
Option Description
Create new Filesystem (Recommended): Create a dedicated file system to host the GNS.
Select from available Filesystems: Select an existing General Type file system. Do not select an existing NFS file system if the file system root has already been exported.
6. Specify GNS details for the namespace:
Option Description
Name of the server: The name allows remote hosts to connect to the Global Namespace over the network.
Description: Optional description for the namespace.
Local Path: Local path relative to the NAS server. This path is the local path to the storage resource or any existing subfolder of the storage resource that is shared over the network. The path is relative to the NAS server and must start with the file system's mountpoint path, which is the file system name.
For example, to share the top level of a file system named powerflexfs1, which is mounted on the /powerflexfs1 mountpoint on the NAS server, use /powerflexfs1 in the path parameter.
NOTE: The Namespace Path is generated based on the interface of the selected NAS server and the namespace server name.
7. Review the Summary page and choose one of the following options to create the GNS.
● Run in the background
● Add to the Job List to schedule later
The root shares and exports are automatically created on the file system.
NOTE: These shares and exports cannot be deleted without deleting the namespace.
Option Description
Local path: A path name relative to the namespace root (without a forward slash or trailing slash). Remote hosts use this path to connect to the target file system.
Description: (Optional) Description of the link.
Client Cache Timeout (Seconds): The amount of time that clients cache namespace root referrals. A referral is an ordered list of targets that a client system receives from a namespace server when the user accesses a namespace root or folder with targets in the namespace.
Add Target UNC (Universal Naming Convention) Path: Select the target UNC path from the available exports or shares, or add the target UNC path manually.
Setting Description
Sync Writes Enabled: When you enable the synchronous writes option for a Windows (SMB) or multiprotocol file system, the storage system performs immediate synchronous writes for storage operations, regardless of how the SMB protocol performs write operations. Enabling synchronous writes operations enables you to store and access database files (for example, MySQL) on storage system SMB shares. This option guarantees that any write to the share is done synchronously and reduces the chances of data loss or file corruption in various failure scenarios, for example, loss of power.
This option is disabled by default.
NOTE: The synchronous writes option can have a significant impact on performance. It is not recommended unless you intend to use Windows file systems to provide storage for database applications.
Oplocks Enabled: (Enabled by default) Opportunistic file locks (oplocks, also known as Level 1 oplocks) enable SMB clients to buffer file data locally before sending it to a server. SMB clients can then work with files locally and periodically communicate changes to the storage system rather than having to communicate every operation over the network to the storage system. This is enabled by default for Windows (SMB) and multiprotocol file systems. Unless your application handles critical data or has specific requirements that make this mode of operation unfeasible, leaving the oplocks enabled is recommended.
The following oplocks implementations are supported:
● Level II oplocks, which inform a client that multiple clients are accessing a file, but no client has yet modified it. A Level II oplock lets the client perform read operations and file attribute fetches by using cached or read-ahead local information. All other file access requests must be sent to the server.
● Exclusive oplocks, which inform a client that it is the only client opening the file. An exclusive oplock lets a client perform all file operations by using cached or read-ahead information until it closes the file, at which time the server must be updated with any changes that are made to the state of the file (contents and attributes).
● Batch oplocks, which inform a client that it is the only client opening the file. A batch oplock lets a client perform all file operations by using cached or read-ahead information (including opens and closes). The server can keep a file opened for a client even though the local process on the client machine has closed the file. This mechanism curtails the amount of network traffic by letting clients skip the extraneous close and open requests.
Notify on Write Enabled: Enable notification when a file system is written to.
This option is disabled by default.
Read/Write, allow Root: Hosts have permission to read and write to the storage resource or share, and to grant and revoke access permissions (for example, permission to read, modify, and execute specific files and directories) for other login accounts that access the storage. The root of the NFS client has root access to the share.
NOTE: Unless the hosts are part of a supported cluster configuration, avoid granting Read/Write access to more than one host.
NOTE: VMware ESXi hosts must have Read/Write, allow Root access in order to mount an NFS datastore using NFSv4 with NFS Owner:root authentication.
Read-only, allow Root (NFS Exports): Hosts have permission to view the contents of the share, but not to write to it. The root of the NFS client has root access to the share.
Option Description
Continuous Availability: Gives host applications transparent, continuous access to a share following a failover of the NAS server on the system (with the NAS server internal state saved or restored during the failover process).
NOTE: Enable continuous availability for a share only when you want to use Microsoft Server Message Block (SMB) 3.0 protocol clients with the specific share.
Protocol Encryption: Enables SMB encryption of the network traffic through the share. SMB encryption is supported by SMB 3.0 clients and above. By default, access is denied if an SMB 2 client attempts to access a share with protocol encryption enabled.
You can control this by configuring the RejectUnencryptedAccess registry key on the NAS server. 1 (default) rejects non-encrypted access and 0 allows clients that do not support encryption to access the file system without encryption.
Access-Based Enumeration: Filters the list of available files and directories on the share to include only those to which the requesting user has read access.
NOTE: Administrators can always list all files.
Branch Cache Enabled: Copies content from the share and caches it at branch offices. This allows client computers at branch offices to access the content locally rather than over the WAN. Branch Cache is managed from Microsoft hosts.
Option Description
Name: The name provided for the export or share, along with the NAS server name, is the name by which the hosts will access the export or share.
NFS export and SMB share names must be unique at the NAS server level per protocol. However, you can specify the same name for SMB shares and NFS exports.
Local path: The path to the file system storage resource on the storage system. This path specifies the unique location of the share on the storage system.
SMB shares
● An SMB file system allows you to create multiple shares with the same local path. In these cases, you can specify different host-side access controls for different users, but the shares within the file system will all access common content.
● A directory must exist before you can create shares on it. Therefore, if you want the SMB shares within the same file system to access different content, you must first create a directory on the Windows host that is mapped to the file system. Then, you can create corresponding shares using PowerFlex Manager. You can also create and manage SMB shares from the Microsoft Management Console.
NFS exports
● Each NFS export must have a unique local path. PowerFlex automatically assigns this path to the initial export created within a new file system. The local path name is based on the file system name.
● Before you can create additional exports within an NFS file system, you must create a directory to share from a Linux/UNIX host that is connected to the file system. Then, you can create a share from PowerFlex Manager and set access permissions accordingly.
SMB share path or export path: The path used by the host to connect to the share or export.
PowerFlex Manager creates the export path based on the IP address of the file system, and the name of the export or share. Hosts use either the file name or the export path to mount or map to the export or share from a network host.
Quotas are supported on SMB, NFS, FTP, NDMP, and multiprotocol file systems.
You can set the following types of quotas for a file system.
Type Description
User quotas: Limits the amount of storage that is consumed by an individual user storing data on the file system.
Tree quota: Tree quotas limit the total amount of storage that is consumed on a specific directory tree. You can use tree quotas to:
● Set storage limits on a project basis. For example, you can establish tree quotas for a project directory that has multiple users sharing and creating files in it.
● Track directory usage by setting the tree quota hard and soft limits to 0 (zero).
NOTE: If you change the limits for a tree quota, the changes take effect immediately without disrupting file system operations.
User quota on a quota tree: Limits the amount of storage that is consumed by an individual user storing data on the quota tree.
Quota limits
To track space consumption without setting limits, set Soft Limit and Hard Limit to 0, which indicates no limit.
Type Description
Hard: A hard limit is an absolute limit on storage usage.
If a hard limit is reached for a user quota on a file system or quota tree, the user cannot write
data to the file system or tree until more space becomes available. If a hard limit is reached
for a quota tree, no user can write data to the tree until more space becomes available.
Create a snapshot
Creating a snapshot saves the state of the file system and all files and data within it at a particular point in time. You can use
snapshots to restore the entire file system to a previous state.
Before creating a snapshot, consider:
● Snapshots are not full copies of the original data. Do not rely on snapshots for mirrors, disaster recovery, or high-availability
tools. Because snapshots are partially derived from the real-time data of the file systems, they can become inaccessible if
the storage resource becomes inaccessible.
● Although snapshots are space efficient, they consume overall system storage capacity. Ensure that the system has enough
capacity to accommodate snapshots.
● When configuring snapshots, review the snapshot retention policy that is associated with the storage resource. You may
want to change the retention policy in the associated rules or manually set a different retention policy, depending on the
purpose of the snapshot.
● Manual snapshots that are created with PowerFlex Manager are retained for one week after creation (unless configured
otherwise).
5. Click Restore.
NOTE: You can also restore the file system by selecting the file system snapshot from the Snapshots view. Click File
> File Systems, and select the file systems from the list, click View Details, and click More Actions > Restore from
Snapshot.
Snapshots
A snapshot is a copy of a volume at a specific point in time. With snapshots, you can overwrite the contents of the volume, map
to a host, and set bandwidth and IOPS limits.
Create snapshots
PowerFlex lets you create instantaneous snapshots of one or more volumes.
The Use secure snapshots option prohibits deletion of the snapshots until the defined expiration period has elapsed.
When you create a snapshot of more than one volume, PowerFlex generates a consistency group by default. The snapshots
under the consistency group are taken simultaneously for all listed volumes, thereby ensuring their consistency. You can view
the consistency group by clicking View Details in the right pane and then clicking the Snapshots Consistency Group tab in
the left pane.
NOTE: The consistency group is for convenience purposes only. No protection measures are in place to preserve the
consistency group. You can delete members from the group.
1. On the menu bar, click Protection > Snapshots.
2. In the list, select the relevant volumes, and click More > Create Snapshot.
3. In the Create snapshot of volume dialog box, enter the name of the snapshot. You can accept the default name, or enter
a name according to the following rules:
● Contains less than 32 characters
● Contains only alphanumeric and punctuation characters
● Is unique within the object type
4. Optionally, configure the following parameters:
● To set read-only permission for the snapshot, select the Read Only check box.
● To prevent deletion of the snapshot during the expiration period, select the Use secure snapshot check box, enter the
Expiration Time, and select the time unit type.
5. Click Create Snapshot.
6. Verify that the operation has finished and was successful, and click Dismiss.
Overwrite snapshot content
Overwrite the content of a snapshot with content from another volume or snapshot.
NOTE: Use this command very carefully, since it overwrites data on the target volume or snapshot.
NOTE: If the destination volume is an auto snapshot, the auto snapshot must be locked before you can continue to
overwrite volume content.
1. On the menu bar, click Protection > Snapshots.
2. In the list of snapshots, select the snapshot to be overwritten, and then click More Actions > Overwrite Content.
3. Click Next.
4. In the Select Source Volume tab, do the following:
a. Select the source volume from which to copy content.
b. Click Time Frame, and then select the interval from which to copy content. If you choose Custom, select the date and
time and click Apply.
c. Click Next.
5. In the Review tab, review the details and then click Overwrite Content.
6. Verify that the operation has finished and was successful, and click Dismiss.
Delete snapshots
Remove snapshots of volumes from PowerFlex.
Ensure that the snapshot that you are removing is not mapped to any hosts. If the snapshot is mapped, unmap it before
removing it. In addition, ensure that the snapshot is not the source volume of any snapshot policy. You must remove the volume
from the snapshot policy before you can remove the snapshot.
To prevent causing a data unavailability scenario, avoid deleting volumes or snapshots while the MDM cluster is being upgraded.
CAUTION: Removing a snapshot erases all the data in the corresponding snapshot.
Map snapshots
Mapping exposes the snapshot to the specified host, effectively creating a block device on the host. You can map a snapshot to
one or more hosts.
For Linux-based devices, the scini device name may change on reboot. Dell recommends that you mount a mapped volume to
the /dev/disk/by-id unique ID, which is a persistent device name, rather than to the scini device name.
To identify the unique ID, run the command ls -l /dev/disk/by-id/.
You can also identify the unique ID using VMware. In the VMware management interface, the device is called EMC Fibre
Channel Disk, followed by an ID number starting with the prefix eui.
NOTE: You cannot map a volume if the volume is an auto snapshot that is not locked, and you cannot map the volume on
the target of a peer system if it is connected to a replication consistency group (RCG).
1. On the menu bar, click Protection > Snapshots.
2. In the list of snapshots, select one or more snapshots, and then click Mapping > Map.
3. In the Map Volume dialog box, select one or more hosts to which you want to map the snapshots.
4. Click Map, and click Apply.
5. Verify that the operation has finished and was successful, and then click Dismiss.
Unmap snapshots
Unmap one or more snapshot volumes from hosts.
1. On the menu bar, click Protection > Snapshots.
2. In the list of snapshots, select the relevant snapshots, and click Mapping > Unmap.
3. Select the host from which to remove mapping to the snapshots.
4. Click Unmap, and click Apply.
5. Verify that the operation has finished and was successful, and click Dismiss.
NOTE: vTree migration is a long process and can take days or weeks, depending on the size of the vTree.
Option Description
Add migration to the head of the migration queue: Give this vTree migration the highest priority in the migration priority queue.
Ignore destination capacity: Allow the migration to start regardless of whether there is enough capacity at the destination.
Enable compression: A compression algorithm is applied to the data.
Convert vTree from...: Convert a thin-provisioned vTree to thick-provisioned, or a thick-provisioned vTree to thin-provisioned, at the destination, depending on the provisioning of the source volume.
NOTE: SDCs with a version earlier than v3.0 do not fully support converting a thick-provisioned vTree to a thin-provisioned vTree during migration; after migration, the vTree will be thin-provisioned, but the SDC will not be able to trim it. These volumes can be trimmed by unmapping and then remapping them, or by restarting the SDC. The SDC version does not affect capacity allocation, and a vTree converted from thick to thin provisioning will be reduced in size accordingly in the system.
Save current vTree provisioning state during migration: The provisioning state is kept as it was before the migration took place.
8. Click Migrate vTree.
The vTree migration is initiated. The vTree appears in both the source and the destination storage pools.
9. At the top right of the page, click the Running Storage Jobs icon, and check the progress of the migration.
10. Verify that the operation has finished and was successful, and click Dismiss.
Snapshot policies
Snapshot policies enable you to define policies for the number of snapshots that PowerFlex takes of one or more defined
volumes at a given time.
Snapshots are taken according to the defined rules. You can define the time interval between two rounds of snapshots, as
well as the number of snapshots to retain, in a multi-level structure. For example, take snapshots every x minutes/hours/days/
weeks. You can define a maximum of six levels, with the first level having the most frequent snapshots.
For example:
Rule: Take snapshots every 60 minutes
Retention Levels:
● 24 snapshots
● 7 snapshots
● 4 snapshots
After defining the parameters, select the source volume to add to the snapshot policy. You can add multiple source volumes to
a snapshot policy, but only a single policy per source volume is allowed. Only one volume per vTree may be used as a source
volume of a policy (any policy).
When you remove the source volume from the policy, you must choose how to handle its snapshots. Snapshots created by the
policy are referred to as auto snapshots; a snapshot policy is displayed for each auto snapshot.
● If the source volume has auto snapshots, you cannot unassign the source volume from the snapshot policy. You can remove
auto snapshots from Snapshots.
● If the source volume has auto snapshots but none of them are locked, you are prompted to confirm that you would like to
delete all auto snapshots. If any of the auto snapshots are locked, the locked auto snapshots are detached from the
snapshot policy, but not deleted. You can manually delete auto snapshots, whether they are locked or not.
where <FILE_PATH> is the location where the certificate will be saved, including the file name. For example:
/opt/source_sys.crt
3. Copy the certificate file to the target system.
where <PATH_TO_LOCAL_COPY_OF_TARGET_CERT> is the copy of the target system's certificate that you copied to
the source system.
7. On the target system, add the source system's certificate:
where <PATH_TO_LOCAL_COPY_OF_SOURCE_CERT> is the copy of the source system's certificate that you copied to
the target system.
Journal capacity
You should consider several factors when allocating journal capacity.
Journal capacity is defined as a percentage of the total storage capacity in the storage pool and must equal at least 28 GB per
SDR. In general, journal capacity should be at least 5% of replicated usable capacity in the protection domain, including volumes
used as source and targets. It is important to assign enough storage capacity for the replication journal.
The amount of capacity needed for the journal is based on the following factors:
● Minimal requirements—28 GB multiplied by the number of SDR sessions. The number of SDR sessions is equal to the
number of SDRs plus one. The extra SDR session ensures that a new session can be allocated for an SDR during a
system upgrade.
● The capacity needed to sustain an outage—application WAN bandwidth multiplied by the planned WAN outage. In general,
journal capacity in the protection domain should be at least 5% of the application pool. If the application has a heavy I/O
load, larger capacity should be used. Similarly, if a long outage is expected, a larger capacity should be allocated. If there
are replicated volumes in more than one storage pool in the protection domain, this calculation should be repeated for
each storage pool, and the allocated journal capacity in the protection domain must at least equal the sum of the size per
application pool.
Use the following steps to calculate exactly how much journal capacity to allocate:
1. Select the storage pools from which to allocate the journal capacity. The journal is shared between all of the replicated RCGs
in the protection domain. Journal capacity should be allocated from storage pools as fast as (or faster than) the storage pool
of the fastest replicated application in the protection domain. It should use the same drive technology and about the same
drive count and distribution in nodes.
2. Consider the minimal requirements needed (28 GB multiplied by the number of SDR sessions) and the capacity needed to
sustain an outage. Journal capacity will be at least the maximum of these two factors.
3. Take into account the expected outage time. The minimal outage allowance is one hour, but at least three hours are
recommended.
4. Calculate the journal capacity needed per application: maximal application throughput x maximum outage interval.
5. Since journal capacity is defined as a percentage of storage pool capacity, calculate the percentage of capacity based on the
previously calculated needs.
For example:
● An application generates 1 GB/s of writes.
● The maximal supported outage is 3 hours (3 hours x 3600 seconds = 10800 seconds).
● The journal capacity needed for this application is 1 GB/s x 10800 s = ~10.547 TB.
● Since the journal capacity is expressed as a percentage of the storage pool capacity, divide the 10.547 TB by the size of the
storage pool, which is 200 TB: 100 x 10.547 TB/200 TB = 5.27%. Round this up to 6%.
● Repeat this for each application being replicated.
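The arithmetic in this example can be scripted; a minimal sketch using bc, with the figures above:

write_bw_gbs=1                  # application write bandwidth, GB/s
outage_s=$((3 * 3600))          # maximal supported outage, seconds
pool_tb=200                     # storage pool size, TB
# Journal capacity as a percentage of the pool (GB converted to TB)
echo "scale=2; 100 * ($write_bw_gbs * $outage_s / 1024) / $pool_tb" | bc    # prints 5.27; round up to 6%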
When a protection domain has several storage pools and several replicated applications, the journal capacity should be
calculated as in the example above, and the capacity can be divided among all the storage pools (provided they are fast
enough). For higher availability, the journal capacity should be allocated from multiple storage pools.
NOTE: When storage pool capacity is critical, capacity cannot be allocated for new volumes or for expanding existing
volumes. This behavior must be taken into account when planning the capacity available for journal usage.
SDRs
Storage Data Replicators (SDRs) are responsible for processing all I/Os of replication volumes.
All application I/Os of replicated volumes are processed by the source SDRs. At the source, application I/Os are sent by the
SDC to the SDR. The I/Os are then sent to the target SDRs and stored in their journals, and from the journals they are applied
to the target volumes. A minimum of two SDRs are deployed at both the source and target systems to maintain high availability.
If one SDR fails, the MDM directs the SDC to send the I/Os to an available SDR.
Add SDR
Add an additional SDR to an existing PowerFlex system, or add back an SDR that was previously removed. A minimum of two
SDRs are required on each replication system. Each SDR must be configured with one or more IP addresses and roles.
The SDR communicates with several components, including the SDC (application), the SDS (storage), and the remote SDR (external).
When an IP address is added to an SDR, the role or roles of the IP address must be defined. The IP address role determines the
component with which that IP address communicates. For example, the application role means that the associated IP address is
used for SDR-SDC communication. By default, all the roles are selected for an IP address.
SDR components must be deployed as resources before you can add them using this procedure.
1. On the menu bar, click Protection > SDRs.
2. Click Add SDR.
3. In the Add SDR dialog box, enter the connection information of the SDR:
a. Enter the SDR name.
b. If necessary, modify the SDR port number.
c. Select the relevant protection domain.
d. Enter the IP address of the SDR.
e. Select one or more roles. By default, all roles are selected.
f. If the SDR has more than one IP address, click Add IP to add more IP addresses and their roles.
g. Click Add SDR to initiate a connection with the peer system.
4. Verify that the operation has finished and was successful, and click Dismiss.
After failover Reverse/restore or Remove N/A - data is not replicated By default, access to the volume is allowed through the original target (system B). It is possible to enable access through the original source (system A).
Create an RCG
Create a replication consistency group (RCG) and add it to the system. Replication occurs between volumes, and RCGs maintain
consistency between volume pairs in an RCG.
● Volumes in an RCG pair must be exactly the same size.
● Protection domains must be configured on both source and target systems.
The recovery point objective (RPO) configured in RCGs defines the maximum amount of time during which data can be lost.
Dell recommends setting a low RPO to minimize the data lost if the transfer of data from source to target is compromised. For
example, if one minute is set as the RPO, you will not lose more than 30 seconds of data, because the system synchronizes the
target at intervals of no more than half the RPO. Replication does not begin until the source volume data is consistent with the
target volume data.
1. On the menu bar, click Protection > RCGs.
2. Click Add RCG.
3. In the Add RCG wizard, enter the information for the RCG:
8. Click Finish.
Modify RPO
Update recovery point objective (RPO) time as required.
The recovery point objective (RPO) defines the maximum amount of time during which data can be lost. Dell recommends
setting a low RPO to minimize the data lost if the transfer of data from source to target is compromised. For example, if one
minute is set as the RPO, you will not lose more than 30 seconds of data.
1. On the menu bar, click Protection > RCGs.
2. Click the relevant RCG check box, and click Modify > Modify RPO.
3. In the Modify RPO for RCG dialog box, modify the RPO time, and click Apply.
4. Verify that the operation has finished and was successful, and click Dismiss.
Perform a failover
If the system is not healthy, you can fail over the source role to the target system. When the source is compromised, the host
stops sending I/Os to the source volume, replication is stopped, and the target system takes on the source role. The host on
the target starts sending I/Os to the volume. The target takes on the role of source, and the source takes on the role of target.
Before performing a failover, stop the application and unmount the file systems at the source (if the source is available). Target
volumes can only be mapped after performing a failover.
There are two options when choosing to fail over an RCG:
● Switchover—This option is a complete synchronization and failover between the source and the target. Application I/Os are
stopped at the source, and then the source and target volumes are synchronized. The access mode of the target volumes is
changed for the target host, roles are switched, and finally, the access mode of the new source volumes is changed to read/write.
● Latest PIT—The system fails over to the latest point in time (PIT) and prevents any writes to the source volumes.
1. On the menu bar, click Protection > RCGs.
2. Click the relevant RCG check box, and then click More Actions > Failover.
3. In the Failover RCG dialog box, select one of the following options: Switchover (Sync & Failover) or Latest PIT: (date
& time).
4. Click Apply Failover.
5. In the RCG Sync & Failover dialog box, click Proceed.
6. Verify that the operation has finished and was successful, and click Dismiss.
7. From the upper right, click the Running Jobs icon and check the progress of the failover.
Reverse replication
When the RCG is in failover or switchover mode, you can reverse or restore the replication. Reversing replication changes the
direction, so that the original target becomes the source. All data at the original source is overwritten by the data at the target
side. This option may be selected from either source or target systems.
This option is available when the RCG is in failover mode, or when the target system is not available. Dell recommends taking a
snapshot of the original source for backup purposes before reversing the replication.
1. On the menu bar, click Protection > RCGs.
2. Click the relevant RCG check box, and click More Actions > Reverse.
Restore replication
When the replication consistency group is in failover or switchover mode, you can reverse or restore the replication. Restoring
replication maintains the replication direction from the original source and overwrites all data at the target side. This option
can be selected from either the source or the target system.
This option is available when an RCG is in failover mode, or when the target system is not available. For backup purposes, Dell
recommends taking a snapshot of the original destination before restoring replication.
1. On the menu bar, click Protection > RCGs.
2. Click the relevant RCG check box, and click More Actions > Restore.
3. In the Restore Replication RCG dialog box, click Apply.
4. Verify that the operation has finished and was successful, and click Dismiss.
Test Failover
Test a failover using the latest snapshot copies of the source and target systems before performing an actual failover.
During the test, replication continues to run and remains in a healthy state.
1. On the menu bar, click Protection > RCGs.
2. Click the relevant RCG check box, and then click More Actions > Test Failover.
3. In the RCG Test Failover dialog box, click Start Test Failover.
4. In the RCG Test Failover using target volumes dialog box, click Proceed.
5. Verify that the operation has finished and was successful, and click Dismiss.
Freeze an RCG
The freeze command stops data from being applied from the target journal to the target volume. Use this option while creating
a snapshot or copy of the replicated volume.
1. On the menu bar, click Protection > RCGs.
2. Click the relevant RCG check box, and click More Actions > Freeze Apply.
3. Click Freeze Apply.
4. Verify that the operation has finished and was successful, and click Dismiss.
Unfreeze an RCG
The unfreeze apply command resumes data transfer from the target journal to the target volume.
1. On the menu bar, click Protection > RCGs.
2. Click the relevant RCG check box, and click More Actions > Unfreeze Apply.
3. Click Unfreeze Apply.
4. Verify that the operation has finished and was successful, and click Dismiss.
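The freeze and unfreeze commands are typically used to bracket a snapshot of the replicated volume. The following minimal Python sketch shows that ordering with a hypothetical client wrapper; client, freeze_rcg, snapshot_volume, and unfreeze_rcg are illustrative placeholders, not the PowerFlex API.

def snapshot_replicated_volume(client, rcg_id: str, volume_id: str) -> str:
    # Hypothetical workflow: stop applying target journal data, take a
    # consistent copy of the target volume, then always resume replication.
    client.freeze_rcg(rcg_id)
    try:
        snap_id = client.snapshot_volume(volume_id)
    finally:
        client.unfreeze_rcg(rcg_id)
    return snap_id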
Delete an RCG
If you no longer require replication of the pairs in an RCG, you can delete the RCG.
1. On the menu bar, click Protection > RCGs.
2. Click the relevant RCG check box, and click More Actions > Delete.
3. In the Delete RCG dialog box, verify that you have selected the desired RCG, and click Delete.
4. Verify that the operation has finished and was successful, and click Dismiss.
The Resource Groups page displays resource groups in the following states, in both Tile and List views.
State Description
Healthy The resource group is successfully deployed and is healthy.
Warning One or more resources in the resource group requires corrective action.
Critical The resource group is in a severely degraded or nonfunctional state and requires
attention.
Pending The deployment is scheduled for a later time or date.
In Progress The resource group deployment is in progress, or has other actions currently in
process, such as a node expansion or removal.
Cancelled The resource group deployment has been stopped. You can update the resources or
retry the deployment, if necessary.
Incomplete The resource group is not fully functional because it has no volumes that are
associated with it. Click Add Resources to add volumes.
Service Mode The resource group is in service mode.
Lifecycle Mode The resource group is in lifecycle mode. Resource groups in lifecycle mode
are enabled with health and compliance monitoring, and non-disruptive upgrade
features only.
Managed Mode The resource group is in managed mode. Resource groups in managed mode are
enabled with health and compliance monitoring, non-disruptive update, automated
resource addition, and automated resource replacement features.
To switch views, click the Tile View icon or the List View icon.
To view resource groups in a particular state, select an option from the Filter By drop-down list.
Alternatively, in the Graphical view, click the graphic for a particular state.
In the Tile view, each square tile represents a resource group and has the status of the resource group at the bottom of
the graphic. The state icon on the graphic indicates the state of the resource group. The components in blue indicate the
component types that are in the deployment. The components that are in gray indicate the component types that are not in the
resource group.
In the List view, the following information displays:
● Status—Status of the resource group.
● Name—Name of the resource group.
● Deployed By—Name of the user who deployed the resource group.
● Deployed On—Date and time when the resource group was deployed.
● Components—Components used in the resource group.
Click the resource group in the List view or Tile view to view the following information about the resource group in the right
pane:
● Resource group name and description to identify the resource group.
● Name of the user who deployed the resource group.
● Date and time when the resource group was deployed.
● Name of the reference template that is used in the resource group.
● Number of resources that are in the resource group for deployment, based on component type (cluster or node).
Click View Details to view more details about the resource group. You can also generate troubleshooting bundles from the
resource group details page.
Click the resource group name in the List view to open the Resource Group Details page.
Click Update Resources to update the firmware of all nodes in the resource group that are not compliant.
Related information
Configuring block storage
Deploying and provisioning
Basic tasks
This section provides basic tasks for resource group management.
d. Indicate Who should have access to the resource group deployed from this template by selecting one of the
following options:
● To restrict access to super users, select Only PowerFlex SuperUser.
● To grant access to super users and some specific lifecycle administrators and drive replacers, select the PowerFlex
SuperUser and specific LifecycleAdmin and DriveReplacer option, and perform the following steps:
i. Click Add User(s) to add one or more LifecycleAdmin or DriveReplacer users to the list displayed.
ii. Select which users will have access to this resource group.
iii. To delete a user from the list, select the user and click Remove User(s).
iv. After adding the users, select or clear the check box next to the users to grant or block access.
● To grant access to super users and all lifecycle administrators and drive replacers, select PowerFlex SuperUser and
all LifecycleAdmin and DriveReplacer.
3. Click Next.
4. On the screens that follow the Deployment Settings page, configure the settings, as needed for your deployment.
5. Click Next.
6. On the Schedule Deployment page, select one of the following options and click Next:
● Deploy Now—Select this option to deploy the resource group immediately.
● Deploy Later—Select this option and enter the date and time to deploy the resource group.
7. Review the Summary page.
The Summary page gives you a preview of what the resource group will look like after the deployment.
8. Click Finish when you are ready to begin the deployment. If you want to edit the resource group, click Back.
Related information
Lifecycle
Viewing resource group details
Adding components to a resource group
Build and publish a template
Component types
Removing a resource group
5. To specify the compliance version to use for compliance, select the version from the Firmware and Software Compliance
list or select Use PowerFlex Manager appliance default catalog.
You cannot specify a minimal compliance version when you add an existing resource group, because a minimal compliance
version includes only server firmware updates. The compliance version for an existing resource group must include the full
set of compliance update capabilities. PowerFlex Manager does not show any minimal compliance versions in the Firmware
and Software Compliance list.
NOTE: Changing the compliance version might update the firmware level on nodes for this resource group. Firmware on
shared devices is maintained by the global default firmware repository.
6. Specify the resource group permissions under Who should have access to the resource group deployed from this
template? by performing one of the following actions:
● To restrict access to super users, select Only PowerFlex SuperUser.
● To grant access to super users and some specific lifecycle administrators and drive replacers, select the PowerFlex
SuperUser and specific LifecycleAdmin and DriveReplacer option, and perform the following steps:
a. Click Add User(s) to add one or more LifecycleAdmin or DriveReplacer users to the list.
b. Select which users will have access to this resource group.
c. To delete a user from the list, select the user and click Remove User(s).
d. After adding the users, select or clear the check box next to the users to grant or block access.
● To grant access to super users and all lifecycle administrators and drive replacers, select PowerFlex SuperUser and all
LifecycleAdmin and DriveReplacer.
7. Click Next.
8. Choose one of the following network automation types:
● Full Network Automation
● Partial Network Automation
When you choose Partial Network Automation, PowerFlex Manager skips the switch configuration step, which is normally
performed for a resource group with Full Network Automation. Partial network automation allows you to work with
unsupported switches. However, it also requires more manual configuration before a deployment can proceed successfully.
If you choose to use partial network automation, you give up the error handling and network automation features that are
available with a full network configuration that includes supported switches.
In the Number of Instances box, provide the number of component instances that you want to include in the template.
9. On the Cluster Information page, enter a name for the cluster component in the Component Name field.
10. Select values for the cluster settings:
For a hyperconverged or compute-only resource group, select values for these cluster settings:
a. Target Virtual Machine Manager—Select the vCenter name where the cluster is available.
b. Data Center Name—Select the data center name where the cluster is available.
NOTE: Ensure that the selected vCenter uses unique cluster names if it contains multiple clusters.
c. Cluster Name—Select the name of the cluster you want to discover.
d. OS Image—Select the image or choose Use Compliance File ESXi image if you want to use the image provided with
the target compliance version. PowerFlex Manager filters the operating system image choices to show only ESXi images
for a hyperconverged or compute-only resource group.
For a storage-only resource group, select values for these cluster settings:
a. Target PowerFlex Gateway—Select the gateway where the cluster is available.
b. Protection Domain—Select the name of the protection domain in PowerFlex.
c. OS Image—Select the image or choose Use Compliance File Linux image if you want to use the image provided with
the target compliance version. PowerFlex Manager filters the operating system image choices to show only Linux images
for a storage-only resource group.
For a PowerFlex file resource group, select values for these cluster settings:
17. To import many general-purpose VLANs from vCenter, perform these steps:
a. Click Import Networks on the Network Mapping page.
PowerFlex Manager displays the Import Networks wizard. In the Import Networks wizard, PowerFlex Manager lists
the port groups that are defined on the vCenter as Available Networks. You can see the port groups and the VLAN IDs.
b. Optionally, search for a VLAN name or VLAN ID.
PowerFlex Manager filters the list of available networks to include only those networks that match your search.
c. Click each network that you want to add under Available Networks. If you want to add all the available networks, click
the check box to the left of the Name column.
d. Click the double arrow (>>) to move the networks you chose to Selected Networks.
PowerFlex Manager updates the Selected Networks to show the ones that you have chosen.
e. Click Save.
Related information
Getting started
Lifecycle
Support for full and partial network automation
Migrating vCLS VMs to shared storage
Related information
Upgrading a PowerFlex gateway
10. If you encounter any errors while performing firmware or software updates, you can view the PowerFlex Manager logs for
the resource group. On the Resource Group Details page, click Generate Troubleshooting Bundle.
This action creates a compressed file that contains:
● PowerFlex Manager application logs
● SupportAssist logs
● PowerFlex gateway logs
● iDRAC life-cycle logs
● Dell PowerSwitch switch logs
● Cisco Nexus switch logs
● VMware ESXi logs
● CloudLink Center logs
The logs are for the current resource group only.
Alternatively, you can access the logs from a VMware console, or by using SSH to log in to PowerFlex Manager.
Related information
Lifecycle
5. Specify the type of maintenance that you want to perform by selecting one of the following options:
● Instant Maintenance Mode enables you to perform short-term maintenance that lasts less than 30 minutes. PowerFlex
Manager does not migrate the data.
● Protected Maintenance Mode enables you to perform maintenance that requires longer than 30 minutes in a safe
and protected manner. When you use protected maintenance mode, PowerFlex makes a temporary copy of the data
so that the cluster is fully protected from data loss. Protected maintenance mode applies only to hyperconverged and
storage-only resource groups.
6. Click Finish.
PowerFlex Manager displays a yellow warning banner at the top of the Resource Groups page. The Service Mode icon
displays for the Deployment State and Overall Resource Group Health, and for the Resource Health for the selected
nodes.
7. When you are ready to leave service mode, click More Actions > Exit Service Mode.
Replacing a drive
You can replace a failed drive in a deployed node. PowerFlex Manager supports the replacement of SSD and NVMe drives for
PowerFlex storage-only nodes and hyperconverged nodes.
Ensure that you have a replacement drive and can access the node.
This capability is available on the PowerFlex appliance and PowerFlex rack offerings only.
PowerFlex Manager supports drive replacement for:
● Nodes that have NVDIMM compression enabled.
● SSD for HBA330 controllers.
NOTE: RAID controllers are not supported.
● CloudLink-enabled PowerFlex nodes with self-encrypting drives (SEDs) or software encryption.
If DAS Cache is installed on a node, or if the node has an NSX-T or NSX-V configuration, PowerFlex Manager does not allow
you to replace a drive.
1. On the menu bar, click Lifecycle > Resource Groups.
2. On the Resource Groups page, select a resource group and click View Details in the right pane.
3. Scroll down to the Physical Nodes section of the page.
4. Under Physical Nodes, click Drive Replacement.
PowerFlex Manager displays the Node List panel in the Drive Replacement wizard.
5. Click the node for which you want to perform a drive replacement.
6. Click Next.
PowerFlex Manager displays the Select Drive panel. Any empty slots are shown in gray. The slots that have drives are
shown in black.
7. Select the drive that you want to replace by clicking the drive slot in the hardware image, or select the drive from the table.
Be sure to click the correct drive, because the drive replacement process is irreversible.
The color of the selected drive changes to blue, and the corresponding row is selected in the table below the hardware image.
The table shows details about the selected drive. To help you pick the correct drive, PowerFlex Manager provides the iDRAC
name for the drive, the PowerFlex drive name, and the serial number.
8. Optionally, click Launch iDRAC GUI to see iDRAC details about the selected drive before proceeding.
When you launch iDRAC, the iDRAC GUI opens in a different tab. Log in and go to the drive details.
Related information
Viewing PowerFlex system details
3. For a hyperconverged or compute-only resource group, select a storage pool in the Storage Pool drop-down.
The list of storage pools available for selection is filtered to list the storage pools in the selected protection domain.
4. Review the Destination Datastores. The destination datastores are the two heartbeat datastores that PowerFlex Manager
creates automatically when you migrate the vCLS VMs to shared storage.
PowerFlex Manager also creates two resource group volumes and maps these volumes to the destination datastores.
Only two datastores and resource group volumes are created. If you already have existing datastores, PowerFlex Manager
adds the new datastores to the list. If you already have datastores with the same names in the same cluster, PowerFlex
Manager does not create new ones, but simply uses the ones that exist already.
Related information
Adding an existing resource group
Viewing PowerFlex system details
If a node has an NSX-T or NSX-V configuration, PowerFlex Manager removes the Add Resources button under Resource
Actions.
If the PowerFlex gateway used in the resource group is being updated on the Resources page, PowerFlex Manager also
removes the Add Resources button and does not allow you to add a node.
Related information
Component types
Deploying a resource group
3. Click Save.
g. When you are ready to add the selected volumes, click Add.
h. After you have selected the volumes that you want to add, define a template for datastore names in the Datastore
Name Template field and click Next.
The template must include a variable that allows PowerFlex Manager to produce a unique datastore name.
3. To create new volumes for a storage-only or hyperconverged resource group, select Create New Volumes and follow these
steps:
If you want to create a single volume with a specified name:
If you want to create multiple volumes that share a common naming pattern:
e. In the Datastore Name Template field, define a template for datastore names.
The template must include a variable that allows PowerFlex Manager to produce a unique datastore name (see the
sketch after this procedure).
f. In the Storage Pool drop-down, choose the storage pool where the volume will reside.
g. Select the Enable Compression check box to take advantage of the PowerFlex NVDIMM compression feature.
h. In the Volume Size (GB) field, select the size in GB. The minimum size is 8 GB and the value you specify must be
divisible by eight.
i. In the Volume Type field, select thick or thin.
A thick volume allocates the full amount of storage in advance, whereas a thin volume allocates storage on demand and
provides faster setup and startup times.
If you enable compression on a hyperconverged or storage-only resource group with the granularity of the storage pool
set to fine, the only option for Volume Type is thin. This is the case regardless of whether you deploy a compressed or
non-compressed volume.
4. Optionally, click Add volume again to add another volume section. Then, provide the required information for that section.
5. Click Next once you have included information about all of the volumes you want to add.
6. On the Summary screen, review the volume details to be sure that everything looks correct.
If you added existing volumes, you can click View Volumes to review the list of volumes previously selected.
7. Click Finish.
The resource group moves to the In Progress state and the new volume icons appear on the Resource Group Details
page. You may see multiple volume components while the add operation is still in progress. Once the operation is complete,
you will see just one volume component with the count updated.
After the deployment completes successfully, you can click View Volumes in the Storage list on the Resource Group
Details page to search for volumes that are part of the resource group.
The PowerFlex user interface shows the new volumes under the storage pool. For a storage-only resource group, the
volumes are created, but not mapped. For a compute-only or hyperconverged resource group, the volumes are mapped to
SDCs. In the vSphere client, you can see the volumes in the storage section and also see the hosts that are mapped to the
volumes, once the mappings are in place.
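To illustrate why the Datastore Name Template must contain a variable, the following minimal Python sketch expands a ${num}-style template into unique datastore names. The exact variable syntax accepted in this field is an assumption modeled on the hostname variables documented later in this guide.

def expand_datastore_names(template: str, count: int, start: int = 1) -> list:
    # Replace the ${num} placeholder with an incrementing number so that
    # every generated datastore name is unique.
    if "${num}" not in template:
        raise ValueError("template must include a variable such as ${num}")
    return [template.replace("${num}", str(n)) for n in range(start, start + count)]

print(expand_datastore_names("powerflex-ds-${num}", 3))
# ['powerflex-ds-1', 'powerflex-ds-2', 'powerflex-ds-3']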
Related information
Resize a volume
Viewing and selecting volumes
Resize a volume
After adding volumes to a resource group, you can resize the volumes.
For a storage-only resource group, you can increase the volume size. For a VMware ESXi compute-only resource group, you can
increase the size of the datastore that is associated with the volume. For a hyperconverged resource group, you can increase
the size of both the volume and the datastore.
If you resize a volume in a storage-only resource group, you must update the datastore size in the corresponding VMware ESXi
compute-only resource group. The datastore size cannot exceed the size of the volume.
1. On the Resource Groups page, click the volume component and choose Volume Actions > Resize.
2. Choose the volume that you want to resize:
a. Click Select Volume.
b. Enter a volume or datastore name search string in the Search Text box.
c. Optionally, apply additional search criteria by specifying values for the Size, Type, Compression, and Storage filters.
d. Click Search.
PowerFlex Manager updates the results to show only those volumes that satisfy the search criteria. If the search returns
more than 50 volumes, you must refine the search criteria so that it returns 50 or fewer volumes.
e. Select the row for the volume you want to resize.
f. Click Apply.
3. Update the sizing information:
If you are resizing a volume for a hyperconverged resource group, perform these steps:
a. In the New Volume Size (GB) field, specify a value that is greater than the current volume size.
b. Optionally, select Resize Datastore to increase the size of the datastore.
If you are resizing a volume for a storage-only resource group, enter a value in the New Volume Size (GB) field. Specify a
value that is greater than the current volume size. Values must be multiples of eight, or an error occurs (see the sketch after
these steps).
If you are resizing a volume for a compute-only resource group, review the Volume Size (GB) field to see if the volume size
is greater than Current Datastore Size (GB). If it is, PowerFlex Manager expands the datastore size.
4. Click Save.
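Because volume sizes must be multiples of 8 GB, a front-end script that prepares create or resize requests can validate or round a requested size before submitting it. A minimal Python sketch of the stated rule; the round-up policy is an illustration, not product behavior.

import math

MIN_SIZE_GB = 8  # minimum volume size; values must be divisible by 8

def valid_volume_size(size_gb: int) -> bool:
    return size_gb >= MIN_SIZE_GB and size_gb % 8 == 0

def round_up_volume_size(requested_gb: int) -> int:
    # Round a requested size up to the next valid multiple of 8 GB.
    return max(MIN_SIZE_GB, math.ceil(requested_gb / 8) * 8)

print(valid_volume_size(24))     # True
print(valid_volume_size(20))     # False: not divisible by 8
print(round_up_volume_size(20))  # 24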
Related information
Adding volumes to a resource group
6. Specify the permissions for this resource group under Who should have access to the resource group deployed from
this template? by performing one of the following actions:
● To restrict access to super users, select Only PowerFlex SuperUser.
● To grant access to super users and some specific lifecycle administrators and drive replacers, select the PowerFlex
SuperUser and specific LifecycleAdmin and DriveReplacer option, and perform the following steps:
CloudLink View the following information about the CloudLink Centers participating in the resource
group:
● Health
● Hostname
● Management IP
Storage View details about the storage volumes added for the resource group. Click View
Volumes to search for volumes and see the following information about the volumes:
● Name
● Size
● Type
● Compression
● Storage Pool
● Datastore
Physical Nodes View the following information about the nodes that are part of the resource group:
● Health
● Asset/Service Tag
● iDRAC IP/Hostname
● PowerFlex Mode
The mode for each node is one of the following:
○ Hyper-converged includes both SDS and SDC components.
○ Storage Only includes only the SDS component.
○ Compute Only includes only the SDC component.
● MVM Hostname
For a hyperconverged resource group, PowerFlex Manager adds IP addresses for the
MVM to the node IP list. PowerFlex Manager appends (MVM) to all network names
for the MVM. If there are no MVM hostnames, the MVM Hostname column is not
shown.
● Associated IPs
● MDM Role
The MDM role is the metadata manager role. The MDM role applies only to those
nodes that are part of a PowerFlex cluster. The MDM role is one of the following:
○ Primary: The MDM in the cluster that controls the SDSs and SDCs. The primary
MDM contains and updates the MDM repository, the database that stores the SDS
configuration, and how data is distributed between the SDSs. This repository is
constantly replicated to the secondary MDMs, so they can take over with no delay.
Every PowerFlex cluster has one primary MDM.
○ Secondary: An MDM in the cluster that is ready to take over the primary MDM
role if necessary.
○ Tie Breaker: An MDM whose sole role is to help determine which MDM is the
primary.
○ Standby MDM: A standby MDM can be called on to assume the position of a
manager MDM when it is promoted to be a cluster member.
○ Standby Tie Breaker: A standby node that is prepared to take over as a
tiebreaker.
● Fault Set: A logical group of SDSs within a protection domain; the grouping determines where the copies of data are
placed. If there is no fault set, this column is not shown.
Add resources
Add Resources allows you to add nodes, volumes, and networks to a resource group.
More actions
Under More Actions, you can perform the following tasks:
Click... To...
Update Resource Group Details Update the resource group definition.
Enter Service Mode Put nodes in service mode so that maintenance operations can be performed.
Exit Service Mode Take nodes out of service mode.
Reconfigure MDM Roles Change the MDM role for a node in a PowerFlex cluster. For example, if you add a node
to the cluster, you might want to switch the MDM role from an existing node to the new
node.
Remove Resource Group Remove a resource group that is no longer required.
PowerFlex Manager supports two types of removal:
● Delete the entire resource group, which deletes the deployment information and makes
any required configuration changes for components that are associated with the
resource group. This includes making destructive changes such as unconfiguring the
nodes in the resource group and powering them down.
● Delete the deployment information for the resource group without making any
configuration changes to the deployed components.
Resource actions
Under Resource Actions, you can perform the following tasks:
Click... To...
Add Resources Select the type of the resources that you want to add to the resource group.
If you must input data for the template that is used to create the running resource group,
click Confirm Resource Group Settings. In the Update Resource Group Component
window, enter values for all displayed fields. Click Save.
● Deployed By—Displays the name of the user who deployed the resource group.
● Deployed On—Displays the date and time when the resource group was deployed.
● Reference Template—Displays the name of the reference template that is used in the resource group.
NOTE: For existing resource groups, the name displays as User Generated Template and not a template name from
the inventory.
● User Permissions—Displays one of the following:
○ Enabled—Indicates that the permission is granted for one or more standard users to deploy this resource group.
○ Disabled—Indicates that the permission is not granted for the standard users to deploy this resource group.
Recent activity
Recent Activity displays component deployment status and information about the current deployed resource group.
You can also click the Port View tab to display port view details.
If you want to see the current firmware or software repository that is in use, look at Target Version. To change the compliance
version, click Change Target.
If you select a minimal compliance version for a resource group, PowerFlex Manager puts the resource group in lifecycle mode
and restricts the actions that can be performed. In lifecycle mode, the resource group supports monitoring, service mode, and
compliance upgrade operations only. All other resource group operations are blocked.
c. When you are ready to add the selected volumes, click Add.
6. If you are selecting a volume for a resize operation, select the volume that you want to resize and click Apply.
Related information
Adding volumes to a resource group
If the node has an NSX-T or NSX-V configuration, you can remove the deployment information for a resource group, but not
delete the resource group entirely. PowerFlex Manager also does not allow you to delete a resource group if the PowerFlex
gateway used in the resource group is currently being updated on the Resources page.
Standard users can delete only the resource groups that they have deployed.
To remove a resource group, perform the following steps:
1. On the menu bar, click Lifecycle > Resource Groups.
2. Select the resource group.
3. On the Resource Group Details page, under More Actions, click Remove Resource Group.
4. In the Remove Resource Group dialog box, select the Resource group removal type:
● Delete Resource Group makes configuration changes to the nodes, switch ports, virtual machine managers, and
PowerFlex to unconfigure those components. Also, it returns the components to the available inventory.
● Remove Resource Group removes deployment information, but does not make any configuration changes to the nodes,
switch ports, virtual machine managers, and PowerFlex. Also, it returns the components to the available inventory.
5. If you choose Remove Resource Group, perform the following steps:
a. To keep the nodes in the inventory, select Leave nodes in PowerFlex Manager inventory and set state to and select
the state:
● Managed
● Unmanaged
● Reserved
b. To remove the nodes, select Remove nodes from the PowerFlex Manager inventory.
c. Click Remove.
6. If you choose Delete Resource Group, perform the following steps:
a. Select Delete Cluster(s) and Remove from vCenter to delete and remove the clusters from vCenter.
b. Select Remove Protection Domain and Storage Pools from PowerFlex to remove the protection domain and
storage pools that are created during the resource group deployment.
If you select this option, you must select the target PowerFlex gateway. The PowerFlex gateway is not removed.
PowerFlex Manager removes only the protection domain and storage pools that are part of the resource group. If
multiple resource groups are sharing a protection domain, you might not want to delete the protection domain.
For a compression enabled resource group, PowerFlex Manager deletes the acceleration pool and the DAX devices when
you delete the resource group.
c. Select Delete Machine Group and remove from CloudLink Center to clean up the related components in CloudLink
Center.
d. If you are certain that you want to proceed, type DELETE RESOURCE GROUP.
e. Click Delete.
Related information
Deploying a resource group
Related information
Deploying and provisioning
Basic tasks
This section provides basic tasks for template management.
Clone a template
The Clone feature allows you to copy an existing template into a new template. A cloned template contains the components
that existed in the original template. You can edit the cloned template to add components or modify the cloned components.
For most environments, you can simply clone one of the sample templates that are provided with PowerFlex Manager and edit
as needed. Choose the sample template that is most appropriate for your environment.
3. In the Clone Template dialog box, enter a template name in the Template Name box.
4. Select a template category from the Template Category list. To create a template category, select Create New Category.
5. In the Template Description box, enter a description for the template.
6. To specify the version to use for compliance, select the version from the Firmware and Software Compliance list or
choose Use PowerFlex Manager appliance default catalog.
You cannot select a minimal compliance version for a template, because a minimal compliance version includes only server
firmware updates. The compliance version for a template must include the full set of compliance update capabilities.
PowerFlex Manager does not show any minimal compliance versions in the Firmware and Software Compliance list.
7. Indicate Who should have access to the resource group deployed from this template by selecting one of the following
options:
● To restrict access to super users, select Only PowerFlex SuperUser.
● To grant access to super users and some specific lifecycle administrators and drive replacers, select the PowerFlex
SuperUser and specific LifecycleAdmin and DriveReplacer option, and perform the following steps:
a. Click Add User(s) to add one or more LifecycleAdmin or DriveReplacer users to the list displayed.
b. Select which users will have access to this resource group.
c. To delete a user from the list, select the user and click Remove User(s).
d. After adding the users, select or clear the check box next to the users to grant or block access.
● To grant access to super users and all lifecycle administrators and drive replacers, select PowerFlex SuperUser and all
LifecycleAdmin and DriveReplacer.
8. Click Next.
9. On the Additional Settings page, provide new values for the Network Settings, OS Settings, Cluster Settings,
PowerFlex Gateway Settings, and Node Pool Settings.
If you clone a template that has a Target CloudLink Center setting, the cloned template shows this setting in the Original
Target CloudLink Center field. Change this setting by selecting a new target for the cloned template in the Select New
Target CloudLink Center setting.
When defining a template, you choose a single CloudLink Center as the target for the deployed resource group. If the
CloudLink Center for the resource group shuts down, PowerFlex Manager loses communication with the CloudLink Center.
If the CloudLink Center is part of a cluster, PowerFlex Manager moves to another CloudLink Center when you update the
resource group details.
Related information
Configuring block storage
Add a template
The Create feature allows you to create a template, clone the components of an existing template into a new template, or
import a pre-existing template.
For most environments, you can simply clone one of the sample templates that are provided with PowerFlex Manager and edit
as needed. Choose the sample template that is most appropriate for your environment.
1. On the menu bar, click Templates.
2. On the Templates page, click Create.
3. In the Create dialog box, select one of the following options:
● Clone an existing PowerFlex Manager template
● Upload External Template
● Create a new template
8. Specify the resource group permissions for this template under Who should have access to the resource group
deployed from this template? by performing one of the following actions:
● To restrict access to super users, select Only PowerFlex SuperUser.
● To grant access to super users and some specific lifecycle administrators and drive replacers, select the PowerFlex
SuperUser and specific LifecycleAdmin and DriveReplacer option, and perform the following steps:
a. Click Add User(s) to add one or more LifecycleAdmin or DriveReplacer users to the list displayed.
b. Select which users will have access to this resource group.
c. To delete a user from the list, select the user and click Remove User(s).
d. After adding the users, select or clear the check box next to the users to grant or block access.
● To grant access to super users and all lifecycle administrators and drive replacers, select PowerFlex SuperUser and all
LifecycleAdmin and DriveReplacer.
9. Click Next to select the remaining options to complete the wizard.
In the Number of Instances box, provide the number of component instances that you want to include in the template.
4. If you are adding a cluster, in the Select a Component box, choose one of the following cluster types:
● PowerFlex Cluster
● VMware Cluster
● PowerFlex File Cluster
5. If you are adding a VM, in the Select a Component box, choose one of the following VM types:
● CloudLink Center
● PowerFlex Gateway
6. Under Related Components, perform one of the following actions:
Related information
Support for full and partial network automation
Deploying a resource group
Edit a template
You can edit an existing template to change its draft state to "published" for deployment or to modify its components and their
properties.
1. On the menu bar, click Templates.
2. Open a template, and click Modify Template.
3. Make changes as required to the settings for components within the template.
Based on the component type, required settings and properties are displayed automatically. You can edit these settings by
performing these steps:
PowerFlex Manager sends the active RCM version to the Data Items portal. The compliance file includes the
RCM and intelligent catalog (IC). When multiple resource groups run on compliance files other than the default
PowerFlex Manager compliance file, all the active and default RCM and IC versions are sent to the Data Items portal. If
PowerFlex Manager is not managing any resource groups, then the default RCM of PowerFlex Manager is sent to the data
items section of the Embedded SupportAssist Enabler. By default, this information is sent every Saturday at 02:00 UTC.
NOTE: If an RCM associated with the template is modified, a wrench icon with the text, Modified, is displayed.
However, if the update file is moved or deleted, the wrench icon with the text, Needs Attention, is displayed.
a. To edit PowerFlex cluster settings, select the PowerFlex Cluster component and click Modify. Make the necessary
changes and click Save.
b. To edit the VMware cluster settings, select the VMware Cluster component and click Modify. Make the necessary
changes, and click Save.
c. To edit node settings, select the Node component and click Modify. Make the necessary changes, and click Save.
4. Optionally, click Publish Template to make the template ready for deployment.
Related information
Component types
d. Indicate Who should have access to the resource group deployed from this template by selecting one of the
following options:
● To restrict access to super users, select Only PowerFlex SuperUser.
● To grant access to super users and some specific lifecycle administrators and drive replacers, select the PowerFlex
SuperUser and specific LifecycleAdmin and DriveReplacer option, and perform the following steps:
i. Click Add User(s) to add one or more LifecycleAdmin or DriveReplacer users to the list displayed.
ii. Select which users will have access to this resource group.
iii. To delete a user from the list, select the user and click Remove User(s).
iv. After adding the users, select or clear the check box next to the users to grant or block access.
● To grant access to super users and all lifecycle administrators and drive replacers, select PowerFlex SuperUser and
all LifecycleAdmin and DriveReplacer.
3. Click Next.
4. On the screens that follow the Deployment Settings page, configure the settings, as needed for your deployment.
5. Click Next.
6. On the Schedule Deployment page, select one of the following options and click Next:
Related information
Lifecycle
Viewing resource group details
Adding components to a resource group
Build and publish a template
Component types
Removing a resource group
Related information
Cluster component settings
The second interface's ports 1 and 2 are automatically replicated to the first interface. This replication applies to sample
templates as well. If you manually create a template from scratch and choose the networks for the interfaces, the second
interface's ports 1 and 2 are not automatically replicated to the first interface.
3. Enter the network VLANs for each port.
NOTE: If you select the same network on multiple interface ports or partitions, PowerFlex Manager creates a team or bond
on systems with the VMware ESXi operating system. This configuration enables redundancy.
1. On the node component page, under Network Settings, click Enabled under Static Routes.
2. Click Add New Static Route.
3. Enter the following information for the static route:
● Source Network—Select the PowerFlex data network or replication network that is the source.
If you add or remove a network for a port, the Source Network list still shows the old networks. To see the changes,
you must save the node settings and then edit the node again.
● Destination Network—Select the PowerFlex data network or replication network that is the destination for the static
route.
● Gateway—Enter the IP address for the gateway.
Related information
Component types
Export a template
To export a template:
1. On the menu bar, click Templates.
2. Select the template that you want to export.
3. Click Export.
4. In the Export Template to ZIP File window, enter values as follows:
a. Enter a name for the template file in the File Name field.
b. If you have set an encryption password, select Use Encryption Password from Backup Setting to use that password.
To set an encryption password, clear this option.
c. If you clear Use Encryption Password from Backup Setting, two additional fields display. Enter a new password in
the Set File Encryption Password field and enter the password again to confirm it.
5. Click Export to download the file. Select a location to save the file and click OK.
Import a template
The Import Template feature allows you to import the components of an existing template and its component configurations
into a template. For example, you can create a template that defines a specific cluster and node topology and import
this template definition into another template. After importing, you can modify the component properties of the imported
components.
Editing an imported template does not affect the original template.
As an alternative to this procedure, you can start from a new template and import an existing template as you create it, which
is the preferred approach. To do this, select Create, and then choose Clone an existing PowerFlex Manager template in the
wizard.
To import a template, perform the following steps:
9. Indicate Who should have access to the resource group deployed from this template by selecting one of the following
options:
● To restrict access to super users, select Only PowerFlex SuperUser.
● To grant access to super users and some specific lifecycle administrators and drive replacers, select the PowerFlex
SuperUser and specific LifecycleAdmin and DriveReplacer option, and perform the following steps:
a. Click Add User(s) to add one or more LifecycleAdmin or DriveReplacer users to the list displayed.
b. Select which users will have access to this resource group.
c. To delete a user from the list, select the user and click Remove User(s).
d. After adding the users, select or clear the check box next to the users to grant or block access.
● To grant access to super users and all lifecycle administrators and drive replacers, select PowerFlex SuperUser and all
LifecycleAdmin and DriveReplacer.
10. Click Upload and Continue.
The Additional Settings page appears.
11. On the Additional Settings page, provide new values for the Network Settings, OS Settings, Cluster Settings,
PowerFlex Gateway Settings, and Node Pool Settings.
12. Click Finish.
7. Select the VM and click Edit > Continue (by default, the number of CloudLink instances is two and PowerFlex Manager
supports up to three instances).
a. Under VM Settings select the Datastore and Network from the drop-down list.
b. Under CloudLink Settings, select the following:
i. For Host Name Selection, select Specify At Deployment Time to enter the hostname manually at deployment
time, or select Auto Generate to have PowerFlex Manager generate the name.
8. Under Additional CloudLink Settings, you can choose either or both of the following settings:
● Configure Syslog Forwarding
a. Select the check box to configure syslog forwarding.
b. For Syslog Facility, select the syslog remote server from the list.
● Configure Email Notifications
a. Select the check box to configure email alerts.
b. Specify the IP address of the email server.
c. Specify the port number for the email server. The default port is 25. Enter the port numbers as a comma-separated
list, with values between 1 and 65535 (see the sketch after this procedure).
d. Specify the email address for the sender.
e. Optionally, specify the username and password.
9. Click Save.
10. Click Publish Template and click Yes to confirm.
11. In the Deploy Resource Group wizard, do the following:
a. Select the published template from the drop-down list, and enter the Resource Group Name and description.
b. Select who should have access to the resource group and click Next.
c. Provide the Hostname and click Next.
d. Select Deploy Now or Schedule Deployment and click Next.
e. Review the details on the Summary page and click Finish.
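For reference, a minimal Python sketch that validates a comma-separated port list the way the wizard describes (values from 1 through 65535). This illustrates the stated rule only; it is not the appliance's validation code.

def parse_port_list(text: str) -> list:
    # Parse a comma-separated port list, enforcing the 1-65535 range.
    ports = []
    for token in text.split(","):
        port = int(token.strip())
        if not 1 <= port <= 65535:
            raise ValueError(f"port {port} is out of range 1-65535")
        ports.append(port)
    return ports

print(parse_port_list("25, 587"))  # [25, 587]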
Deploy a storage-only or hyperconverged resource group that includes a PowerFlex cluster that will be associated with the
PowerFlex file cluster. When you deploy the PowerFlex file cluster, the control volumes that are needed for file enablement will
be added automatically.
1. On the menu bar, click Templates.
2. On the Templates page, click Create.
3. In the Add a Template wizard, click Clone an existing PowerFlex Manager template.
4. For Category, select Sample Templates. For Template to be Cloned, select PowerFlex File or PowerFlex File - SW
Only. Click Next.
5. On the Template Information page, provide the template name, template category, template description, firmware and
software compliance, and who should have access to the resource group deployed from this template. Click Next.
6. On the Additional Settings page, enter new values for the Network Settings, PowerFlex Gateway Settings, OS
Settings, and Node Pool Settings.
For the Network Settings in a PowerFlex file cluster template, you must provide a NAS Management network and two
NAS Data networks.
For the OS Settings, you must choose Use Compliance File Linux Image.
For the PowerFlex Gateway Settings, select block-legacy-gateway.
7. Click Finish.
8. After creating the template, click Templates, select the cloned template, and click Modify Template.
9. Edit the PowerFlex cluster, PowerFlex file cluster, and node components as needed and click Save.
10. Publish the template and deploy the resource group.
NAS volumes are shown during the deployment, and then compressed to one icon with a number based on the number of nodes.
For example, you might see the number 4 for a two-node NAS compute-only deployment. Each time you expand the deployment
by adding another node, the number is incremented to show that another volume is added.
Related information
Configuring file storage
Setting Description
Target PowerFlex Gateway Choose the block-legacy-gateway.
Setting Description
PowerFlex File Gateway Name of the PowerFlex file gateway.
Number of Protection Domains Number of protection domains you want to use in this
template.
Protection Domain <n> One or more protection domains provided by the
PowerFlex cluster deployed as part of the storage-only or
hyperconverged resource group.
This setting will be empty if you have not yet deployed the
storage-only or hyperconverged resource group.
Storage Pool <n> One or more storage pools provided by the PowerFlex cluster
deployed as part of the storage-only or hyperconverged
resource group.
This setting will be empty if you have not yet deployed the
storage-only or hyperconverged resource group.
Setting Description
Number of Instances For a PowerFlex file cluster, you must have a minimum of two nodes
and a maximum of 16 nodes.
The control volumes are special NAS cluster volumes that cannot be
deleted or modified. These control volumes are hidden from view in
the management software appliance.
Related Components Choose All Components.
OS Settings: OS Image For a PowerFlex file cluster, you must choose Use Compliance File
Linux Image.
OS Settings: PowerFlex Role Choose Compute Only.
OS Settings: Enable PowerFlex File This option must be selected for NAS.
OS Settings: Switch Port Configuration Choose Port Channel (LACP Enabled) or Trunk Port. Port
Channel (LACP Enabled) is the preferred option.
Setting Description
Number of Instances For a PowerFlex file cluster, you must have a minimum of 2
nodes and a maximum of 16 nodes. Software-only NAS nodes also
need to be discovered with a resource type of Node (Software
Management).
The control volumes are special NAS cluster volumes that cannot be
deleted or modified. These control volumes are hidden from view in
the management software appliance.
Related Components Choose All Components.
OS Settings: OS Image For a PowerFlex file cluster, you must choose Use Compliance File
Linux Image.
OS Settings: PowerFlex Role Choose Compute Only.
OS Settings: Enable PowerFlex File This option must be selected for NAS.
OS Settings: Switch Port Configuration Choose Port Channel (LACP Enabled) or Trunk Port. Port
Channel (LACP Enabled) is the preferred option.
Related information
Deployment checklist for software management
Building a software management template
Node settings (software management)
Sample templates
Discover a software management node
Item Checked
1 There is a supported operating system that is installed on any hardware, for example, a VM or a server.
2 There is at least one free 100 GB drive with no partitions (required for PowerFlex storage-only nodes and
PowerFlex hyperconverged nodes only).
3 There are no PowerFlex packages preinstalled.
4 The networks are configured including any static routes.
5 Any repository that is configured should be functional.
6 The firewall is enabled and the SSH port is open.
7 The root user is configured in the software management node operating system, and the OS Admin user
with root credentials is configured in PowerFlex Manager.
8 There must be at least one network that is configured in the software management node operating
system that PowerFlex Manager can reach. In addition, there is a minimum of one PowerFlex data type
network that must be configured in PowerFlex Manager, and added to a software management template.
NOTE: When creating a PowerFlex data network to be used with a software management node and
resource group, the VLAN is optional, but is still a required parameter to create the network. If no
VLAN is configured on the network, then enter 1 for this value.
9 The software management node requires access to the PowerFlex Manager HTTP share:
https://[external_loadbalancer_ip_address]/httpshare/download/
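As a quick pre-deployment check, you might verify from the software management node that the HTTP share is reachable. A minimal Python sketch using the requests library; the load-balancer address is a placeholder, and disabling certificate verification assumes a self-signed appliance certificate (pass a CA bundle via verify= in production).

import requests

# Placeholder: substitute your PowerFlex Manager external load-balancer IP.
url = "https://192.0.2.10/httpshare/download/"

response = requests.get(url, verify=False, timeout=10)
print(response.status_code)  # a 2xx or 3xx status suggests the share is reachable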
Related information
Deploy software management
Deploy a software management node
Related information
Deploy software management
Related information
Deploy software management
Related information
Deployment checklist for software management
Node settings (software management)
Operating System Specifies the operating system. Specify one of the following operating systems:
● Red Hat Enterprise Linux/CentOS
● SUSE Linux Enterprise Server (SLES)
● Oracle Enterprise Linux
● Ubuntu Linux
The server is deployed based on this selection.
Use Node For Dell PowerFlex: Indicates that this node component is used for a PowerFlex deployment. Select the check
box for software only (software management).
Enable PowerFlex File: Enables NAS capabilities on the node. Select or clear the check box.
Client Storage Access: Determines how clients access storage. For a storage-only role, select one of the following options:
● Storage Data Client (SDC) Only
● SDC and NVMe/TCP Initiator
For a compute-only role, the Client Storage Access control is not displayed, and the client access is set to SDC
automatically.
For a hyperconverged role, the Client Storage Access control is not displayed, and the client access is set to SDC/SDS
automatically.
Enable Compression: Enables compression on the protection domain. Select or clear the check box.
Enable Replication: Enables replication for a storage-only or hyperconverged resource group. Select or clear the check box.
Node Settings
Component Name: Indicates the node component name. Select Node (Software Only).
Number of Instances: Selects the number of instances that you want to add. If you select more than one instance, a single
component representing multiple instances of an identically configured component is created. Edit the component to add
extra instances. If you require different configuration settings, you can create multiple components. The limit on the number
of instances for software only is 128.
Node Pool: Specifies the pool from which nodes are selected for the deployment. You can select the default Global setting
from the menu or create one.
Network Settings
Add New Interface: Creates a network interface where network settings are specified for a node. Click Add New Interface.
A minimum of one PowerFlex data network that PowerFlex Manager can reach is required. PowerFlex Manager requires a
VLAN to create a network; if there is no VLAN configured on the network, enter 1. Each network has one interface. The
recommendation is to create one interface per IP address that you want PowerFlex Manager to reach.
Validate Settings: Determines what can be chosen for deployment. Click Validate Settings to determine what can be
chosen for a deployment with this template component. The Validate Settings wizard displays a banner when one or more
resources in the template do not match the configuration settings that are specified in the template. The wizard displays the
following tabs:
● Valid (number) lists the resources that match the configuration settings.
● Invalid (number) lists the resources that do not match the configuration settings. The reason for the mismatch is shown
at the bottom of the wizard. For example, you might see Network Configuration Mismatch as the reason if you set the
port layout to use a 100 GbE network architecture, but one of the nodes is using a 25 GbE architecture.
Related information
Deploy a software management node
Deploy software management
Delete a template
The Delete Template option allows you to delete a template from PowerFlex Manager.
1. On the menu bar, click Templates.
2. Select the template that you want to delete. Click More Actions > Delete Template in the right pane.
3. Click Yes to confirm the deletion.
Compliance Indicates if the resource firmware and software compliance state is Compliant,
Non-compliant, Update required, or Update failed.
Compliance is determined by the firmware/software version for the selected
resource, based on the default compliance version. Click the compliance status to
view the compliance report.
Management IP Indicates the resource IP address. Click the IP address to open the Element
Manager.
Deployment Status Indicates if the resource deployment status is In Use, Not in use, Available,
Updating resource, or Pending Updates.
Click the deployment status to view resource group details.
This capability is available on the PowerFlex appliance and PowerFlex rack offerings
only.
To filter the resources that are displayed, click the toggle filters icon on the Resources page.
On the Resources page, you can also perform the following tasks:
View detailed information about a resource Select the resource. In the right pane, click View Details.
View a firmware and software compliance report for a resource Select the resource. In the right pane, click the link
corresponding to the Compliance field.
Update password for a resource Select the resource and click Update Password.
Basic tasks
This section provides basic tasks for resource management.
Discover a resource
A resource must be discovered in PowerFlex Manager before it can be managed.
Before you start discovering a resource, complete the following:
b. Enter the management IP address (or hostname) of the resources that you want to discover in the IP/Hostname Range
field.
To discover one or more nodes by IP address, select IP Address and provide a starting and ending IP address.
To discover one or more nodes by hostname, select Hostname and identify the nodes to discover in one of the following
ways:
● Enter the fully qualified domain name (FQDN) with a domain suffix.
● Enter the FQDN without a domain suffix.
● Enter a hostname search string that includes one of the following variables:
Variable Description
${num} Produces an automatically generated unique number.
${num_2d} Produces an automatically generated unique number that
has two digits.
${num_3d} Produces an automatically generated unique number that
has three digits.
If you use a variable, you must provide a start number and an end number for the hostname search (see the sketch after
this procedure).
c. Select one of the following options from the Resource State list:
Option Description
Managed Select this option to monitor the firmware version compliance, upgrade
firmware, and deploy resource groups on the discovered resources. A managed
state is the default option for the switch, vCenter, element manager, and
PowerFlex gateway resource types.
Resource state must be set to Managed for PowerFlex Manager to send alerts
to SupportAssist.
Unmanaged Select this option to monitor the health status of a device and the firmware
version compliance only. The discovered resources are not available for firmware
updates or for deploying resource groups by PowerFlex Manager.
Reserved Select this option to monitor firmware version compliance and upgrade
firmware. The discovered resources are not available for deploying resource
groups by PowerFlex Manager.
d. To discover resources into a selected node pool instead of the global pool (default), select an existing node pool or
create one from the Discover into Node Pool list. To create a node pool, click the + sign to the right of the Discover into
Node Pool box.
e. Select an existing credential or create one from the Credentials list to discover resource types. To create a credential,
click the + sign to the right of the Credentials box. PowerFlex Manager maps the credential type to the type of
resource that you are discovering. The credential types are as follows:
● Element Manager
● Node (Hardware/Software Management)
● Switch
● VM Manager
● PowerFlex Gateway
● Node (Software Management)
● PowerFlex System
The default node (Hardware/Software Management) credential type is Dell PowerEdge iDRAC Default.
f. If you want PowerFlex Manager to automatically reconfigure the iDRAC nodes it finds, select the Reconfigure
discovered nodes with new management IP and credentials check box. This option is not selected by default,
because it is faster to discover the nodes if you bypass the reconfiguration.
g. To have PowerFlex Manager automatically configure iDRAC nodes to send alerts to PowerFlex Manager, select the Auto
configure nodes to send alerts to PowerFlex Manager check box.
4. Click Next.
You might have to wait while PowerFlex Manager locates and displays all the resources that are connected to the managed
networks.
To discover multiple resources with different IP address ranges, repeat steps 2 and 3.
5. On the Discovered Resources page, select the resources from which you want to collect inventory data and click Finish.
The discovered resources are listed on the Resources page.
Related information
Getting started
Configuring block storage
Resource health status
Compliance status
b. Enter the management IP addresses of the LIA nodes in the MDM Cluster IP Address field. You must provide the IP
addresses for all the nodes in a comma-separated list. The list should include a minimum of three nodes and a maximum
of five nodes.
If you forget to add a node, the node will not be reachable after discovery. To fix this problem, you can rerun the
discovery later to provide the missing node. You can enter just the one missing node, or all the nodes again. If you enter
IP addresses for any nodes that were previously discovered, these nodes are ignored on the second run.
c. For the System ID, specify the same System ID provided when you created the MDS cluster.
d. Select an existing credential or create a new one from the Credentials list. The credential must be a PowerFlex
Management System credential. Be sure to provide the LIA password that was used for the original setup. The LIA
password is required for the mTLS configuration.
4. Click Next.
5. On the Summary page, click Finish.
After you complete the discovery, you should see a PowerFlex System resource on the Resources page. The OS
Hostname and Asset/Service tag are set to powerflex-mds. The discovery process also performs a bootstrap process
that generates the required certificates and places them on the MDS nodes. Once you have completed the steps to discover
a PowerFlex system, you must use the login certificate (not a username and password) to log in to the MDMs.
6. SSH to the two SVMs associated with the powerflex-mds system. Run scli --add_certificate --certificate_file /opt/emc/scaleio/mdm/cfg/mgmt_CA.pem.
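For example, step 6 could be scripted as follows (a minimal sketch; the SVM IP addresses are placeholders, not values from your system):
# Register the management CA certificate on both SVMs (placeholder IPs)
for svm in 192.168.100.11 192.168.100.12; do
  ssh root@"$svm" 'scli --add_certificate --certificate_file /opt/emc/scaleio/mdm/cfg/mgmt_CA.pem'
done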
○ Logical Disks
○ Physical Disks
To see which disks are self-encrypting drives (SED), look at the Security Status. The value Encryption Capable
indicates that the disk is an SED. The value Not Capable indicates that the disk is not an SED.
Related information
Reconfiguring MDM roles
Migrating vCLS VMs to shared storage
Related information
Updating firmware and software
Exporting a compliance report for all resources
6. If you are updating a PowerFlex gateway, type UPDATE POWERFLEX to confirm that you are ready to proceed with the
update.
Related information
Viewing resource group details
Viewing a compliance report for a resource
Upgrading a PowerFlex gateway
Upgrading CloudLink Center
CAUTION: Check the Alerts page before performing the upgrade. Look for major and critical alerts related to
PowerFlex Block and File to be sure the MDM cluster is healthy before proceeding.
1. Choose the PowerFlex gateway from the Resources page.
You cannot upgrade more than one PowerFlex gateway at a time.
2. Click Update Resources.
3. On the Update Details page, check the Needs Attention section to see whether any of the nodes must be reconfigured
before upgrade. Select any nodes that you want to reconfigure. To select all nodes, click the box to the left of SDS Name.
4. Click Next.
5. On the Summary page, choose Allow PowerFlex Manager to perform non-disruptive updates now or Schedule
non-disruptive updates to run later.
Specify the type of update you want to perform by selecting one of the following options:
● Instant Maintenance Mode enables you to perform updates quickly. PowerFlex Manager does not migrate the data.
● Protected Maintenance Mode enables you to perform updates that require longer than 30 minutes in a safe and
protected manner.
6. If you only selected a subset of the nodes for reconfiguration, confirm the reconfiguration by typing RECONFIGURE NODES.
Otherwise, confirm the update action by typing UPDATE POWERFLEX.
If you reconfigured only a subset of the nodes, you must restart the wizard later to reconfigure the remaining nodes before
you can complete the upgrade process.
7. Click Finish.
8. Go to the Resource Groups page to update any resource groups that are not in compliance with the new version of
PowerFlex.
The PowerFlex gateway upgrade process performs some health prechecks to confirm that the resource group is healthy before
the upgrade. If the resource group is not healthy, the PowerFlex gateway upgrade is not successful.
After a successful upgrade, the PowerFlex gateway should be in compliance with the new target version. However, the
nodes in the resource group may require additional maintenance. In this case, you must update any resource groups that are
noncompliant from the Resource Groups page.
When you initiate a PowerFlex gateway update, PowerFlex Manager upgrades both the Gateway RPM and the software
components that are non-compliant.
Related information
Updating a resource group with new firmware and software
Updating firmware and software
Removing resources
Only super users can remove resources from PowerFlex Manager.
To remove a resource from PowerFlex Manager, perform the following steps:
1. On the menu bar, click Resources.
2. On the Resources page, click the All Resources tab.
3. From the list of resources, select one or more resources, and click Remove.
4. Click OK when the confirmation message appears.
If you remove a node, the node state changes to Pending and it powers off.
NOTE: Before you add new nodes that reuse an existing iDRAC IP address in the inventory, remove the old nodes from
PowerFlex Manager, and then discover the new nodes.
Related information
Resource health status
Compliance status
2. To view device details such as hostname, model name, and management IP address, or information about associated devices,
click the specific ports or devices.
3. To view information about intermediate devices in port view, ensure that the devices are discovered and available in the
inventory. Sometimes, connectivity cannot be determined for an existing resource group because the switches have not yet
been discovered. In this case, you see only the node in port view, but you do not see connectivity information. You can
correct this by going back and discovering the switches, and updating the resource group again.
PowerFlex Manager cannot discover interface cards that do not have integrated LLDP support (such as Intel X520).
4. To filter information based on the connectivity, select an option from the Display Connections list. Show All Connections
is the default option.
Related information
Viewing a compliance report for a resource
Resource health status
Compliance status
Related information
Configuration checks
Importing networks
You can import a large number of general-purpose VLANs from vCenter.
This capability is available on the PowerFlex appliance and PowerFlex rack offerings only.
After importing networks, you can add them to templates or resource groups. You can also import networks when you add an
existing resource group.
1. On the menu bar, click Resources.
2. On the All Resources tab, click the VMware vCenter from which you want to import networks.
3. In the right pane, click Import Networks.
PowerFlex Manager displays the Import Networks wizard. In the Import Networks wizard, PowerFlex Manager lists the
port groups that are defined on the vCenter as Available Networks. You can see the port groups and the VLAN IDs.
4. Optionally, search for a VLAN name or VLAN ID.
PowerFlex Manager filters the list of available networks to include only those networks that match your search.
5. Click each network you want to add under Available Networks. If you want to add all the available networks, click the
check box to the left of the Name column.
6. Click the double arrow (>>) to move the networks you chose to Selected Networks.
PowerFlex Manager updates the Selected Networks to show the ones you have chosen.
7. Click Save.
PowerFlex Manager creates a job to add the networks. The job may finish quickly, so you may not see it on the Jobs page.
Go to Settings > Networks to see the imported networks.
5. Click Next.
6. On the Select Credentials page, create a credential with a new password or change to a different credential.
a. Open the iDRAC, OS Password, SVM Password, or MVM Password object under the Type column to see credential
details for each node you selected on the Resources page.
The SVM Password and MVM Password sections do not appear if there is nothing to show for SVMs or MVMs.
b. To create a credential that has the new password, click the plus sign (+) under the Credentials column.
Specify the Credential Name and the User Name for which you want to change the password. Enter the new password
in the Password and Confirm Password fields.
c. To modify the credential, click the pencil icon for the nodes under the Credentials column and select a different
credential.
d. Click Save.
You must perform the same steps for the node operating system and SVM operating system password changes. For a node
operating system credential, only the OS admin credential type is updated.
7. Click Finish.
8. Click Yes to confirm.
PowerFlex Manager starts a new job for the password update operation, and a separate job for the device inventory. The
node operating system, SVM, and MVM operating components are updated only if PowerFlex Manager is managing a cluster
with these components. If PowerFlex Manager is not managing a cluster with these components, these components are not
displayed and their credentials are not updated. Credential updates for iDRAC are allowed for managed and reserved nodes only.
Unmanaged nodes do not provide the option to update credentials.
Related information
Enabling SupportAssist
Events
An event is a notification that something happened in the system. An event happens at a single point in time and has a single
timestamp. An event may be unique or be part of a series.
Each event message is associated with a severity level. The severity indicates the risk (if any) to the system, in relation to the
changes that generated the event message.
PowerFlex Manager stores up to 3 million events or events that are up to 13 months old. When either threshold is exceeded,
PowerFlex Manager automatically purges the oldest events to free up space. The thresholds are reviewed daily.
NOTE: An event is published each day that lists the events which have been removed that day.
Event timestamps use the format YYYY-MM-DD hh:mm:ss.sss (for example, 2024-03-15 13:45:07.123).
Alerts
An alert is a state in the system that is either on or off. Alerts monitor serious events that require user attention or action.
When an alert is no longer relevant or is resolved, the system automatically clears the alerts with no user intervention. This
action ensures that cleared alerts are hidden from the default view so that only relevant issues are displayed to administrators.
Cleared alerts can be optionally displayed through table filtering options. Alerts can also be acknowledged which removes the
alert from default view. Acknowledging an alert does not indicate that the issue is resolved. You can view acknowledged alerts
through the table-filtering options.
Severity Description Example
Major A major level informs you that action is required soon. Example: The MDM certificate is about to expire.
Critical A critical level informs you that the system requires immediate attention. Example: The MDM certificate has expired.
Field Description Example
System impact A free-text description of the system impact of the alert. Example: Risk of cluster unavailability.
Repair Flow Free-text actions that you can take in order to repair the issue. Example: Check that all MDM cluster nodes are functioning correctly, and fix and replace faulty nodes, if necessary, to return to full protection.
Alert Details Any extra details that are relevant to the object or incident involved. Example: Percentage of SP capacity usage.
Associated Events A list of events that modified the life cycle of the alert.
The total number of alerts that can be sent to the Secure Remote Services gateway from the PowerFlex Manager dispatcher
service is restricted to 200 per day. The threshold for the number of alerts for a particular event is three per hour. After
the first three alerts are sent, if a fourth alert is generated for the same event, it is not sent to the Secure Remote Services
gateway. However, if the alerts come from two different systems running iDRAC, the alerts are sent to the Secure Remote
Services gateway.
A new threshold alert is triggered from PowerFlex Manager, and automatically sent to the Secure Remote Services gateway,
when the threshold of 200 alerts per 24 hours is crossed. Similarly, a new alert is triggered from PowerFlex Manager and sent
to the Secure Remote Services gateway when the threshold of three alerts per hour for the same alert type, symptom code, or
resource is reached.
3. You can:
● Click Acknowledge to acknowledge an alert.
● Click Unacknowledge to remove an alert acknowledgment.
Jobs
In PowerFlex Manager, you can view the details of discovery, firmware update, inventory, and resource group deployment jobs.
Only a user with an administrator role can view jobs.
The Jobs page displays the following information about the jobs that are scheduled or running in PowerFlex Manager:
● State — Displays one of the following states that is based on the job status:
○ Error—Job has completed with errors (job is complete but failed on one or more resources).
○ Scheduled—Job is scheduled to run at a specific time. It can be scheduled to run at a single time or at several times as
a recurring job.
○ In progress—Job is running.
● Job Name—Identifies the name of the job.
User management
The User Management page allows you to manage local users, LDAP users, and directory services.
Under Settings > User Management, you can find three pages:
● Local Users
● LDAP Users
● Directory Services
User roles
User roles control which activities each type of user can perform when using PowerFlex Manager.
Ensure that you configure Active Directory before assigning roles. The roles that can be assigned to local users and LDAP
users are identical. Each user can be assigned only one role. If an LDAP user is assigned directly to a user role and also to a
group role, the LDAP user is granted the permissions of both roles.
NOTE: User definitions are not imported from earlier versions of PowerFlex and must be configured again.
The following table summarizes the activities that can be performed for each user role:
LifecycleAdmin A LifecycleAdmin can manage the life cycle of hardware and PowerFlex systems.
● Manage lifecycle operations, resource groups, templates, deployment, backend operations
● Replace drives
● Hardware operations
● View resource groups and templates
● System monitoring (events, alerts)
ReplicationManager The ReplicationManager is a subset of the StorageAdmin role, for work on existing systems for setup and management of replication and snapshots.
● Manage replication operations, peer systems, RCGs
● Manage snapshots, snapshot policies
● View storage configurations, resource details (volume, snapshot, replication views)
● System monitoring (events, alerts)
SnapshotManager SnapshotManager is a subset of StorageAdmin, working only on existing systems. This role includes all operations required to set up and manage snapshots.
● Manage snapshots, snapshot policies
● View storage configurations, resource details
● System monitoring (events, alerts)
Legacy role (prior to PowerFlex 4.0) New role (PowerFlex 4.0 and later)
PowerFlex Monitor Monitor
PowerFlex Back-end configurator LifecycleAdmin
PowerFlex Front-end configurator StorageAdmin
PowerFlex Configurator SystemAdmin
PowerFlex Security SecurityAdmin
PowerFlex Administrator StorageAdmin
PowerFlex local Super User SuperUser
PowerFlex technician commands Support
PowerFlex Manager Administrator SuperUser
PowerFlex Manager Read only Monitor
PowerFlex Manager Standard owner LifecycleAdmin
PowerFlex Manager Standard member LifecycleAdmin
PowerFlex Manager Operator DriveReplacer
NAS Storage admin StorageAdmin
NAS Administrator StorageAdmin
Local users
You can create and manage local users within PowerFlex Manager.
Creating a user
Perform this task to create a local user and assign a role to that user.
1. On the menu bar, click Settings and click User Management.
2. Click Local Users.
3. On the Local Users page, click Create.
4. Enter a unique User Name to identify the user account.
5. Enter the First Name and Last Name of the user.
6. Enter the Email address.
7. Enter a New Password that a user enters to access PowerFlex Manager. Confirm the password in the Verify Password
field.
The password must be at least 8 characters long and contain at least one lowercase letter, one uppercase letter, one number,
and one special character. Passwords cannot contain a username or email address.
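As an illustration only (a bash sketch, not a PowerFlex Manager tool; it checks only the length and character-class rules, not the username or email rule):
# Hypothetical check of the documented password policy
pw='Examp1e!pass'
if [[ ${#pw} -ge 8 && $pw =~ [a-z] && $pw =~ [A-Z] && $pw =~ [0-9] && $pw =~ [^a-zA-Z0-9] ]]; then
  echo "password meets the length and character-class rules"
else
  echo "password violates the policy"
fi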
8. In the User Role box, select a user role. Options include:
● SuperUser
● SystemAdmin
● StorageAdmin
● LifecycleAdmin
● ReplicationManager
● SnapshotManager
● SecurityAdmin
● DriveReplacer
● Technician
● Monitor
● Support
9. Select Enable User to create the account with an Enabled status, or clear this option to create the account with a
Disabled status.
10. Click Submit and click Dismiss.
PowerFlex Manager creates the new user with the specified password. The first time you log in with the new username and
password, PowerFlex Manager asks you to change the password.
Deleting a user
Perform this procedure to remove an existing local user.
1. On the menu bar, click Settings and click User Management.
2. Click Local Users.
3. On the Local Users page, select one or more user accounts to delete.
4. Click Delete.
Click Apply in the warning message to delete the user. Click Dismiss.
Directory services
You can create a directory service that PowerFlex Manager can access to authenticate users.
An Active Directory or Open LDAP user is authenticated against the specific directory domain to which a user belongs.
The Directory Services page displays the following information about PowerFlex Manager active directories:
● LDAP configuration
● User search settings
● Group search settings
From this page, you can:
● Add a directory service (only available when no service is defined in the system)
● Modify a directory service
● Remove a directory service
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/
windowsaccountname", Issuer == "AD AUTHORITY"] => add(store = "Active
Directory", types = ("http://schemas.xmlsoap.org/claims/Group"), query =
";tokenGroups;{0}", param = c.Value);
● Click Finish.
xiv. Click Add Rule... to add another custom rule:
● For the Claim rule template, select Send Claims Using a Custom Rule.
● For the Claim rule name, type Claim of groups membership.
● For the Custom rule, paste in the following string:
● Click Finish.
xv. Click OK.
d. Return to PowerFlex Manager and click I have configured PowerFlex as a SP in my IdP using the metadata above
on the Service Provider page of the Add Identity Provider (IdP) wizard.
e. Click Next.
6. On the IdP Setup page, upload the identity provider metadata so that PowerFlex can establish a connection to the identity
provider.
a. To upload the metadata as a file, select Upload File and specify the file location.
b. To retrieve the metadata from a URL, select URL and specify the following URL: https://hostname/FederationMetadata/2007-06/FederationMetadata.xml
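For example, if the identity provider is AD FS on a host named adfs.example.com (a hypothetical hostname), the metadata URL is https://adfs.example.com/FederationMetadata/2007-06/FederationMetadata.xml.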
c. Click Next.
7. On the Settings page, review the attribute mappings that are imported from the identity provider.
a. Check the attribute mappings to be sure they are correct.
b. Click Next.
8. Review the Summary page and click Finish.
After you add a new identity provider, PowerFlex Manager adds it to the list of identity providers on the SSO Identity
Provider (IdP) Configuration page. In addition, PowerFlex Manager updates the login page to show a login button with the
new identity provider. You can see this button the next time you log in to PowerFlex Manager.
Field Description
State Displays an icon indicating one of the following states:
● Pending—The compliance file download process is in progress.
● Downloading—The compliance file is being downloaded and PowerFlex Manager
provides the percentage complete for the download operation.
● Unpacking—The compliance file is being unpacked and PowerFlex Manager provides the percentage
complete for the unpacking operation.
● Synchronizing—The compliance file is being synchronized with the management
software after unpacking.
● Available—The compliance file is downloaded and copied successfully.
● Error—There is an issue downloading the compliance file.
● Needs Approval—The compliance file is unsigned and needs approval for use.
Version Displays the compliance version.
Source Displays the share path of the compliance version in a file share.
Signature Displays whether the digital signature on the compliance file is signed or unsigned.
File Size Displays the size of the compliance file in GB.
Type Displays Minimal if the compliance file only contains firmware updates, or Full if it
contains firmware and software updates.
View bundles Displays details about any bundles added for the compliance version.
Available Actions Select an option for a compliance file that is in an Error state:
● Delete the compliance version
● Resynchronize a compliance version
Select an option for a compliance file that is in a Needs Approval state:
● Allow unsigned file
● Delete
Use the OS Image Repositories tab to create operating system image repositories and view the following information:
Field Description
State Displays the following states:
● Available—The operating system image repository is downloaded and copied
successfully.
● Pending—The operating system image repository download process is in
progress.
● Error—There is an issue downloading the operating system image repository.
Repository Displays the repository name.
You cannot perform any actions on repositories that are in use. However, you can delete repositories that are in an Available
state, but not in use and not set as a default version.
All the options are available only for repositories in an Error state. The Resynchronize option appears only when you must
perform a backup and restore of a previous image.
Use the Compatibility Management tab to load the compatibility management file. To facilitate upgrades, PowerFlex Manager
uses information that is provided in the compatibility matrix file to determine which upgrade paths are valid and which are
not. When you attempt an upgrade, PowerFlex Manager warns you if the current version of the software is incompatible with
the target version, or if any of the RCM or IC versions that are currently loaded are incompatible with the target compliance
versions.
Related information
Configuring block storage
Deploying and provisioning
Compliance versions
PowerFlex Manager displays the following information about the compliance versions:
Field Description
State Displays an icon indicating one of the following states:
● Pending—The compliance file download process is in progress.
● Downloading—The compliance file is being downloaded and PowerFlex Manager
provides the percentage complete for the download operation.
● Unpacking—The compliance file is being unpacked and PowerFlex Manager provides the percentage
complete for the unpacking operation.
● Synchronizing—The compliance file is being synchronized with the management
software after unpacking.
● Available—The compliance file is downloaded and copied successfully.
● Error—There is an issue downloading the compliance file.
● Needs Approval—The compliance file is unsigned and needs approval for use.
Version Displays the compliance version. If one or more components in the RCM are
modified, a wrench icon and the text "Modified" are displayed next to the name of
the RCM. However, if the file is deleted from the file share after a component is updated,
the wrench icon with the text "Modified | Needs Attention" is displayed.
Source Displays the share path of the compliance version in a file share.
Signature Displays whether the digital signature on the compliance file is signed or unsigned.
File Size Displays the size of the compliance file in GB.
Type Displays Minimal if the compliance file only contains firmware updates, or Full if it
contains firmware and software updates.
View bundles Displays details about any bundles added for the compliance version.
Available Actions Select an option if the compliance file is in an Error or Needs Approval state:
● Delete the compliance version
● Resynchronize a compliance version
● Allow unsigned file
2. In the Add Compliance File dialog, select one of the following options:
● Download from SupportAssist (Recommended)—Select this option to import the compliance file that contains the
firmware bundles you need.
● Download from local network path—Select this option to download the compliance file from a CIFS file share. This
option is intended for sites that do not have Internet access to SupportAssist.
3. If you selected Download from SupportAssist (Recommended), click the Available Compliance Files drop-down list,
and select the file.
Before downloading a compliance file from SupportAssist, you must configure SupportAssist by enabling it in the
Initial Configuration wizard.
If you are downloading a compliance file from SupportAssist, the file type is a ZIP or TAR.GZ file.
4. If you selected Download from local network path, perform the following:
● In the File Path box, enter the location of the compliance file. Use the following format:
○ CIFS share for ZIP file: \\host\share\filename.zip
○ CIFS share for TAR.GZ file: \\host\share\filename.tar.gz
● If you are using a CIFS share, enter the User Name and Password.
● Mark the target upgrade catalog as the default version for compliance checking.
Download the 3.6.x and the 4.5.x software catalogs. PowerFlex requires catalogs for both the version that the system
is currently on and the target version.
5. Click Save.
PowerFlex Manager unpacks the compliance file as a repository that includes the firmware and software bundles that are
required to perform any updates.
If the compliance file contains an embedded ISO file for an operating system image (such as a VMware ESXi or
PowerFlex image), the download process unpacks the file and adds it to the operating system image repository.
The Resources page displays devices that are validated against the default repository.
This capability is available on the PowerFlex appliance and PowerFlex rack offerings only.
There are two types of firmware repositories:
Repository Description
Default The default firmware repository is applied to all devices that are not in a resource
group.
To set a default firmware repository, you must download a compliance file from
either SupportAssist or an internal share through PowerFlex Manager. If you set
a default compliance version, you can view compliance with this repository on the
Resources page.
Devices with firmware levels below the minimum firmware that is listed in the
default compliance version are viewed as non-compliant (out of compliance).
Service level This repository is applied only to nodes that are in service and assigned the service
level firmware repository.
Devices with firmware levels below the minimum firmware level that is listed in the
service level repository are marked as non-compliant. When a service level firmware
repository is assigned to a resource group, the firmware validation is checked only
against the service level firmware repository and the default firmware repository
checks are no longer applied to the devices associated with this resource group.
d. If you are using the CIFS share, enter the User Name and Password to access the share.
3. Click Add.
After adding an OS image, you can modify it or remove it by selecting the image, then clicking Modify or Remove.
4. Click Resynchronize.
The repository state changes to Copying.
Compatibility management
To facilitate upgrades, PowerFlex Manager uses information that is provided in the compatibility matrix file to determine which
upgrade paths are valid and which are not.
When you attempt an upgrade, PowerFlex Manager warns you if the current version of the software is incompatible with the
target version, or if any of the RCM or IC versions that are currently loaded are incompatible with the target compliance
versions. The compatibility matrix file maps all the known valid and invalid paths for all previous releases of the
software.
When you first install PowerFlex Manager, the software does not have the compatibility matrix file, so you must upload the
file before performing any updates. You must upload the latest compatibility matrix file to ensure that PowerFlex Manager has
access to the latest upgrade information.
You can download the file from SupportAssist or upload it from a local directory path. The file has a GPG extension and an
associated compatibility version number.
If you are uploading from a local directory path, ensure that you have access to a valid compatibility matrix file that has the GPG
extension. If you are using SupportAssist, ensure that SupportAssist has been properly configured.
1. On the menu bar, click Settings and then click Compatibility Management.
2. Click Edit Settings.
3. Click Download from Dell Technologies SupportAssist if you are using SupportAssist.
4. Click Upload from Local to use a local file. Then, click Choose File to select the GPG file.
Networking
On the Networking page, you can define, edit, or delete a network. You can also verify which IP addresses are in use before
deployment.
Related information
Getting started
Configuring block storage
Deploying and provisioning
Networks
The Networks page displays information about networks that are defined in PowerFlex Manager, including:
● Name
● Network Type
● VLAN ID
● IP Address Setting
● Starting IP Address
● Ending IP Address
● Role
● IP Address in Use
On the Networks page, you can:
● Define or modify an existing network.
● Delete an existing network.
● Click Export All to export all the network details to a CSV file.
● Export network details for a specific network. To export the specific network details, select a network, and then click
Export Network Details.
● Click a network to see the following details in the Summary tab:
○ Name of the user who created and modified the network.
○ Date and time that the network was created and last modified.
● To sort a column by network name, click the arrow next to the column header. You can also refresh the information on
the page.
If you select a network from Networks list, the network details are displayed.
For a static network, the following information is displayed:
● Subnet Mask
● Gateway
● Primary DNS
● Secondary DNS
● DNS Suffix
● Last Updated By
● Date Last Updated
● Created By
● Date Created
● Static IP Details
Defining a network
Adding the details of an existing network enables PowerFlex Manager to automatically configure nodes that are connected to
the network.
Ensure that the following conditions are met before you define the network:
● PowerFlex Manager can communicate with the out-of-band management network.
● PowerFlex Manager can communicate with the operating system installation network in which the appliance is deployed.
● PowerFlex Manager can communicate with the hypervisor management network.
● The DHCP node is fully functional with appropriate PXE settings to PXE boot images from PowerFlex Manager in your
deployment network.
To define a network, complete the following steps:
1. On the menu bar, click Settings and then click Networking.
2. Click Networks.
The Networks page opens.
3. Click Define. The Define Network page opens.
4. In the Name field, enter the name of the network. Optionally, in the Description field, enter a description for the network.
5. From the Network Type drop-down list, select one of the following network types:
● General Purpose LAN
● Hypervisor Management
● Hypervisor Migration
● Hardware Management
● PowerFlex Data
● PowerFlex Data (Client Traffic Only)
● PowerFlex Data (Server Traffic Only)
● PowerFlex Replication
● NAS File Management
● NAS File Data
● PowerFlex Management
For a PowerFlex configuration that uses a hyperconverged architecture with two data networks, you typically have two
networks that are defined with the PowerFlex data network type. The PowerFlex data network type supports both client and
server communications. The PowerFlex data network type is used with hyperconverged resource groups.
For a PowerFlex configuration that uses a two-layer architecture with four dedicated data networks, you typically have two
PowerFlex (client traffic only) VLANs and two PowerFlex data (server traffic only) VLANs. These network types are used
with storage-only and compute-only resource groups.
7. Optionally, select the Configure Static IP Address Ranges check box, and then do the following:
a. In the Subnet box, enter the IP address for the subnet. The subnet is used to support static routes for data and
replication networks.
b. In the Subnet Mask box, enter the subnet mask.
c. In the Gateway box, enter the default gateway IP address for routing network traffic.
d. Optionally, in the Primary DNS and Secondary DNS fields, enter the IP addresses of primary DNS and secondary DNS.
e. Optionally, in the DNS Suffix field, enter the DNS suffix to append for hostname resolution.
f. To add an IP address range, click Add IP Address Range. In the row, indicate the role in PowerFlex nodes for the IP
address range and then specify a starting and ending IP address for the range. For the Role, select either:
● Server or Client: Range is assigned to the server and client roles.
● Client Only: Range is assigned to the client role on PowerFlex hyperconverged nodes and PowerFlex compute-only
nodes.
● Server Only: Range is assigned to the server role on PowerFlex hyperconverged nodes and PowerFlex storage-only
nodes.
Repeat this step to add IP address ranges as required. For example, you can use one range for SDC and
another for SDS.
NOTE: IP address ranges cannot overlap. For example, you cannot create an IP address range of 10.10.10.1–
10.10.10.100 and another range of 10.10.10.50–10.10.10.150.
8. Click Save.
Modifying a network
If a network is not associated with a template or resource group, you can edit the network name, the VLAN ID, or the IP address
range.
1. On the menu bar, click Settings and then click Networking.
2. Click Networks.
The Networks page opens.
3. Select the network that you want to edit and click Modify. The Modify Network page opens.
4. Edit the information in any of the following fields: Name, VLAN ID, IP Address Range.
For a PowerFlex data or replication network, you can specify a Subnet IP address for a static route configuration. The
subnet is used to support static routes for data and replication networks.
5. Click Save.
You can also view details for a network or export the network details by selecting the network, then clicking View Details or
Export Network Details.
Deleting a network
You cannot delete a network that is associated with a template or resource group.
1. On the menu bar, click Settings and then click Networking.
2. Click Networks.
The Networks page opens.
3. Click the network that you want to delete, and then click Delete.
4. Click OK when the confirmation message is displayed.
Renaming a network
After you have added a system data network, you can edit the network name.
1. On the menu bar, click Settings and then click Networking.
2. Click System Data Networks.
The System Data Networks page opens.
3. Select the network that you want to edit and click Rename. The Rename System Data Network page opens.
4. Edit the information in the System Data Network Name.
5. Click Apply.
Related information
Enabling SupportAssist
(Password-based encryption with message digest (MD5) with at least eight characters)
Related tasks
Modifying an external source
Configuring a destination
Modifying a destination
Add a notification policy
Modify a notification policy
Delete a notification policy
Related tasks
Configuring an external source
Configuring a destination
Modifying a destination
Add a notification policy
Modify a notification policy
Delete a notification policy
Configuring a destination
Define a location where event and alert data that has been processed by PowerFlex Manager should be sent.
1. Go to Settings > Events and Alerts > Notification Policies.
2. From the Destinations pane, click Add.
The Create New Destination Protocol window opens.
3. From the Destination Properties window:
a. Enter the destination name and description.
b. From the Destination Type menu, select either SNMP V2c, SNMP V3, Syslog, Email (SMTP), or Webhook.
4. Click Next.
The Protocol Settings window opens.
5. Depending on the destination type, enter the following information:
Related tasks
Configuring an external source
Modifying a destination
You can edit the information about where event and alert data that is processed by PowerFlex Manager should be sent.
1. Go to Settings > Events and Alerts > Notification Policies.
2. From the Destinations pane, click the destination whose information you want to modify.
The Edit Source window opens.
3. Edit the information and click Submit.
Related tasks
Configuring an external source
Modifying an external source
Configuring a destination
Add a notification policy
Modify a notification policy
Delete a notification policy
5. From the Resource Domain menu, select the resource domain that you want to add a notification policy to. The resource
domain options are:
● All
● Management
● Block (Storage)
● File (Storage)
● Compute (Servers, Operating Systems, virtualization)
● Network (Switches, connectivity etc.)
● Security (RBAC, certificates, CloudLink etc.)
6. Select the check box beside the severity levels that you want to associate with this policy. The severity indicates the risk (if
any) to the system, in relation to the changes that generated the event message.
7. Select the required destination. You can choose one or more destinations. For a webhook, you can have up to three
destinations defined.
8. Click Submit.
After adding a notification policy, you might need to perform additional configuration steps. For example, if you are setting up
a notification policy for a webhook destination that uses BigPanda, you can optionally configure BigPanda to show the severity
Related tasks
Configuring an external source
Modifying an external source
Configuring a destination
Modifying a destination
Modify a notification policy
Delete a notification policy
Related information
Enabling SupportAssist
Related tasks
Configuring an external source
Modifying an external source
Configuring a destination
Modifying a destination
Add a notification policy
Delete a notification policy
Related tasks
Configuring an external source
Modifying an external source
Configuring a destination
Modifying a destination
Add a notification policy
Modify a notification policy
Related information
Getting started
Enabling SupportAssist
Security
On the Security page, you can upload SSL trusted certificates for connecting to Active Directory, as well as appliance SSL
certificates that ensure secure data transmission for PowerFlex Manager. You can also define credentials for the resources that
PowerFlex Manager accesses and manages.
Before you upload a trusted SSL certificate, you must obtain the certificate file. The file must contain an X.509 certificate in
PEM format. It must start with ---BEGIN CERTIFICATE--- and end with ---END CERTIFICATE---.
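Before uploading, you can confirm that a file is a PEM-format X.509 certificate with openssl (a generic check; the filename is a placeholder):
# Print the certificate contents; an error means the file is not valid PEM X.509
openssl x509 -in mycert.pem -text -noout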
1. On the menu bar, click Settings and then click Security.
2. Click SSL Trusted Certificates.
The SSL Trusted Certificates page opens.
3. Click Add.
4. Click Choose File and select the SSL certificate.
5. Provide a Name for the certificate. The name must be a single word.
6. To upload the certificate, click Save.
To delete an SSL trusted certificate, select the certificate and click Delete.
Create credentials
Perform this procedure to create credentials:
1. On the menu bar, click Settings and click Security.
5. In the Credential Name field, enter the name to identify the credential.
6. Click Enable Key Pairs to enable login with SSH key pairs:
To enable key pairs for the Node or Switch credential type:
a. Import an existing key:
i. Click Import SSH Key Pair.
ii. Click Choose File and browse to the file that contains your public and private key, and select the private key.
iii. Type a name for the key pair.
iv. Click Import.
To enable key pairs for the OS Admin or OS User credential type:
a. To create a new key:
i. Click Create a new key.
ii. Click Create & Download Key Pair.
iii. Type a name for the key pair.
iv. Click Create.
The private key file (id_rsa) is downloaded to your downloads folder. Click the Download Public Key button to
download the public key file (id_rsa.pub).
b. To import an existing key:
i. Click Import existing key.
ii. Click Import SSH Key Pair.
iii. Click Choose File and browse to the file that contains your public and private key.
iv. Type a name in the Key Pair Name field.
v. Click Import.
If you enable SSH key pairs for a Node or Switch credential and use that credential for discovery, PowerFlex Manager uses
public or private RSA key pairs to SSH into your node or switch securely, instead of using a username and password. If you
enable SSH key pairs for an OS User or OS Admin credential and use that credential for a deployment, PowerFlex Manager
uses RSA public/private key pairs for the deployment operations.
NOTE: PowerFlex Manager does not consume SSH keys for all component types. For example, if you enable SSH key
pairs for an admin credential, the SSH keys are not used for the deployment of a CloudLink Center VM. In this case, the
username and password would be used instead for all communication.
7. In the Domain box, optionally specify an LDAP domain for the user.
8. In the User Name field, enter the username for the credential.
root is the only valid username for root-level credentials on nodes (iDRAC). You can add iDRAC users with a username
other than root.
For the OS User credential type, you can enter a user other than root. For the embedded operating system, this user
account must have SSH enabled and have sudo access. For ESXi, the account must be configured with the administrator
role on the local server permission setting, which should enable SSH and other tools like esxcli. You can add existing resource
groups with a nonroot user.
For the OS Admin credential type, the User Name field is disabled because the user is assumed to be root. You must use
the root user for new deployments.
Provide two usernames for the PowerFlex gateway credential type:
● Gateway Admin User Name
● Gateway OS User Name
The Gateway admin user is the REST API administrator. The Gateway OS user is the SSH login user. The Gateway admin
user must be the admin user, and the Gateway OS user must be root.
9. In the Password and the Confirm Password boxes, enter the password for the credential.
NOTE: When the SSH key pair feature is enabled, the switch credential does not require the Password option.
● For VMware vCenter and element manager, in the Domain box, optionally enter the domain ID.
● For switch credentials, under Protocol, optionally click one of the following connection protocols that are used to access
the resource from remote:
○ Telnet
○ SSH
NOTE: SSH is enabled on supported switches by default.
10. To configure trap receiving for SNMPv2:
a. Under SNMP Configuration, select V2 as the SNMP type.
b. Click + beside the SNMP v2 Community String box.
The SNMP v2 Community String page opens.
c. Enter the community string by which PowerFlex Manager receives traps from devices and by which it forwards traps to
destinations.
d. Click Save.
NOTE: You can add more than one community string. For example, add more than one if the community string by which
PowerFlex Manager receives traps differs from the community string by which it forwards traps to a remote destination.
Modify credentials
Perform this procedure to modify credentials:
1. On the menu bar, click Settings and click Security.
2. Click Resource Credentials.
The Credentials Management page opens.
3. Select a credential that you want to edit, and click Modify.
4. Modify the credential information in the Modify Credentials dialog box.
5. Click Save.
Remove credentials
Perform this procedure to remove credentials:
1. On the menu bar, click Settings and then click Security.
2. Click Resource Credentials.
The Credentials Management page opens.
3. On the Credentials Management page, select the credential that you want to delete, and click Remove.
4. Click OK when the confirmation message appears.
1. To configure an SSH key pair for a software-only node, run these commands:
2. To configure an SSH key pair for a Cisco Nexus switch, run these commands to add the public key:
3. To configure an SSH key pair for an OS10 switch, run these commands to add the public key:
Note that the quotes must be used as shown in the command line above.
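The original command listings are not reproduced here. As a rough, hypothetical sketch only (the key path, usernames, and key string are placeholders, and the switch syntax should be verified against the switch documentation):
# Software-only node (Linux): generate a key pair and install the public key
ssh-keygen -t rsa -b 4096 -f ~/.ssh/pfxm_key
ssh-copy-id -i ~/.ssh/pfxm_key.pub root@<node-ip>
# Cisco Nexus (NX-OS): bind the public key to a user in configuration mode
configure terminal
username admin sshkey ssh-rsa AAAAB3Nza...example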
Serviceability
On the Serviceability page, you can generate a troubleshooting bundle and also perform a backup of the appliance. To restore
from a backup, you need to run a script outside of PowerFlex Manager. The user interface does not support the ability to
restore from a backup.
For example:
CIFS: \\192.168.1.1\uploadDirectory
4. Click Test Connection to verify the connection to the CIFS share before generating the bundle.
5. Optionally, select Include PowerFlex File Core Dump logs, if you want to include core dump logs for NAS.
The NAS directory structure, nodes, and files are always collected regardless of whether this box is checked. When it is
checked, the additional NAS core dump is collected.
6. To collect PowerFlex logs, select one of the following log level options:
● Default Node Logs
● Default Node Logs plus additional MDM information
● Latest Logs only (Most recent copy of all logs)
7. To collect PowerFlex node logs, select one of the following options:
● Logs from all nodes
● Select Specific Nodes
If you select the Select Specific Nodes option and select the number of nodes for which you want to generate the log, the
View/Select Nodes button is displayed. Click the button to view the list of nodes in the Node Selection window. Select
the required nodes from the Available Nodes list and click >> to view them in the Selected Nodes list. Click Save to
return to the Generate Troubleshooting bundle page.
For the Select Specific Nodes option, Generate is enabled only if a node is selected in the Node Selection window.
8. Click Generate.
Sometimes, the troubleshooting bundle does not include log information for all the nodes. The log collection may appear to
succeed, but the log for one or more of the nodes may be missing. You may see an error message in the scaleio.trace.log file
that says Could not run get_info script. If you see this message, you may need to generate the troubleshooting
bundle again to include information for all the logs.
The Backup page also displays information about the status of automatically scheduled backups (enabled or disabled).
On this page, you can:
● Manually start an immediate backup
● Edit general backup settings
● Edit automatically scheduled backup settings
After performing a backup operation, you need to run a script outside of PowerFlex Manager to restore from the backup. The
user interface does not support the ability to restore from a backup.
Backing up
In addition to automatically scheduled backups, you can manually run an immediate backup.
1. On the menu bar, click Settings > Serviceability.
2. Click Backup.
The Backup page opens.
3. Click Backup Now.
4. Select one of the following options:
● To use the general settings that are applied to all backup files, select Use Backup Directory Path and Encryption
Password from Settings and Details.
● To use custom settings:
a. In the Backup Directory Path box, enter a file path where the backup file is saved. Use this format:
○ CIFS—\\host\share
b. Optionally, enter a username and password in the Backup Directory User Name and Backup Directory Password
boxes, if they are required to access the location you provided.
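For example (placeholder values), a custom backup directory path might be \\192.168.1.1\backups, with the username and password that are required to access that CIFS share.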
Restoring
Restoring PowerFlex Manager returns user-created data to an earlier configuration that is saved in a backup file. To restore
from a backup, you need to run a script outside of PowerFlex Manager. The user interface does not support the ability to
restore from a backup.
Before you begin the restore procedure, you need to satisfy these prerequisites:
● The restore cluster must run exactly the same PowerFlex version and Kubernetes version.
● The restore cluster must have exactly the same IP addresses and configuration.
The cluster configuration must be the same as the cluster configuration where the backup was taken.
○ All Kubernetes nodes must have the same IP addresses.
○ All Kubernetes nodes must have the same names.
○ All LoadBalancer IP addresses must be the same.
CAUTION: Restoring an earlier configuration restarts PowerFlex Manager and deletes data created after the
backup file to which you are restoring. Any running jobs could be terminated as well.
1. Log in to the node where the PowerFlex Manager platform (PFMP) installer was initially run.
2. Run the restore script that is included with the installer bundle:
./restore_backup.sh
To complete the execution of the restore script, you must specify whether the restore operation will be performed on an
existing cluster or a new cluster.
Here is a snippet that shows a sample run of the restore script:
/usr/local/lib/python3.8/site-packages/paramiko/transport.py:236:
CryptographyDeprecationWarning: Blowfish has been deprecated "class":
algorithms.Blowfish, Installation logs are available at <Bundle root>/PFMP_Installer/
logs/ directory. More detailed logs are available at <Bundle root>/atlantic/logs/
directory. PFMP Installer is about to reset a PFMP cluster based on the configuration
specified in the PFMP_Config.json.
Please enter the ssh username for the nodes specified in the
PFMP_Config.json[root]:root
Please enter the ssh password for the nodes specified in the PFMP_Config.json.
Password:
Please enter CIFS password(base64 encoded). Press enter to skip if username is not
required: UmFpZDR1cyE=
The restore process prints out status information until the restore is complete.
# Run this to make it easier to run the rest of the code and set the default
namespace.
alias k="kubectl -n $(kubectl get pods -A | grep -m 1 -E 'platform|pgo|helmrepo' |
cut -d' ' -f1)"
kubectl config set-context default --namespace=$(kubectl get pods -A | grep -m 1 -E
'platform|pgo|helmrepo|docker' | cut -d' ' -f1)
2. Verify the pgo controller pod and all database pods are up and running with no errors in the logs:
# Get the PostgreSQL operator pod and PostgreSQL cluster pods to verify.
echo $(kubectl get pods -l="postgres-operator.crunchydata.com/control-plane=pgo" --no-
headers -o name && kubectl get pods -l="postgres-operator.crunchydata.com/instance"
--no-headers -o name) | xargs kubectl get -o wide
3. Shut down the database cluster:
# Trigger a shutdown
k patch $(k get postgrescluster -o name) --type merge --patch '{"spec":{"shutdown":
true}}'
4. At this point, you can shut down the VMs to take snapshots and adjust resources, as needed. Follow the next steps after the
VMs are powered back on.
5. Restart the database cluster:
# Clear the shutdown flag to bring the database cluster back up
k patch $(k get postgrescluster -o name) --type merge --patch '{"spec":{"shutdown":
false}}'
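After the restart patch is applied, you can confirm that the database pods come back up (a sketch reusing the label selector from the earlier verification step):
# List the PostgreSQL cluster pods and confirm they reach Running
k get pods -l postgres-operator.crunchydata.com/instance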
To restore to the previous point in time, you can revert the VMs from the snapshots in the vSphere Client user interface.
5. At the top of the Management Software Upgrade page, click Upgrade Now.
After you click Upgrade Now, a dialog box displays with a warning message indicating whether the upgrade path is valid.
6. Under Management Credentials, optionally override the default username and password for the nodes hosting PowerFlex
Manager. The upgrade requires access to the nodes that are hosting PowerFlex Manager. Provide a single user that has
superuser privileges for all nodes. You can enter the username for the first row in the table, and the changes you make will
be applied to all the nodes. Enter the password for each of the nodes.
7. Type UPDATE POWERFLEX MANAGER if you want to proceed with the update. If you want to perform an update even if
none is required, type FORCE UPDATE.
8. Click Yes to update PowerFlex Manager.
The update process displays messages indicating the progress of the update. Depending on which services were upgraded,
you may be required to log in once again, after the upgrade process is complete.
You can upgrade from a .tgz file on a network share. This option provides an easy way to update the software in a dark
site. Provide a path using one of the following formats:
Lifecycle
This section provides reference information for the Resource Groups and Templates pages.
Related information
Managing components
Monitoring system health
Managing external changes
Deploying a resource group
Adding an existing resource group
Viewing a compliance report
Resource groups
This section provides reference information for the Resource Groups page.
Related information
Managing components
Managing external changes
Lifecycle mode
If you add an existing resource group that includes an unsupported configuration, PowerFlex Manager might put the resource
group in lifecycle mode. This mode limits the actions that you can perform within the resource group.
Lifecycle mode allows the resource group to perform only monitoring, service mode, and compliance upgrade operations. All
other resource group operations are blocked when the resource group is in lifecycle mode. Lifecycle mode is used to control the
operations that can be performed for configurations that have limited support.
When you add an existing resource group, PowerFlex Manager puts the resource group in lifecycle mode if the configuration you
want to import includes any of the following:
● Invalid server inventory
Templates
This section provides reference information for the Templates page.
Related information
Getting started
Managing components
Managing templates
On the Templates page, you can view information about templates in a list or tile view. To switch between views, click the tile
icon or list icon at the top of the Templates page. When in tile view, you can view the templates under a category by
clicking the graphic that represents the category.
Users with standard permissions can view details of only those templates for which the administrator has granted permission.
Templates - My Templates List View
The My Templates page displays the details of the templates that you have created.
Field Description
+ Add a Template Allows you to add a template.
After you add a template, you can add node, cluster, and VM
components on the template builder page.
Export All Allows you to export all templates to a CSV file.
Filter By Allows you to filter and view templates based on the template
category.
Field Description
Name Displays the template name.
Components Displays the components in the template.
On the Templates page, the right pane displays the name of the template, icons of components in the template, and the
following details for a selected template:
Field Description
Edit Click to edit the template.
Delete Click to delete the template.
View Details Click to view the details of the template, such as components
in the template.
Clone Click to clone the template.
Export Template Click to export the template to a GPG file. You can use the
exported template to duplicate the settings.
Created On Displays the date and time when the template was created.
Created By Displays the name of the user who created the template.
Updated On Displays the date and time when the template was updated.
Updated By Displays the name of the user who updated the template.
Sample templates
This topic lists the sample templates that are provided with PowerFlex Manager.
Related information
Deploy software management
Component types
Components (physical, virtual, or application) are the main building blocks of a template.
PowerFlex Manager has the following component types:
● Node (Software/hardware and software only)
● Cluster
● VM
Related information
Adding components to a resource group
Edit a template
Deploying a resource group
View template details
Node settings
This table describes the following node settings: hardware, BIOS, operating system, and network.
Setting Description
Full network automation Allows you to perform deployments with full network automation. This feature
allows you to work with supported switches, and requires less manual
configuration. Full network automation also provides better error handling
since PowerFlex Manager can communicate with the switches and identify
any problems that may exist with the switch configurations.
Partial network automation Allows you to perform switchless deployments with partial network
automation. This feature allows you to work with unsupported switches,
but requires more manual configuration before a deployment can proceed
successfully. If you choose to use partial network automation, you give up the
error handling and network automation features that are available with a full
network configuration that includes supported switches.
For a partial network deployment, the switches are not discovered,
so PowerFlex Manager does not have access to switch configuration
information. You must ensure that the switches are configured correctly,
since PowerFlex Manager does not have the ability to configure the switches
for you. If your switch is not configured correctly, the deployment may fail
and PowerFlex Manager is not able to provide information about why the
deployment failed.
For a partial network deployment, you must add all the interfaces and ports,
as you would when deploying with full network automation. However, you
do not need to add the operating system installation network, since PXE is
not required for partial network automation. PowerFlex Manager uses virtual
media instead for deployments with partial network automation. The Switch
Port Configuration must be set to Port Channel (LACP enabled). In
addition, the LACP fallback or LACP ungroup option must be configured on
the port channels.
Number of Instances Enter the number of instances that you want to add.
If you select more than one instance, a single component representing
multiple instances of an identically configured component is created.
Edit the component to add extra instances. If you require different
configuration settings, you can create multiple components.
Related Components Select Associate All or Associate Selected to associate all or specific
components to the new component.
Import Configuration from Reference Node Click this option to import an existing node configuration and use it for the
node component settings. On the Select Reference Node page, select the
node from which you want to import the settings and click Select.
OS Settings
Host Name Selection If you choose Specify At Deployment Time, you must type the name for
the host at deployment time.
If you choose Auto Generate, PowerFlex Manager displays the Host Name
Template field to enable you to specify a macro that includes variables that
produce a unique hostname. For details on which variables are supported, see
the context-sensitive help for the field.
If you choose Reverse DNS Lookup, PowerFlex Manager assigns the
hostname by performing a reverse DNS lookup of the host IP address at
deployment time.
OS Image Specifies the location of the operating system image install files. You can use
the image that is provided with the target compliance file, or specify your
own location, if you created additional repositories.
To deploy a compute-only or storage-only resource group with the Linux
image that is provided with a compliance file, choose Use Compliance File
Linux image. If you want to deploy a NAS cluster, you must also choose Use
Compliance File Linux image.
To deploy a storage-only resource group with Red Hat Enterprise Linux,
you must create a repository on the Settings page and specify the path
to the Red Hat Enterprise Linux image on a file share. Dell Technologies
recommends that you use one of your own images that are published from
the customer portal at Red Hat Enterprise Linux.
For Linux, you may include one node within a resource group. For ESXi, you
must include at least two nodes.
NOTE: If you select an operating system from the OS Image drop-down
menu, the field NTP Server displays. This field is optional, but it is
highly recommended that you enter an NTP server IP to ensure proper
time synchronization with your environment and PowerFlex Manager.
If time is not properly synchronized, resource group
deployment can fail.
NTP Server Specifies the IP address of the NTP server for time synchronization.
If adding more than one NTP server in the operating system section of a node
component, be sure to separate the IP addresses with commas.
Use Node For Dell PowerFlex Indicates that this node component is used for a PowerFlex deployment.
When this option is selected, the deployment installs the MDM, SDS, and
SDC components, as required for a PowerFlex deployment in a VMware
environment. The MDM and SDS components are installed on a dedicated
PowerFlex VM (SVM), and the SDC is installed directly on the ESXi host.
To deploy a PowerFlex cluster successfully, include at least three nodes in
the template. The deployment process adds an SVM for each hyperconverged
node. PowerFlex Manager uses the following logic to determine the MDM
roles for the nodes:
1. Checks the PowerFlex gateway inventory to see how many primary
MDMs, secondary MDMs, and tiebreakers are present, and the total
number of SDS components.
2. Adds the number of components being deployed to determine the overall
PowerFlex cluster size. For example, if there are three SDS components
in the PowerFlex gateway inventory, and you are deploying two more, you
will have a five node cluster after the deployment.
3. Adds a single primary MDM and determines how many secondary MDMs
and tiebreakers should be in the cluster by looking at the overall cluster
size. The configuration varies depending on the size of the cluster:
● A three-node cluster has one primary, one secondary, and one
tiebreaker.
● A five-node cluster has one primary, two secondaries, and two
tiebreakers.
4. Determines the roles for each of the new components being added, based
on the configuration that is outlined above, and the number of primary,
secondary, and tiebreakers that are already in the PowerFlex cluster.
At deployment time, PowerFlex Manager automatically sets up the DirectPath
I/O configuration on each hyperconverged node. This setup makes the
devices available for direct access by the virtual machines on the host and
also sets up the devices to run in PCI passthrough mode.
For each SDS in the cluster, the deployment adds all the available disks from
the nodes to the storage pools created.
For each compute-only or hyperconverged node, the deployment installs the
SDC VIB.
When you select this option, the teaming and failover policy for the cluster
are automatically set to Route based on IP hash. Also, the uplinks are
configured as active and active, instead of active and standby. Teaming is
configured for all port groups, except for the PowerFlex data 1 and PowerFlex
data 2 port groups.
If you select the option to Use Node For Dell PowerFlex, the Local Flash
storage for Dell PowerFlex option is automatically selected as the Target
Boot Device under Hardware Settings.
PowerFlex Role Specifies one of the following deployment types for PowerFlex:
● Compute Only indicates that the node is only used for compute
resources.
● Storage Only indicates that the node is only used for storage resources.
Enable PowerFlex File Enables NAS capabilities on the node. If you want to enable NAS on the nodes
in a template, you need to add both the PowerFlex Cluster and PowerFlex
File Cluster components to the template.
This option is only available if you choose Use Compliance File Linux Image
as the OS Image and then choose Compute Only as the PowerFlex Role.
If Enable PowerFlex File is selected, in the Hardware Settings section, the
only available choice for Target Boot Device is Local Hard Drive.
If Enable PowerFlex File is selected, you must ensure that the template
includes the necessary NAS File Management and NAS File Data networks.
If you do not configure these networks on the template, the template
validation fails.
Drive Encryption Type Specifies the type of encryption to use when encryption is enabled.
The options are:
● Software Encryption
● Self Encrypting Drive (SED)
Some nodes might not be selected for deployment, depending on the
encryption type selected. For example, if you choose Software Encryption,
you cannot include a node with only SEDs. Similarly, if you choose Self
Encrypting Drive, you cannot include a node with only software encryption
drives.
PowerFlex Manager does not allow you to mix SEDs and software encryption
drives in the same protection domain. Servers should not typically have this
mix, but PowerFlex Manager checks for this and uses only servers of the type
you specify.
Validate Settings detects nodes that do not match the specified Drive
Encryption Type.
Switch Port Configuration Specifies whether Cisco virtual PortChannel (vPC) or Dell Virtual Link
Trunking (VLT) is enabled or disabled for the switch port.
For hyperconverged templates, the options are:
● Port Channel turns on vPC or VLT.
● Port Channel (LACP enabled) turns on vPC or VLT with the link
aggregation control protocol enabled.
For storage-only and compute-only templates that use a Linux operating
system image, the options are:
● Port Channel (LACP enabled) turns on vPC or VLT with the link
aggregation control protocol enabled.
For a compute-only template that uses an ESXi operating system image, the
Switch Port Configuration setting includes all three options:
● Port Channel turns on vPC or VLT.
● Port Channel (LACP enabled) turns on vPC or VLT with the link
aggregation control protocol enabled.
Teaming And Bonding Configuration The teaming and bonding configuration options depend on the switch port
configuration selected. For hyperconverged and compute-only templates, the
following options are available:
● If you choose Port Channel (LACP enabled) as the switch port
configuration, the only teaming and bonding option is Route Based on
IP hash.
For storage-only templates, the following options are available:
● If you choose Port Channel (LACP enabled) as the switch port
configuration, the only teaming and bonding option is Mode 4 (IEEE
802.3ad policy).
Hardware Settings
Node Pool Specifies the pool from which nodes are selected for the deployment.
BIOS Settings
System Profile Select the system power and performance profile for the node.
User Accessible USB Ports Enables or disables the user-accessible USB ports.
Number of Cores per Processor Specifies the number of enabled cores per processor.
Logical Processor Each processor core supports up to two logical processors. If enabled, the
BIOS reports all logical processors. If disabled, the BIOS reports only one
logical processor per core.
Node Interleaving Enable or disable the interleaving of allocated memory across nodes.
● If enabled, only nodes that support interleaving and have the read/
write attribute for node interleaving set to enabled are displayed. Node
interleaving is automatically set to enabled when a resource group is
deployed on a node.
● If disabled, any nodes that support interleaving are displayed. Node
interleaving is automatically set to disabled when a resource group is
deployed on a node. Node interleaving is also disabled for a resource group
with NVDIMM compression.
● If not applicable is selected, all nodes are displayed irrespective of whether
interleaving is enabled or disabled. This setting is the default.
Network Settings
Multi-Network Selection Select the check box to include multiple management networks of the same
type. If you select multiple networks of the same type without selecting the
check box, an error is displayed when you publish the template. The multiple
network selection is supported on the following networks:
● Hypervisor Management
● PowerFlex Management
● Hypervisor Migration
● Replication Networks
Number of Replication Networks Per Node This option is displayed only if the Multi-Network Selection and Enable
Replication check boxes are enabled. Select the number of networks you
want to add to the port. For example, if the selected number is 2, you can
assign one network each to two ports—Port 1 and Port 2. It is recommended
that the number of selected networks is always even.
Add New Interface Click Add New Interface to create a network interface in a template
component. Under this interface, all network settings are specified for a node.
This interface is used to find a compatible node in the inventory. For example,
if you add Two Port, 100 gigabit to the template, when the template
is deployed PowerFlex Manager matches a node with a two-port 100-gigabit
network card as its first interface.
To add one or more networks to the port, select Add Networks to this
Port. Then, choose the networks to add, or mirror network settings defined
on another port.
To see network changes that were previously made to a template, you can click
View/Edit under Interfaces. Or, you can click View All Settings on the
template, and then click View Networks.
To see network changes at resource group deployment time, click View
Networks under Interfaces.
Add New Static Route Click Add New Static Route to create a static route in a template. To
add a static route, you must first select Enabled under Static Routes. A
static route allows nodes to communicate across different networks. The
static route can also be used to support replication in a storage-only or
hyperconverged resource group.
A static route requires a Source Network and a Destination Network, and
a Gateway. The source and destination network must each be a PowerFlex
data network or replication network that has the Subnet field defined.
If you add or remove a network for one of the ports, the Source Network
drop-down list is not updated and still shows the old networks. To
see the changes, save the node settings and edit the node again.
Validate Settings Click Validate Settings to determine what can be chosen for a deployment
with this template component.
The Validate Settings wizard displays a banner when one or more
resources in the template do not match the configuration settings that are
specified in the template. The wizard displays the following tabs:
● Valid (number) lists the resources that match the configuration settings.
● Invalid (number) lists the resources that do not match the configuration
settings.
The reason for the mismatch is shown at the bottom of the wizard.
For example, you might see Network Configuration Mismatch as
the reason for the mismatch if you set the port layout to use a 100 Gb
network architecture, but one of the nodes is using a 25 Gb architecture.
If you set the encryption method to use self-encrypting drives (SEDs), but
the nodes do not have these drives, you might see Self Encrypting
Drives are required but not found on the node,
or software encryption requested but only available
drives are SED.
After you enter the operating system installation (PXE) network information in the respective field as described in the table
above, PowerFlex Manager untags the PXE VLANs on the switch node-facing ports. For the vMotion and hypervisor networks,
PowerFlex Manager tags the entered VLANs on the switch node-facing ports. For rack nodes, PowerFlex Manager configures
the VLANs on node-facing ports (untagging PXE VLANs and tagging other VLANs).
If you select Import Configuration from Reference Node, PowerFlex Manager imports basic settings, BIOS settings, and
advanced RAID configurations from the reference node and enables you to edit the configuration. Some BIOS settings might
no longer apply once new BIOS settings are applied. PowerFlex Manager does not correct these setting dependencies. When
setting advanced BIOS settings, use caution and verify that BIOS settings on the hardware are applicable when not choosing
Not Applicable as an option. For example, when you disable the SD card, the settings for internal SD card redundancy no
longer apply.
You can edit any of the settings visible in the template, but keep in mind that many settings are hidden when using this option.
For example, only a small subset of the many configurable BIOS settings is displayed and editable through the template, even
though all BIOS settings can be configured on the node. If you want to edit any of the settings that are not visible through the
template feature, edit them before importing or uploading the file.
Data Center Name Select the data center name from the Data Center Name list.
Cluster Name Select a cluster name from the Cluster Name list.
New Cluster Name Specify a name for the new cluster.
Cluster HA Enabled Enables or disables a highly available cluster. You can either select or clear (default)
the check box.
Cluster DRS Enabled Enables or disables distributed resource scheduler (DRS). You can either select or
clear (default) the check box.
Protection Domain Name Provide a name for the protection domain. You can automatically generate the
name (recommended), or specify a new name explicitly. A protection domain is a
logical entity that contains a group of SDSs that provide backup for each other.
Each SDS belongs to only one protection domain. Each protection domain is a
unique set of SDSs. A protection domain may also contain SDTs and SDRs.
If you automatically generate the protection domain name or specify a new name
explicitly for a hyperconverged or storage-only template, the PowerFlex cluster
must have at least three nodes before you can publish the template and deploy it
as a resource group. However, if you select an existing protection domain that
is associated with another previously deployed hyperconverged or storage-only
resource group, and this protection domain has at least three nodes, PowerFlex
Manager recognizes the new template as valid if the cluster has fewer than three
nodes. You can publish the template successfully and deploy it as a resource group,
since the protection domain it uses has enough nodes.
NOTE: A compute-only resource group only requires a minimum of two nodes,
since it does not have a protection domain.
If you choose Compute Only as the PowerFlex Role for the node component, the
PowerFlex settings do not include the Protection Domain Name field.
New Protection Domain Name Specify a new name for the protection domain.
Protection Domain Name Template If you choose to generate the protection domain name automatically, PowerFlex
Manager fills this field with a default template that combines static text with
variables for several pieces of the autogenerated name. If you modify the template,
be sure to include the ${num} variable to ensure that the name is unique.
For details on the rules for defining a template, see the contextual help that
appears when you hover over the field.
If you choose Compute Only as the PowerFlex Role for the node component, the
PowerFlex settings do not include the Protection Domain Name Template field.
Acceleration Pool Name Provide a name for the acceleration pool. You can automatically generate the name
(recommended) or select from a list of existing acceleration pools.
You can add acceleration pools to a protection domain to accelerate storage pool
performance. An acceleration pool is a group of acceleration devices within a
protection domain.
Acceleration Pool Name Template Define a template for generating the acceleration pool name automatically.
PowerFlex Manager fills this field with a default template that combines static
text with variables for several pieces of the autogenerated name. If you modify the
template, be sure to include the ${num} variable to ensure that the name is unique.
For details on the rules for defining a template, see the contextual help that
appears when you hover over the field.
Storage Pool Name Provide a name for the storage pool. You can automatically generate the name
(recommended), select from a list of existing storage pools for the selected
protection domain, or specify a unique storage pool name. If you choose to
automatically generate a name, you are prompted to define a storage pool name
template.
Storage pools allow the generation of different storage tiers in PowerFlex. A
storage pool is a set of physical storage devices in a protection domain. Each
storage device belongs to one (and only one) storage pool.
The number of storage pools that are created at deployment time depends on the
number of disks available.
Granularity Set the granularity for compression by selecting Fine or Medium. The granularity
setting applies to the storage pool.
Storage Pool Disk Type Allows you to select the disk type - hard drive, SSD, or NVMe.
Storage Pool Name Template Define a template for generating the storage pool name automatically. PowerFlex
Manager fills this field with a default template that combines static text with
variables for several pieces of the autogenerated name. If you modify the template,
be sure to include the ${num} variable to ensure that the name is unique.
For details on the rules for defining a template, see the contextual help that
appears when you hover over the field.
If you choose Compute Only as the PowerFlex Role for the node component, the
PowerFlex settings do not include the Storage Pool Name Template field.
Fault Set Name Specify the fault set name. You can automatically generate the name
(recommended), select from a list of existing fault sets for the selected protection
domain, or specify a unique fault set name. If you choose to automatically generate
a name, you are prompted to define a fault set name template.
Fault Set Name Template If you choose to generate the fault set name automatically, PowerFlex Manager fills
this field with a default template that combines static text with variables for several
pieces of the autogenerated name. If you modify the template, be sure to include
the ${num} variable to ensure that the name is unique.
For details on the rules for defining a template, see the contextual help that
appears when you hover over the field.
Number of Fault Sets Specifies the number of fault sets to create at deployment time. PowerFlex requires
a minimum of three fault sets for a protection domain, with at least two nodes in
each fault set.
For a new protection domain name, you must specify a number between 3 and 16.
Each fault set acts as a single fault unit. If a fault set goes down, all nodes within
the fault set go down as well.
For an existing protection domain name, you must specify a number between 1 and
16. This allows you to add more fault sets to an existing protection domain. If the
selected protection domain already has 3 fault sets, for example, you can specify a
number as low as 1, to include an additional fault set for this protection domain.
PowerFlex Manager ensures that each new deployment has only one MDM role for
each fault set. For example, if you deploy 3 fault sets, one has the primary MDM,
another has the secondary MDM, and the third has the tiebreaker. You can use
the Reconfigure MDM Roles wizard to change the MDM role assignments after
deployment.
Machine Group Name Allows you to provide a name for the machine group. The Machine Group Name is
only available if you automatically generate the protection domain name or specify a
new protection domain name explicitly. You can automatically generate the machine group
name or specify a new machine group name explicitly.
New Machine Group Name Specify a new name for the machine group.
Number of Protection Domains Determines the number of protection domains that are used for PowerFlex file
configuration data. Control volumes will be created automatically for every node
in the PowerFlex file cluster, and spread across the number of protection domains
specified for improved cluster resiliency. To add data volumes, you need to use the
tools provided on the File tab.
You can have between one and four protection domains.
Protection Domain <n> Includes a separate section for each protection domain used in the template.
Storage Pool <n> Includes a separate section for each storage pool used in the template.
Related information
Add cluster component settings to a template
VM settings
The table describes VM settings for the CloudLink Center and the PowerFlex gateway.
Network Settings Full network automation:
● Static bonding or LACP bonding NIC port design
● 10 GB, 25 GB, or 100 GB
● Required PXE network
● Network Automation Type: Full
Partial network automation:
● LACP bonding NIC port design
● 25 GB
● No PXE network (using iDRAC virtual media)
● Network Automation Type: Partial
Related information
Adding an existing resource group
Build and publish a template
Resources
This section provides reference information for the Resources page.
Related information
Deploying and provisioning
Managing components
Monitoring system health
Warning Indicates that the resource is in a state that requires corrective action, but does not
affect overall system health. For example, the firmware running on the resource is not
at the required level or not compliant.
For supported switches, a warning health status might be because the SNMP
community string is invalid or not set correctly. In this case, PowerFlex Manager is
unable to perform health monitoring for the switch.
NOTE: If the resource group has VMware ESXi and is set to maintenance mode, a
message is displayed in the View Details pane.
Critical Indicates that an issue requiring immediate attention exists in one of the following
hardware or software components in the device:
● Battery
● CPU
● Fans
● Power supply
● Storage devices
● Licensing
For supported switches, a critical health status might indicate that the power supply is
not working properly or a CPU has overheated.
NOTE: If the resource group is in a power off state, a message displays in the
View Details pane.
Related information
Discover a resource
Removing resources
Exporting a compliance report for all resources
PowerFlex Manager keeps track of which resources it is managing. These operational states display on the Resources page, in
the Managed State column of the All Resources tab.
Managed Indicates that PowerFlex Manager manages the firmware on that node, and the node
can be used for deployments.
Reserved Indicates that PowerFlex Manager only manages the firmware on that particular
node, but that node cannot be used for deployments. You can assign a host to the
reserved state only if the host has been discovered, but is not part of a resource
group.
Compliance status
PowerFlex Manager assigns one of the following firmware statuses to the resources:
Update Required The firmware running on the resource is less than the
minimum firmware version recommended in the default
compliance version. Indicates that a firmware update is
required.
Related information
Discover a resource
Removing resources
Exporting a compliance report for all resources
Node discovery
PowerFlex Manager supports PowerFlex node discovery and allows you to onboard nodes by configuring the initial management
IP address and iDRAC credentials. To perform initial discovery and configuration, verify that the management IP address is set
on the node and that PowerFlex Manager can access the IP address through the network. While configuring IP addresses on the
node, verify that PowerFlex Manager can access any final IP address in a range used for hardware management, to complete
discovery of these nodes.
PowerFlex Manager also allows you to use name-based searches to discover a range of nodes that were assigned IP addresses
by DHCP to iDRAC. You can search for a range of DNS hostnames or a single hostname within the Discovery Wizard. After
you perform a name-based discovery, PowerFlex Manager operations to the iDRAC continue to use name-based IP resolution,
since DHCP may assign alternate addresses.
Switch discovery
If you attempt to discover a Cisco switch with terminal color configured, the discovery fails. To discover the switch successfully,
disable the terminal color option by running configure no terminal color persist.
VM manager discovery
A VMware vCenter is discovered as a VM manager. PowerFlex Manager users with the administrator role can discover a
vCenter in PowerFlex Manager. A vCenter read-only user can discover a vCenter in PowerFlex Manager only after the following
requirements are met:
● The vCenter user who is specified in the vCenter credential is granted the
VirtualMachine.Provisioning.ReadCustSpecs and StorageProfile.View privileges.
● The permission containing these privileges is granted to that user on the root vCenter object and the Propagate to
children property is set to True.
PowerFlex Manager allows you to deploy new hyperconverged and compute-only resource groups and add existing resource
groups to a vCenter that has an Enhanced Linked Mode (ELM) configuration. ELM connects multiple vCenter servers,
allowing you to search across servers and perform other vCenter management functions. ELM does not provide clustering
or redundancy.
If you are working with a vCenter that has an enhanced linked mode configuration, you must discover only the vCenter to which
you want to deploy or add an existing resource group. You do not need to discover the other vCenters that are connected
through the enhanced linked mode configuration.
Configuration checks
When you initiate a resource group action, PowerFlex Manager performs critical configuration checks to ensure that the system
is healthy before proceeding. A configuration check failure may result in the resource group action being blocked. You can
export a PDF report at any time that shows detailed system health and configuration data. When you generate the report,
PowerFlex Manager initiates the full set of configuration checks and reports the results.
Related information
Exporting a configuration report for all resources and resource groups
Settings
This section provides reference information for the Settings page.
Backup details
PowerFlex Manager backup files include the following information:
● Activity logs
● Credentials
● Deployments
● Resource inventory and status
● Events
● Initial setup
● IP addresses
● Jobs
● Licensing
● Networks
● Templates
● Users and roles
● Resource module configuration files
● Performance metrics
Network types
You can manage various network types in PowerFlex Manager.
● General Purpose LAN—Used to access network resources for basic networking activities.
● Hypervisor Management—Used to identify the management network for a hypervisor or operating system that is deployed
on a node.
● Hypervisor Migration—Used to manage the network that you want to use for live migration. Live migration enables you to
move running virtual machines from one node of the failover cluster to a different node in the same cluster.
● OS Installation—Allows static or DHCP network for operating system imaging on nodes.
● Hardware Management—Used for out-of-band management of hardware infrastructure.
● PowerFlex Data—Used for data traffic between storage data servers (SDS) and storage data clients (SDC).
● PowerFlex Data (Client Traffic Only)—Used for storage data client traffic only.
● PowerFlex Data (Server Traffic Only)—Used for storage data server traffic only.
● PowerFlex Replication—Used to support PowerFlex replication.
● NAS File Management—Used to support PowerFlex file management traffic.
Maintenance activities
This section includes procedures for performing general maintenance, such as shutting down and restarting nodes.
1. When shutting down/rebooting a node that is a primary MDM (manager), it is recommended that you manually switch MDM
ownership to a different node:
a. From the PowerFlex CLI (SCLI), run:
scli --query_cluster
b. If the node's IP addresses are included in the --query_cluster output, the faulty node has a role of either MDM or
tiebreaker, in addition to its SDS role.
If the node's IP address is located in the primary MDM role, a switch-over action is required.
c. Switch MDM ownership to a different node:
The node remains in the cluster. The cluster will be in degraded mode while the node is powered off, until the faulty
component is fixed or the patch operation completes and the node is powered back on.
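A hedged SCLI illustration, using the secondary MDM ID from the sample output below (the --switch_mdm_ownership command is the SCLI operation for this step, but verify the exact flag names for your release, since older releases use "master" where newer documentation says "primary"):
scli --switch_mdm_ownership --new_master_mdm_id 0x5b2e9f273b7af9b0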
d. Verify that the cluster status shows that the node is not the primary MDM anymore:
scli --query_cluster
Output similar to the following should appear, with the relevant node configuration and IP addresses for your deployment:
Cluster:
Mode: 5_node, State: Normal, Active: 5/5, Replicas: 3/3
Virtual IP Addresses: 9.20.10.100, 9.20.110.100
Primary MDM:
ID: 0x775afb2a65ef1f02
IP Addresses: 9.20.10.104, 9.20.110.104, Management IP Addresses:
10.136.215.239, Port: 9011, Virtual IP interfaces: sio_d_1, sio_d_2
Version: 2.0.13000
Secondary MDMs:
ID: 0x5b2e9f273b7af9b0
The following are example commands used during the run-script-on-host process:
1. Obtain an access token from the PowerFlex Manager instance. The easiest method is to create a shell script that can be
sourced to add the proper variables to the user environment.
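A minimal sketch of such a script, assuming the 4.x REST login endpoint /rest/auth/login and the access_token and refresh_token response fields (verify both against your release); all addresses and credentials are placeholders:
#!/bin/bash
# Hedged sketch: log in to PowerFlex Manager and export tokens for later API
# calls. The endpoint and JSON field names are assumptions; check the API
# guide for your release.
PFXM_IP="<PowerFlex Manager IP>"
RESPONSE=$(curl -sk -X POST "https://${PFXM_IP}/rest/auth/login" \
  -H "Content-Type: application/json" \
  -d '{"username":"<admin user>","password":"<password>"}')
# Extract the two tokens from the JSON response.
export ACCESS_TOKEN=$(printf '%s' "$RESPONSE" | python3 -c 'import sys,json; print(json.load(sys.stdin)["access_token"])')
export REFRESH_TOKEN=$(printf '%s' "$RESPONSE" | python3 -c 'import sys,json; print(json.load(sys.stdin)["refresh_token"])')
Source the script (for example, . ./pfxm-login.sh) so the exported variables persist in your current shell.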
c. Parse out the refresh token, which can be used to get a new JWT token if the access token has expired. The refresh
token is valid for 30 minutes by default.
NOTE: The expiration time for the access token is five minutes. If required, the above file can be sourced again to refresh
all variables.
2. Get the JSON of a system configuration, which will be the payload of the patch command (you must manually replace the
null values of liaPassword and mdmPassword with the actual strings).
a. Create and save a JSON file like the following, replacing MDM addresses, MDM user, and MDM password with the
appropriate values.
{
"mdmIps":["<MDM IP-1>","<MDM-IP2>"],
"mdmUser":"<mdm user>",
"mdmPassword":"<mdm password>”,
"securityConfiguration":
{
"allowNonSecureCommunicationWithMdm":"true",
"allowNonSecureCommunicationWithLia":"true",
"disableNonMgmtComponentsAuth":"false"
}
}
b. Insert the output of this command (with fixed passwords) into the config.json file:
or
{
"phaseStatus": "running",
"phase": "execute",
"numberOfRunningCommands": 1,
"numberOfPendingCommands": 1,
"numberOfCompletedCommands": 35,
or
{
"phaseStatus": "completed",
"phase": "validate",
"numberOfRunningCommands": 0,
"numberOfPendingCommands": 0,
"numberOfCompletedCommands": 2,
"numberOfAbortedCommands": 0,
"numberOfFailedCommands": 0,
"failedCommands": []
}
Look for:
{
"phaseStatus": "completed",
"phase": "execute",
"numberOfRunningCommands": 0,
"numberOfPendingCommands": 0,
"numberOfCompletedCommands": 37,
"numberOfAbortedCommands": 0,
"numberOfFailedCommands": 0,
"failedCommands": []
}
Logs
Gateway pod log locations:
● /usr/local/tomcat/logs/scaleio.log
● /usr/local/tomcat/logs/scaleio-trace.log
LIA log location: /opt/emc/scaleio/lia/logs/trc.x
NOTE: Special switch to keep the script in the node when troubleshooting or testing:
1. Edit file /usr/local/tomcat/webapps/ROOT/WEB-INF/classes/gatewayInternal.properties
2. Find the field "ospatching.delete.scripts=false"
3. Change to true for troubleshooting (Default is false)
where the value of the PowerFlex component is mdm, sds, sdr, sdt, or lia
Windows
"C:\Program Files\EMC\scaleio\sdc\diag\get_info.bat" -f
The get_info syntax is explained fully in Collecting debug information using get_info.
● If the selected node is the primary MDM, use the flags -u <MDMuser> -p <MDMpassword>, instead of -f.
● If the selected node contains more than one PowerFlex component, running any script will gather logs for all components
on that node.
When the log collection process is complete, an archive file (either TGZ or ZIP) containing the logs of all PowerFlex
components in the node, is created in a temporary directory. By default, the directory is /tmp/scaleio-getinfo on
Linux hosts or C:\Windows\Temp\ScaleIO-getinfo on Windows hosts.
3. Verify that output similar to the following is returned, which shows that the process of log collection was completed
successfully:
bundle available at '/tmp/scaleio-getinfo/getInfoDump.tgz'
NOTE: The script can generate numerous lines of output. Therefore, look for this particular line in the output.
4. Retrieve the log file.
get_info.sh [OPTIONS]
Optional parameters:
-a, --all
Collect all data
-A, --analyse-diag-coll
Analyze diagnostic data collector (diag coll) data
-b[COMPONENTS], --collect-cores[=COMPONENTS]
Collect existing core dumps of the space-separated list of user-land components, COMPONENTS
(default: all user-land components)
For example, -b'mdm sds' (no space between option name and COMPONENTS)
For example, --collect-cores='mdm sds' (separate option name and COMPONENTS with a
single equal sign, "=")
-d OUT_DIR, --output-dir=OUT_DIR
Store collected bundle under directory OUT_DIR (default: <WORK_DIR>/scaleio-getinfo, see --work-
dir for the value of <WORK_DIR>)
-f, --skip-mdm-login
Skip query of PowerFlex login credentials
-k NUM, --max-cores=NUM
Collect up to NUM core files from each component (default: all core files)
Implies --collect-cores
-l, --light
Generate light bundle (not recommended)
--ldap-authentication
Log in to PowerFlex using LDAP-based authentication
-m NUM, --max-traces=NUM
Collect up to NUM PowerFlex trace files from each component (default: all files)
--management-system-ip=ADDRESS
Connect to SSO or management at ADDRESS for PowerFlex login (default: scli default)
--mdm-port=PORT
Connect to MDM using PORT for SCLI commands (default: scli default)
-n, --use-nonsecure-communication
Connect to the MDM in non-secure mode
-N, --skip-space-check
Skip free space verification
--overwrite-output-file
Overwrite output file if it already exists
-p PASSWORD, --password=PASSWORD
Use PASSWORD for PowerFlex login (default: scli default)
--p12-password=PASSWORD
Encrypt PowerFlex login PKCS#12 file using PASSWORD (default: scli default)
--p12-path=FILE
Store PowerFlex login PKCS#12 file as FILE (default: scli default)
-q, --quiet, --silent
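For example, a hedged invocation built from the options listed above (the output directory is illustrative; on the primary MDM, substitute the -u and -p credentials for -f as noted earlier):
./get_info.sh -f --output-dir=/tmp/mylogs
./get_info.sh -u <MDMuser> -p <MDMpassword> --output-dir=/tmp/mylogs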
2. On one of the cluster member nodes, execute the data collection utility as a user with superuser permissions, such as user
root.
In the following example, the utility's executable exists under /root.
# /root/pfmp_support
estimating required space
cleaning up temporary directories
collecting kubernetes data
collecting shared kubernetes data
collecting server data
collecting general hardware data
collecting network data
collecting storage data
preparing files for collection
generating bundle
cleaning up temporary directories
bundle available at '/tmp/powerflex-pfmpsupport/pfmpSupport.tgz'
# /root/pfmp_support --skip-kubernetes-shared
estimating required space
cleaning up temporary directories
collecting kubernetes data
collecting server data
collecting general hardware data
collecting network data
collecting storage data
preparing files for collection
generating bundle
cleaning up temporary directories
bundle available at '/tmp/powerflex-pfmpsupport/pfmpSupport.tgz'
4. From all cluster member nodes, provide the resulting support bundle files to Dell Technologies Support.
By default, bundle files are named /tmp/powerflex-pfmpsupport/pfmpSupport.tgz.
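For example, a hedged way to copy the bundle off a cluster member node from your workstation (the hostname and destination file name are illustrative):
scp root@node1:/tmp/powerflex-pfmpsupport/pfmpSupport.tgz ./node1-pfmpSupport.tgz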
# /tmp/PFMP2-4.0.0-161/PFMP_Installer/scripts/pfmp_support
estimating required space
cleaning up temporary directories
collecting kubernetes data
collecting shared kubernetes data
collecting server data
collecting general hardware data
collecting network data
collecting storage data
preparing files for collection
generating bundle
cleaning up temporary directories
bundle available at '/tmp/powerflex-pfmpsupport/pfmpSupport.tgz'
For example:
CIFS: \\192.168.1.1\uploadDirectory
4. Click Test Connection to verify the connection to the CIFS share before generating the bundle.
5. Optionally, select Include PowerFlex File Core Dump logs, if you want to include core dump logs for NAS.
The NAS directory structure, nodes, and files are always collected regardless of whether this box is checked. When it is
checked, the additional NAS core dump is collected.
6. To collect PowerFlex logs, select one of the following log level options:
● Default Node Logs
● Default Node Logs plus additional MDM information
● Latest Logs only (Most recent copy of all logs)
7. To collect PowerFlex node logs, select one of the following options:
● Logs from all nodes
● Select Specific Nodes
If you select the Select Specific Nodes option and select the number of nodes for which you want to generate the log, the
View/Select Nodes button is displayed. Click the button to view the list of nodes in the Node Selection window. Select
the required nodes from the Available Nodes list and click >> to view them in the Selected Nodes list. Click Save to
return to the Generate Troubleshooting bundle page.
For the Select Specific Nodes option, Generate is enabled only if a node is selected in the Node Selection window.
8. Click Generate.
Sometimes, the troubleshooting bundle does not include log information for all the nodes. The log collection may appear to
succeed, but the log for one or more of the nodes may be missing. You may see an error message in the scaleio.trace.log file
that says Could not run get_info script. If you see this message, you may need to generate the troubleshooting
bundle again to include information for all the logs.
Save the output of the request. This output is a JSON representation of the system configuration.
3. Add the M&O login information to the JSON. The get info script requires the login information to perform certain queries, to
provide this information, add the following key value pairs to JSON:
mnoUser: "<mno_username>"
mnoPassword: "<mno_password>"
mnoIp: "<mno_ip>"
For example:
{
"mnoUser": "<mno_username>",
"mnoPassword": "<mno_password>",
"mnoIp": "<mno_ip>",
"snmpIp": null,
... (rest of the JSON)
}
5. To monitor the log collection process, run the following API command:
GET https://<mno_ip>/im/types/ProcessPhase/actions/queryPhaseState
Monitor the phaseStatus value and wait until the operation is complete.
For example:
Output example:
{"phaseStatus":"completed","phase":"query","numberOfRunningCommands":0,"numberOfPendin
gCommands":0,"numberOfCompletedCommands":6,"numberOfAbortedCommands":0,"numberOfFailed
Commands":0,"failedCommand
In this example, the logs are downloaded to the current working directory under the name "get_info.zip".
7. (Optional) Add the following optional attributes to the log collection request as query parameters:
● targetIPs—Filter the result to only part of the nodes (by their IPs)
● copyRepositories—Copy MDM repositories (true/false, default is false)
● liteVersion—Collect only the trc.0, exp.0, and umt.0 files, plus repository files (true/false, default is false)
● copyBinaries—Collect MDM, SDS, SDR, SDT, LIA binaries and core dumps (true/false, default is false)
● collectSdbgScreens (true/false, default is false)
For example:
8. To clear the operation and move to idle, run the following API commands:
POST https://<mno_ip>/im/types/Command/instances/actions/clear
POST https://<mno_ip>/im/types/ProcessPhase/actions/moveToIdlePhase
After the log collection operation is complete, mark the operation as completed to allow other jobs to run.
For example:
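A hedged illustration of both calls with curl (authentication is omitted; add the credentials your gateway requires, and note that moveToIdlePhase is assumed to be a POST like the clear call):
curl -sk -X POST "https://<mno_ip>/im/types/Command/instances/actions/clear"
curl -sk -X POST "https://<mno_ip>/im/types/ProcessPhase/actions/moveToIdlePhase"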
Example:
Linux
/opt/MegaRAID/perccli/perccli64 /call show all > <file_name>
Example:
Linux
/opt/MegaRAID/perccli/perccli64 /call show events file=<file_name>
You can retrieve the RAIDevents.txt file from your local drive.
4. Retrieve the Termlog log file:
Example:
Linux
/opt/MegaRAID/perccli/perccli64 /call show termlog file=<file_name>
You can retrieve the RAIDtermlog.txt file from your local drive.
installer-30:/opt/dell/pfmp/PFMP_Installer/scripts # ./pfmp_support
estimating required space
cleaning up temporary directories
collecting kubernetes data
collecting shared kubernetes data
collecting server data
collecting general hardware data
collecting network data
collecting storage data
preparing files for collection
generating bundle
cleaning up temporary directories
bundle available at '/tmp/powerflex-pfmpsupport/pfmpSupport.tgz'
g. Select the destination that is created in the previous step and click Submit.
or
● NVMe-based SSDs:
a. Select the devices for encryption.
For example, /dev/nvmeXn1, where X can range from 0 through 24.
svm status
3. Run the following command so that CloudLink will control the SED device:
For example:
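One hedged illustration (the svm manage subcommand and the device name are assumptions; verify the exact CloudLink agent syntax and any required flags in the CloudLink documentation):
svm manage /dev/sdf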
4. Run svm status again to verify that the device is now managed:
The output should be similar to the following:
Volumes:
/ unencrypted
swap unencrypted
Devices:
/dev/sdf managed (sed SZ: 1788G MOD: KPM6WRUG1T92 SPT: Yes )
/dev/sdd managed (sed SZ: 1788G MOD: KPM6WRUG1T92 SPT: Yes )
/dev/sdb managed (sed SZ: 3577G MOD: KPM6WVUG3T84 SPT: Yes )
/dev/sde managed (sed SZ: 1788G MOD: KPM6WRUG1T92 SPT: Yes )
/dev/sdc managed (sed SZ: 3577G MOD: KPM6WVUG3T84 SPT: Yes )
/dev/sdb unencrypted (sds SN:94917674 )
/dev/sdc unencrypted (sds SN:94917675 )
/dev/sdd unencrypted (sds SN:94917676 )
/dev/sde unencrypted (sds SN:94917677 )
/dev/sdf unencrypted (sds SN:94917678 )
/dev/sdg encrypted (sds SN:94917679 /dev/mapper/svm_sdg)
/dev/sdh encrypted (sds SN:94917680 /dev/mapper/svm_sdh)
/dev/sdi encrypted (sds SN:94917681 /dev/mapper/svm_sdi)
/dev/sdj encrypted (sds SN:94917682 /dev/mapper/svm_sdj)
/dev/sdk encrypted (sds SN:94917683 /dev/mapper/svm_sdk)
NOTE: The status of the SED devices is displayed in the output as managed, but unencrypted.
or
● PowerFlex Manager:
For more information, see Remove devices.
2. Remove the encryption through CloudLink Center GUI or using the svm erase command:
NOTE: Removing (erasing) a device from CloudLink destroys all data on the device.
For example:
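A hedged illustration using the svm erase command named above (the device name is illustrative; as noted, erasing destroys all data on the device):
svm erase /dev/sdf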
or
Persistent discovery
The persistent discovery controller ensures that the host remains connected to the discovery service after discovery. If at
any point there is a change in the discovery information that is provided to the host, the discovery controller returns an
asynchronous event notification (AEN) and the host requests the updated Discovery log page.
Here are some examples of changes in discovery information:
● A new volume is mapped to the host from a new protection domain.
● A new storage data target (SDT) is added to the system.
● Load balancing wants to move the host connection from one storage data target to another.
When configuring NVMe hosts, ensure that every host is connected for discovery at most once per subnet (data IP address
subnet). To use this functionality, ensure that the host operating system supports the Persistent Discovery Controller, and
that the Persistent Discovery flag is set in the discovery request. (See the respective operating system documentation for the
NVMe over TCP host configuration.)
Object Description
Host subnet Networks or subnets used to connect hosts to storage, which
can be either layer 2 or layer 3 (routed). You might have
to define the data networks before starting the deployment
itself. A maximum of 4 data networks is supported in Dell
PowerFlex appliance and Dell PowerFlex rack, and 8 in the
software-only offering.
System data network System-wide object that applies to all protection domains.
Once a resource group is deployed, configure system data
networks/subnets to be used to connect host initiators
to the storage data target. Maximum allowed system data
networks are 8. Configure two or four system data networks
for PowerFlex rack and PowerFlex appliance. The number
that you need depends on the number of PowerFlex data
(SDS-SDS and SDS-SDT communication) networks that are
configured for the deployment.
If you have not defined the system data network, by default, a host can reach all system data networks. Ignoring the data
networks/subnets may result in unequal load between the host initiator ports and nonoptimized I/O performance with the
Attributes Description
NAS server Identifies the associated NAS server.
Enabled Identifies whether Events Publishing is enabled on the NAS Server. Valid values
are:
● yes
● no (default)
Pre-event failure policy The policy applied when a pre-event notification fails. Valid values are:
● ignore (default)—Indicates that when a pre-event notification fails, it is
acknowledged as being successful.
● deny—Indicates that when a pre-event notification fails, the request of the
SMB or NFS client is not performed by the storage system. The client receives
a 'denied' response.
Post-event failure policy The policy applied when a post-event notification fails. The policy is also applied to
post-error events. Valid values are:
● ignore (default)—Continue and tolerate lost events.
● accumulate—Continue and use a persistence file as a circular event buffer for
lost events.
● guarantee—Continue and use a persistence file as a circular event buffer for
lost events until the buffer is filled, and then deny access to file systems where
Events Publishing is enabled.
● deny—On CEPA connectivity failure, deny access to file systems where
Events Publishing is enabled.
HTTP port The HTTP port number used for connectivity to the CEPA server. The default
value is 12228. The HTTP protocol is used to connect to CEPA servers. It is not
protected by a username or password.
HTTP enabled Identifies whether connecting to CEPA servers by using the HTTP protocol is
enabled. When enabled, a connection by using HTTP is tried first. If HTTP is either
disabled or the connection fails, then connection through the MS-RPC protocol
is tried if all CEPA servers are defined by a fully qualified domain name (FQDN).
When an SMB server is defined in a NAS server in the Active Directory (AD)
domain, the NAS server's SMB account is used to make an MS-RPC connection.
Valid values are:
● yes (default)
● no
Username When using the MS-RPC protocol, you must provide the name of a Windows user
allowed to connect to CEPA servers.
Password When using the MS-RPC protocol, you must provide the password of the
Windows user that is defined by the username.
Attributes Description
Pre-events Lists the selected pre-events. The NAS server sends a request event notification to the CEPA
server before an event occurs and processes the response. The valid events are defined in the
table that follows.
Post-events Lists the selected post-events. The NAS server sends a notification after an event occurs. The
valid events are defined in the table that follows.
Post-error events Lists the selected post-error events. The NAS server sends notification after an event generates
an error. The valid events are defined in the table that follows.
Limits
● One event pool can have a maximum of five CEPA server addresses.
● One event publisher can have a maximum of three event pools.
● One NAS server can have only one event publisher associated with it.
Dell Technologies recommends using VMware Storage vMotion to migrate data from a volume that is presented by SDC to one
presented by NVMe/TCP.
This migration is intended for VMDK (noncluster) customers who want to convert from SDC to NVMe/TCP.
Linux environments, ESXi clusters, and RDMs are not included in this section.
Requirements
See the following VMware product documentation links for information about Storage vMotion
requirements and limitations: https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vcenterhost.doc/
GUID-A16BA123-403C-4D13-A581-DC4062E11165.html
See the following VMware product documentation links for information about requirements and
limitations of VMware NVMe Storage: https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.storage.doc/
GUID-9AEE5F4D-0CB8-4355-BF89-BB61C5F30C70.html
https://docs.vmware.com/en/VMware-vSphere/8.0/vsphere-storage/GUID-9AEE5F4D-0CB8-4355-BF89-
BB61C5F30C70.html
See the following VMware KB article for more details about Storage migration (Storage vMotion), with the virtual machine
powered on: https://kb.vmware.com/s/article/1005241
Ensure that you satisfy these requirements before you begin the migration:
● Ensure the ESXi version is 7.0u3 or later.
● Ensure the PowerFlex version is 4.5.x.
Workflow
The following steps summarize the workflow that you need to follow to migrate from SDC to NVMe/TCP using Storage vMotion:
1. Create a new volume of equal or greater size than the current VMFS datastore, and map it via NVMe/TCP to the same host.
2. Scan for the newly mapped volume.
3. Create a new datastore on the NVMe/TCP volume.
4. Perform the standard data migration using Storage vMotion, which is a non-disruptive process.
6. Click COPY and place the host NQN in the copy buffer.
7. Click CANCEL.
8. Repeat the steps for all the hosts.
Create a volume
Use this procedure to create a volume.
1. From PowerFlex Manager, click Block > Volumes.
2. Click +Create Volume.
3. Enter the number of volumes and the name of the volumes.
4. Select Thick or Thin. Thin is the default.
5. Enter the required volume size in GB, specifying the size in 8 GB increments.
6. Select the NVMe storage pool and click Create.
In the first example above, 6x is the first NVMe over TCP software adapter and 6y is the second NVMe over TCP software
adapter.
In the second example above, 192.168.x.x is the first data IP address and 192.168.x.y is the second data IP address, depending
on which VMNIC is enabled for which NVMe over TCP software adapter.
3. Connect to the PowerFlex system by appending "-c" to the discovery query command.
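A hedged ESXi illustration (the adapter name, IP address, and the standard NVMe discovery port 8009 are placeholders; verify the flags with esxcli nvme fabrics discover --help on your host):
esxcli nvme fabrics discover -a vmhba65 -i 192.168.x.x -p 8009
esxcli nvme fabrics discover -a vmhba65 -i 192.168.x.x -p 8009 -c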
Option Description
Scan for New Storage Devices Rescan all adapters to discover new storage devices. If new devices are discovered, they
appear in the device list.
Scan for New VMFS Volumes Rescan all storage devices to discover new datastores that have been added since the last
scan. Any new datastores appear in the datastore list.