PowerFlex Rack Administration - Instructor Guide
PowerFlex rack
Watch the PowerFlex overview video below and learn how the PowerFlex rack
solution delivers agility for the modern data center.
Movie: The web version of this content contains a movie.
The PowerFlex family includes VxFlex Ready Node, PowerFlex appliance, and PowerFlex rack. The fundamental building block is PowerFlex, a software-defined, scale-out block storage service that enables customers to create a scale-out Server SAN or hyperconverged (HCI) infrastructure.
The Dell EMC HCI portfolio offers a wide range of solutions to meet customer needs. These HCI solutions are preengineered and tested to provide a turnkey experience. To learn more about the Dell EMC HCI portfolio offerings, see https://www.dellemc.com/hi-in/converged-infrastructure/hyper-converged-infrastructure.htm
Expansion options: add drives and memory, add per node, or add expansion nodes/rack.
3: Enterprise-class performance:
PowerFlex delivers massive performance with an architecture that pools large sets of resources and eliminates bottlenecks in the system, delivering millions of IOPS at sub-millisecond latency. Dell Technologies lab testing demonstrates that PowerFlex delivers leading-edge performance for enterprise applications, including high-performance databases, big-data analytics, and AI or ML workloads. PowerFlex also delivers mission-critical six 9's (99.9999%) availability for your high-value workloads with multiple protection groups and fast rebuilds.
Here are some key PowerFlex rack use cases for different scenarios.
Explore each rack component and the links within the description to understand its functionality.
1:
PowerFlex nodes use Dell PowerEdge servers. These nodes are clustered
together to provide the computing power for production virtual machines and bare
metal workloads.
Click here to learn more about PowerFlex node types.
2:
The nodes communicate with each other using a pair of access switches. In a multi-rack configuration, each rack requires a pair of access switches.
Click here to learn more about the access switch function and configurations that are used in PowerFlex rack.
PowerFlex Node
All node types are available in various configurations that are based on the PowerEdge R640, R740xd, and R840. Refer to the Dell EMC website for detailed specifications.
Access Switch
The Dell EMC S5248F-ON provides four 100 GbE QSFP28 ports and two 100 GbE QSFP-DD (quad small form-factor pluggable double density) ports for uplink and inter-switch connectivity. However, the PowerFlex rack uses only the 25 GbE switch ports for node connectivity.
Another option for the access switch is Cisco Nexus 93240YC-FX2. It is a 1.2-RU
switch designed for spine-leaf deployment in data centers. This switch has 48 x
1/10/25 Gbps SFP28 ports and 12 x 40/100 Gbps QSFP28 ports.
Note: Older models of PowerFlex rack use a pair of 1-RU Cisco Nexus 3132Q-X or 2-RU Nexus 3164Q access switches. These switches provide 32 and 64 x 40 Gbps QSFP+ ports, respectively. They use 4 x 10 Gbps breakout cables to connect to the PowerFlex nodes and Controller nodes.
Aggregation Switches
As the number of nodes and cabinets grows, more access switches can be connected to the aggregation switches.
There are two types of management controller nodes: small and large. The small controller node comes with a single socket, 20 cores, 192 GB RAM, and 9.6 TB storage capacity. The large controller node comes with dual sockets, 40 cores, 384 GB RAM, and 15.36 TB storage capacity.
The drive count for the small controller node is 5 x 1.92 TB SSDs and for the large node is 8 x 1.92 TB SSDs.
The small controller node runs all management and orchestration software needed for a deployment of up to 200 nodes. A management control plane with large controller nodes is recommended for deployments of more than 200 nodes in the cluster, or when additional management components such as VMware vROps need to be added to the cluster.
Similar to the PowerFlex nodes, the controller node drives are used as storage for the virtual machines running on them. However, this storage pool is separate from the production storage and uses the VMware vSAN software-defined storage solution. VMware vSAN provides a high-availability cluster for the management controllers.
Management Switch
The PowerFlex rack uses Dell EMC PowerSwitch S4148T-ON or Cisco Nexus
31108TC-V management switch.
Each node and switch has a copper management network interface that connects to the management switch. The management switch is a one rack unit (1U) switch that provides 48 x 1/10 Gbps RJ45 ports and 6 x 40/100 Gbps QSFP+ ports.
Note: Older systems use Cisco Nexus 3172TQ as the management switch.
Spine-Leaf Architecture
For greater scalability, multipath redundancy, and high throughput, PowerFlex rack clusters are configured with a spine-leaf architecture. In this architecture, each access switch is replaced by a leaf switch (Cisco Nexus 93240YC-FX2), which is 1.2 RU in size and has 12 more SFP ports than the access switches. The spine switches (Cisco Nexus 9336C-FX2) replace the aggregation switches in the network. The leaf switches all connect to the spine switches in a full-mesh topology.
The spine switches then connect to two to four border leaf switches (Cisco Nexus
9336C-FX2). Uplink to the customer core network is provided through border leaf
switches.
With the spine-leaf architecture, Layer 3 gateways are available at the leaf switches. This distributed gateway enables seamless VM migration between the racks.
PowerFlex supports three or six spine switches. With six spine switches, the maximum number of nodes that are allowed is 384 (three Controller nodes + 381 FLEX nodes). With six spine switches, there is no oversubscription on the switches and no performance degradation, even when every node is using its network at full capacity.
• Maximum nodes allowed: 16 x 24 = 384 nodes (three Controller + 381 FLEX nodes)
The access switches connect to the PowerFlex nodes with 25 Gbps SFP28 cables. In two-layer deployments, each storage-only node has four connections to each access switch. One connection is dedicated to the back-end storage traffic.
Multiple VLANs are used to separate the different types of traffic on the access switches. VLANs keep the production, vSphere, vMotion, PowerFlex, and vSAN traffic segregated. Dell EMC PowerEdge R640/R740xd/R840 servers are used as the PowerFlex nodes.
The access switches also connect to each Controller node with 10 Gbps connections. Each switch has two connections to each Controller node. One connection is for management traffic and the other is for storage services such as vSAN and vMotion. As with the PowerFlex nodes, each switch connects to two different NICs on the Controller node.
A pair of Cisco Nexus 93180YC-EX or Dell EMC Networking S5248F-ON are used
as the access switches in the rack.
Dell EMC PowerEdge R640 servers are used as the controller nodes.
In addition to the iDRAC connection, each controller node has a second 1 Gbps
connection into the management switch. These secondary connections allow the
jump server and PowerFlex Manager (management interface) to have access to
the out-of-band network to perform management of the components.
The management switch also connects to the customer network. This allows the
customer to have out-of-band network access to the components of the PowerFlex
rack.
Interrack Connectivity
Multiple racks are connected using the aggregation switches. Even if a PowerFlex
rack cluster has only one rack, it often will have a pair of aggregation switches to
allow for easier expansion deployments. A pair of aggregation switches connect to
pairs of access switches in each rack.
Each access switch has a 40 Gbps fiber optic connection to each aggregation
switch. There are also two peer connections between the aggregation switches.
The aggregation switches also connect into the customer network. This uplink
carries all the production traffic to and from the PowerFlex rack.
A pair of Dell EMC Networking S5232F-ON or Cisco Nexus 9336C-FX2 are used
as the aggregation switches.
PowerFlex rack configurations are categorized into single and multiple rack
configurations.
Movie: The web version of this content contains a movie.
PowerFlex uses existing local storage devices (Direct Attached Storage) and turns
them into shared block storage. The shared block storage is available as software-
defined storage to the applications.
The Storage Data Server (SDS) is a software daemon that enables a server in the cluster to contribute its local storage to the aggregated storage pool. It owns the contributing devices and, together with the other SDSs, forms a protected mesh from which storage pools are created.
An instance of the SDS runs on every server that contributes some or all of its local storage space (SSDs or NVMe devices). The SDS manages the capacity of a single server. The SDS performs the back-end I/O operations requested by SDCs, and the rebuild and rebalance operations directed by the MDM.
The Storage Data Client (SDC) is a lightweight block device driver that exposes PowerFlex shared block volumes to the operating system. The SDC runs on the same server as the application. The SDC enables the application to issue an I/O request and fulfills that request regardless of where the particular blocks physically reside.
The SDC communicates with other nodes (beyond its own local server) over a TCP/IP-based protocol. The only I/O in the stack that the SDC intercepts is the I/O directed at volumes that belong to PowerFlex.
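On a Linux host, a quick way to confirm what the SDC exposes is to query the SDC kernel driver. This is an illustrative sketch; the installation path shown is the typical default and may differ in your environment.
# List the PowerFlex volumes that the SDC driver currently exposes to this host
/opt/emc/scaleio/sdc/bin/drv_cfg --query_vols
# List the MDMs that this SDC is configured to communicate with
/opt/emc/scaleio/sdc/bin/drv_cfg --query_mdms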
The MDM configures and monitors the PowerFlex system. It contains all the
metadata that is required for system operation. Although the MDM is responsible
for data migration, rebuilds, and all system-related functions, the user data never
passes through MDM.
The MDM hands out instructions to each SDC and SDS about its role and how to
play it. The MDM gives each component all the information it needs but nothing
more.
The number of MDM entities can grow with the number of nodes. Three or five
instances of the MDM run on different servers to support high availability.
MDM Cluster
The MDM cluster consists of a combination of Master MDM, Slave MDMs, and Tie-
breaker MDMs. There is also the Standby MDM which is not a part of the cluster.
Note: For a system with five or more PowerFlex nodes, a 5-node MDM cluster is the default for PowerFlex rack.
MDM
MDM is a daemon service that runs on any PowerFlex node. An MDM is assigned a Master, Slave, or Tie-breaker role during installation. At minimum, three MDMs (1 Master, 1 Slave, and 1 Tie-breaker) are needed to form an MDM cluster. Each MDM has a unique MDM ID and can be given a unique name. Before an MDM can be part of the cluster, it must first be promoted to a Standby MDM. A Standby MDM must be manually activated to become part of an active cluster when a Master or Slave MDM goes down. There is always an odd number of MDMs in a cluster, such as 3 or 5.
Master MDM
The Master MDM functions like a brain and controls all the SDCs, SDSs, and SDRs in the cluster. In an MDM cluster, only one MDM can be the Master at a given time. The Master MDM contains and updates the MDM repository, the database that stores the SDS configurations and how data is distributed between the SDSs.
Slave MDM
The Slave MDM is an MDM in the cluster that is ready to take over as the Master MDM, which allows the cluster to tolerate a single point of failure. In a 3-node cluster, there is one Slave MDM. If you have five nodes or more, the default is a 5-node MDM cluster.
Tie-breaker MDM
A Tie-breaker MDM is the MDM that determines which MDM becomes the Master
MDM. It helps in maintaining quorum in the cluster. In a 5-node MDM cluster, two
MDMs will have a tie-breaker role.
Standby MDM
Once an MDM is added as a Standby MDM, it is locked to that specific system. When promoted to a cluster member, it can become either a Slave or a Tie-breaker MDM.
[Diagram: 5-node MDM cluster - one Master, two Slaves, and two Tie-breakers, with up to 8 additional standby MDMs.]
Manager MDM
Metadata managers (MDMs) control the behavior of the PowerFlex system. They
determine and publish the mapping between clients and their data, keep track of
the state of the system, and issue reconstruct directives to SDS components. A
Manager MDM acts as a Master or Slave in the cluster. Manager MDMs have a
unique system ID and can be given unique names. A manager can be a standby or
a member of the cluster.
Nodes
PowerEdge R640, PowerEdge R740xd, and PowerEdge R840
PowerFlex Nodes are the basic hardware units, which are used to install and run
the hypervisor and the PowerFlex system.
Dell EMC VxFlex Ready Nodes, PowerFlex appliance, and PowerFlex rack bring together Dell EMC PowerEdge servers and Dell EMC PowerFlex software in a reliable, quick, and easy-to-deploy building block. These building blocks are ideal for server SAN, heterogeneous virtualized environments, and high-performance databases.
• Hyperconverged (HC) Node - When both the PowerFlex SDS and SDC run in the same PowerFlex node chassis, it is called an HC node.
• Compute Only (CO) Node - When only the PowerFlex SDC runs in a PowerFlex node chassis, it is called a CO node. CO nodes are compute-heavy and have little to no storage capacity.
• Storage Only (SO) Node - An SO PowerFlex node runs only the SDS and is storage-heavy with small computing power (CPU and memory).
The software components that make up PowerFlex (the SDCs, SDSs, and MDMs) communicate with each other in predictable ways. When designing a PowerFlex deployment, you should be aware of these traffic patterns in order to make choices about the network layout.
Note: The front-end and back-end storage traffic is a logical distinction and does
not require physically distinct networks.
Traffic between the SDCs and the SDSs forms the bulk of front-end storage traffic.
Front-end storage traffic includes all read and write traffic arriving at or originating
from a client. This network has a high throughput requirement.
Traffic between SDSs forms the bulk of back-end storage traffic. Back-end storage
traffic includes writes that are mirrored between SDSs, rebalance traffic, rebuild
traffic, and volume migration traffic. This network has a high throughput
requirement.
MDMs coordinate operations inside the cluster. They direct PowerFlex to manage rebalance, rebuild, and redirect traffic. They also coordinate Replication Consistency Groups, determine replication journal interval closures, and maintain metadata synchronization with PowerFlex replica-peer systems. MDMs are redundant and must continuously communicate with each other to establish quorum and maintain a shared understanding of data layout.
MDMs do not carry or directly interfere with I/O traffic. The data exchanged among
them is relatively lightweight, and MDMs do not require the same level of
throughput required for SDS or SDC traffic.
MDM to MDM traffic requires a stable, reliable, low latency network. MDM to MDM
traffic is considered back-end storage traffic. PowerFlex supports the use of one or
more networks dedicated to traffic between MDMs.
The primary (also known as the Master) MDM must communicate with SDCs when the data layout changes. This can occur when the SDSs that host an SDC's volume storage are added, removed, placed in maintenance mode, or go offline. It may also happen if a volume is placed into a Replication Consistency Group.
Communication between the Master MDM and the SDCs is lazy and asynchronous
but still requires a reliable, low latency network. MDM to SDC traffic is considered
front-end storage traffic.
The primary (or master) MDM must communicate with SDSs to monitor SDS and
device health and to issue rebalance and rebuild directives. MDM to SDS traffic
requires a reliable, low latency network. MDM to SDS traffic is considered back-end
storage traffic.
Other Traffic
There are many other types of low-volume traffic in a PowerFlex cluster, including infrequent management, installation, and reporting traffic.
The data workflow in PowerFlex is categorized as Write I/O and Read I/O. Each workflow has its own functional operation.
Write Operation
Write I/O from the application is sent to the SDC service running on the server node where the application is installed.
1. The SDC sends the I/O to the SDS where the primary copy of the storage block is located.
2. The SDS on that node writes the data to the primary block and sends the data to the other SDS that holds the secondary copy of the block.
3. The SDS that holds the secondary block acknowledges the data packet.
4. Once the SDS that holds the primary data block receives an acknowledgment from the SDS that holds the secondary data block, it sends an acknowledgment back to the SDC. The SDC forwards the acknowledgment to the application and the write operation is marked completed.
Movie:
The web version of this content contains a movie.
Read Operation
Reads consist of Read Hits and Read Misses. A Read Hit is a read from the PowerFlex system (SDS) where the requested data is already in the system Read Cache. Therefore, read hits are served at memory speed rather than disk speed. A Read Miss is a read to the PowerFlex system when the requested data is not in cache and must be retrieved from physical disks.
1. The application triggers the SDC to issue the I/O to the SDS.
2. If the data is in the SDS read cache (a read hit), it is served from cache. If there is a read miss, the data is retrieved from the physical disk.
3. The requested I/O is acknowledged and the data is sent back to the application.
Movie:
The web version of this content contains a movie.
Write I/O
The SDC fulfills the write I/O request regardless of where any particular storage block physically resides. Writes are buffered in the host memory for read-after-write caching. One way to achieve write buffering is to use RAID controllers (for example, LSI or PMC) that have battery backup for write buffering. The DRAM buffer is protected against sudden power outages to avoid data loss.
Reads
Reads that are serviced by the RAID controller cache are still considered a Read Miss from the PowerFlex management point of view. Note that sequential reads are not counted separately. If an I/O is serviced from the host read cache, it is counted as a Read Hit; any other I/O is counted as a Read Miss. In terms of resources consumed, a host write I/O generates two I/Os over both the network and the back-end drives. A read I/O generates one network I/O and one back-end I/O to the drives.
Two-layer Deployment
In a two-layer deployment, the SDS is installed on a separate node from the SDC.
The front-end (client) is separated from the back-end (storage) data traffic.
Hyperconverged Deployment
In the hyperconverged (HC) configuration, both the SDC and the SDS are installed
on the same node. HC enables the applications and storage to reside on the same
node.
Mixed Deployment
In a mixed deployment, hyperconverged nodes are combined with storage-only or compute-only nodes in the same system.
Users can configure, provision, maintain, and monitor PowerFlex rack systems with
the help of various management tools.
Note: PowerFlex Manager (PFxM) is the main management tool that is used for PowerFlex rack. The other tools should be used only for tasks that PowerFlex Manager cannot perform.
1: The PowerFlex Command Line Interface (CLI) is used to perform the entire set
of configuration, maintenance, and monitoring activities in PowerFlex rack.
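A minimal sketch of a CLI session on the primary MDM node follows; the command names use the classic scli syntax, and the user name is an example:
# Log in to the MDM cluster (prompts for the password)
scli --login --username admin
# Display a system-wide summary: capacity, SDSs, SDCs, volumes, and alerts
scli --query_all
# End the session
scli --logout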
3: For VMware hypervisor environments, the vSphere Web Client and vSphere Client enable the management of all virtual components. These components include the ESXi hosts, virtual machines, distributed switches, datastores, and more. The vSphere Web Client and vSphere Client connect to the VMware vCenter Server Appliance (VCSA), which runs as a virtual machine cluster on the management controller. PowerFlex rack has two separate vSphere environments, one for the production cluster and another for the management controller.
Note: The Dell EMC PowerFlex rack plug-in is supported for vCenter 6.5 and 6.7
versions with the old branding. There is no plug-in UI support available for vSphere
7.0 yet.
To learn more about VMware vCenter, and Web Client functionality, see the
VMware website.
PowerFlex Manager
Key Terminology: Resource, Template, Service, Compliance, Repository
The Integrated Dell Remote Access Controller (iDRAC) allows out-of-band remote access to a Dell PowerEdge server. iDRAC alerts administrators to server issues, helps them perform remote server management, and reduces the need for physical access to the server.
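For example, from an SSH session to the iDRAC, the racadm utility can be used to check server identity and sensor health out of band. The IP address below is a placeholder:
# Open an out-of-band session to the iDRAC of a PowerFlex node
ssh root@192.168.101.21
# Display model, service tag, BIOS, and iDRAC firmware information
racadm getsysinfo
# Display fan, temperature, voltage, and power sensor readings
racadm getsensorinfo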
PowerFlex Manager is purpose-built software for the PowerFlex product family. This comprehensive IT operations management software automates and simplifies implementation, expansion, and lifecycle management. Best of all, PowerFlex Manager is wizard-driven, making it easy to navigate, consume, and manage your PowerFlex system resources.
Cisco Nexus switches run NX-OS, an embedded operating system to control the
switch functions. NX-OS has a command-line interface to manage and monitor the
switch environment. NX-OS supports single device management for authentication,
configuration, and updates. NX-OS CLI is used to manage services, health,
performance, and troubleshooting of Cisco Nexus Switches.
Remote Connectors
PowerFlex provides two options for sending system events to remote monitoring systems: remote syslog and SNMP.
Remote Syslog
The MDM syslog service can send events, via TCP/IP, to RFC 6587-compliant
remote (or local) Syslog servers. Messages are sent with facility local0, by default.
Once the syslog service is started, all events will be sent until the service is
stopped.
− Customers can have SNMP agents that are configured to send information directly to an SNMP server.
Simple Network Management Protocol (SNMP) is a network management protocol that is used for collecting status information from network devices, such as servers and switches. An SNMP-enabled device runs an SNMP agent and communicates with the SNMP management server to share information about device status. In PowerFlex rack, all SNMP traps should be directed toward the customer's active SNMP monitoring system. This provides proactive alerting for critical and warning-level events. These events include, but are not limited to, hardware failures requiring field replacement and software faults that could negatively impact the stability of the system.
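As a hedged example using the open-source net-snmp tools, the monitoring server can verify that an SNMP-enabled component responds before relying on its traps. The IP address and community string are placeholders and must match the customer's SNMP configuration:
# Read the system description from an SNMP-enabled device (for example, an iDRAC)
snmpget -v2c -c public 192.168.101.21 sysDescr.0
# Walk the standard system subtree to confirm broader read access
snmpwalk -v2c -c public 192.168.101.21 system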
Protection Domain
A Protection Domain is a logical group of SDSs. Volumes and their mirrored copies always reside within a single Protection Domain, so each Protection Domain forms an independent failure boundary. For example, if two SDSs that are in different Protection Domains fail simultaneously, data is still available.
Storage Pool
Storage Pools enable creating different storage tiers in the PowerFlex system. The best practice is to have the same type of storage devices within a Storage Pool. This ensures that each volume is distributed over the same performance type of storage device. PowerFlex provides two types of Storage Pools: Medium Granularity (MG) and Fine Granularity (FG). MG and FG Storage Pools can work with zero padding either enabled or disabled.
Fault Set
A Fault Set is a logical entity that contains a group of SDSs within a Protection Domain that have a higher chance of going down together, for example, when they are all powered from within the same rack. By grouping them into a Fault Set, PowerFlex mirrors the data for a Fault Set on SDSs that are outside that Fault Set. Availability is assured even if all the PowerFlex nodes within one Fault Set fail simultaneously.
A Fault Unit can be a Fault Set or an SDS that is not assigned to a Fault Set. A minimum of three fault units (Fault Sets or independent SDSs) is required per Protection Domain.
Once the Fault Sets are created, SDSs can be distributed across them equally. A
SDS can only be added to a Fault Set during the initial configuration of the SDS. An
improper or unbalanced configuration can cause volume creation to be
unsuccessful.
Acceleration Pool
An Acceleration Pool is a group of acceleration devices (NVDIMMs) within a Protection Domain. Fine Granularity Storage Pools require an NVDIMM-based Acceleration Pool, which is assigned to the FG pool when it is created.
Resilience
More than one Protection Domain helps in improving system resilience. This keeps the production I/O unaffected even if there is a failure of a server or media device.
Performance Isolation
Separating volumes into different Protection Domains helps establish SLA tiers for performance planning. For example, highly accessed volumes can be assigned to "less busy" domains, or a particular domain can be dedicated to an application (server-based tiering).
Tenants are segregated efficiently and securely for data location and partitioning in
multitenancy deployments.
Network Constraint
Each volume block has two copies that are on two different SDSs. The copies enable the system to maintain data availability following a device, network, or server failure. The data remains available following multiple failures, provided each single failure takes place in a different Storage Pool.
• Define a capacity Storage Pool consisting of all HDDs in the Protection Domain.
• Define a performance Storage Pool consisting of all SSDs in the Protection
Domain.
Zero-Padding
Enabled
• Enabled zero padding ensures that every read from an area that was not previously written to returns zeros. Some applications might depend on this behavior. Furthermore, zero padding ensures that reading from a volume does not return information that was previously deleted from the volume. This behavior incurs some performance overhead on the first write to every area of the volume.
Disabled
The image describes the PowerFlex Manager high-level workflow, which is typically performed by the Dell Technologies Professional Services team. View the details of each step highlighted in red.
5: PowerFlex Manager monitors services for compliance. When the service is out
of compliance, PowerFlex Manager displays the service status on the Dashboard
compliance tab and on the Services page status column. Administrators perform
the update on the service from the Services page.
6: When you first log in to PowerFlex Manager, you are prompted with an Initial
Setup wizard for initial rack configuration. This wizard enables you to configure the
basic settings that are required to start using PowerFlex Manager.
7: The PowerFlex Manager Getting Started page guides you through the common configuration tasks that are required to prepare a new PowerFlex Manager environment.
Movie: The web version of this content contains a movie.
2. In this example, we’ll deploy the 4-node VMware HCI Service. In the Deploy
Service popup wizard,
d. Click Next.
3. On the Deployment Settings page, since we are using an HCI template, the information is prepopulated. The information is based on the values already configured in the template, but PowerFlex Manager allows overriding some of the values if needed. Examine and accept the default values for the cluster. Click Next.
4. Finally, you can select whether to Deploy Now, or Deploy Later. In this case,
we’ll deploy it now.
Select Deploy Now and then click Next in the lower right corner.
5. On the Summary page, the service to be deployed is presented, including the node and network configurations, the vCenter and PowerFlex configuration, the CloudLink setup, and the Storage Pool details.
a. Scroll to the bottom and select Finish. Select Yes in the confirmation window that pops up.
8. After both volumes are added, the health status of the Service switches to
Green/Healthy, and the volumes appear in the Service Details map and under
the Storage resources listed below.
The PowerFlex Manager (PFxM) is the primary interface for the PowerFlex rack
management and orchestration. PowerFlex Manager is used to monitor and
manage the PowerFlex Cluster.
A service deployment results in the creation of a separate storage pool for each
type of disk found in the nodes. The deployment process adds the disks from the
nodes to the appropriate storage pools based on the expected types for each pool.
After the Service is deployed, from the Service page in PowerFlex Manager, you
can view the state of a service at the component level for PowerFlex rack
deployments.
Once the Presentation Server is deployed using the PowerFlex Manager, the
service can be used to launch the PowerFlex UI.
Clicking the Management IP address launches the PowerFlex Web UI (Presentation Server).
Depending on the allocation unit size, a storage pool layout can either be of
"Medium Granularity (MG)" or "Fine Granularity (FG)" type.
• In MG storage pools, volumes are divided into 1 MB allocation units, which are
distributed and replicated across all disks contributing to a pool. The MG
storage pool works great for a performance-driven workload.
• FG storage pools are more space efficient, with an allocation unit of just 4 KB
and a physical data placement scheme based on Log Structure Array (LSA)
architecture built on NVDIMMs. If you need to enable compression or replication
options, you must have an FG storage pool.
Medium Granularity: space allocation in 1 MB units; compression is not supported.
Fine Granularity: space allocation in 4 KB units; compression is supported.
Each storage platform has its own advantages and use-cases, and administrators
are free to choose between both layouts. A system can support both FG and MG
pools on the same SDS nodes, and volumes can be nondisruptively migrated
between the two pools.
Why FG Pools?
MG Pool: Snapshots cause increased overhead because new writes and updates to the volume's data each require a 1 MB read/copy action. This might have an impact on performance in some cases. Compression cannot be enabled; uncompressed blocks of data consume a predictable size on disk (the data size that is written is equal to the data size stored). Even when compression is enabled at the application level, it creates irregular block sizes and empty regions.
FG Pool: FG pools drastically reduce snapshot overhead. Enabling compression or making heavy use of snapshots has almost zero impact on the performance of the volumes. FG pools offer space-saving services and additional data integrity. They have the same elasticity and scalability properties as MG pools and are a great choice for most cases where data is compressible and where space efficiency is more important than raw I/O.
Why MG Pools?
FG Pool: When compression is enabled, reads are slower than when compression is disabled; therefore, in some cases FG is slower than MG. FG pools also have a larger metadata footprint: 256x more metadata is written due to the 4 KB allocation unit compared to the 1 MB allocation unit. Byte alignment further increases the amount of metadata, and compression results in more data to be stored, which adds even more metadata. The metadata of FG pools cannot be kept in memory as it is in MG pools, so FG reserves some space on each disk to store the metadata.
MG Pool: The MG layout is a better choice for workloads with high performance requirements. The MG layout is also a good option when the data is incompressible (for example, if the data is already compressed or is encrypted by the application).
Fine Granularity
The Fine Granularity (FG) layout requires both Flash media (SSD or NVMe) and
NVDIMM to create an FG storage pool. FG layout is thin-provisioned and zero-
padded by nature and enables PowerFlex to support inline compression, more
efficient snapshots, and persistent checksums.
Layout
The fine granularity (FG) layout improves space utilization because of a smaller
data footprint. FG layout requires an NVDIMM-based acceleration pool. The
acceleration pool along with SSD or NVMe media is assigned to the FG pool at the
time of creation. If you only have one NVDIMM on each SDS, you can only have
one FG storage pool in the protection domain. If there are two NVDIMMs per SDS,
you can either have a larger acceleration pool for the single FG pool or have two
separate FG pools.
Data Compression
Defragmentation
Placing or fitting data on disk becomes complex with compression, so a new way to
lay out the data on a disk is needed. Also, how snapshots are written can cause
write amplification problems.
LSA provides the FG layout with a smaller allocation unit to handle problems with
snapshots and provides a different way to place data on the disk and handle gaps
left by compression.
LSA architecture provides a viable solution in the FG layout. Click here to learn
more about why we need the FG layout with the LSA architecture in storage pools.
Inline compression reduces the volume of data that is stored on the disk and
improves space utilization. This storage efficiency feature is enabled using
NVDIMM devices that are used for Fine Granularity storage pools.
1. The SDC requests a 4 K block of data from the SDS. The data can be on the NVDIMM or the SSD. The NVDIMM is used to enable compression and as a performance tier for the SSD and for SSD endurance. However, some data is always present on the NVDIMM only.
2. The SDS looks up the LSA logs containing the data. If the data is not on the NVDIMM, it is fetched from the SSD.
3. The SDS gets the compressed data (copied into the NVDIMM).
4. The SDS decompresses the data - now it is exactly 4 K.
5. The SDS sends the uncompressed data to the SDC.
6. The SDC gets an uncompressed 4 K block of data.
Administering Volumes
To add a new volume to the cluster, from the Services page, choose Add Volume under the Add Resources option. In the Add Volume wizard, parameters such as volume size and compression can be defined. Volumes can be assigned to existing datastores, or new datastores can be created.
Once the storage volumes are added to the service, PowerFlex Manager lets you
view details about the storage volumes from the Services Page.
After you create the volumes in a storage-only service, they are added to
PowerFlex, but not mapped. When you add the volumes to a compute-only service,
PowerFlex maps the volumes and creates the datastores.
For a hyperconverged service, the added volumes are mapped to the datastore. PowerFlex Manager requires you to enter the datastore name for each new volume that must be added, because PowerFlex Manager also creates an ESXi cluster and vCenter datastore for a hyperconverged service.
Volume Migration
When migrating PowerFlex volumes from one storage pool to another, the volume and all its snapshots are migrated together (known as vTree granularity). Migration is nondisruptive to ongoing I/O and is supported across storage pools within the same protection domain or across different protection domains. Zero padding must be enabled for those storage pools.
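As an illustrative sketch only, a vTree migration can also be started from the CLI. The option names below are assumptions based on the classic scli syntax and may differ by release, so confirm them in the CLI reference before use:
# Log in to the MDM cluster
scli --login --username admin
# Migrate the volume and its snapshot vTree to another storage pool
# (option names are assumptions; verify with the scli help for your release)
scli --migrate_vtree --volume_name prod_vol01 --protection_domain_name PD1 --storage_pool_name FG_Pool01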
Movie: The web version of this content contains a movie.
The PowerFlex rack compute-only and hyperconverged nodes provide the required computing resources. These resources are pooled together and configured to host virtual machines. On these VMs, you can run your application workloads and other services. You have a choice of hypervisors to use on these nodes: VMware vSphere, Red Hat Virtualization (RHV), or Hyper-V (two-layer deployment only).
PowerFlex rack has two separate vSphere environments - one for the PowerFlex
Management Controller cluster, and the other for the PowerFlex node cluster.
1: The Controller cluster hosts VMs that provide services for the PowerFlex rack
system itself. It is a VMware vSphere cluster where all the nodes run the ESXi
hypervisor, and they are managed by VMware vCenter. For storage, the Controller
cluster uses VMware vSAN. Similar to PowerFlex, vSAN aggregates the locally
attached disks of the PowerFlex Controller nodes to create a pool of distributed
shared storage.
2: The PowerFlex node clusters primarily host the production virtual machines. Nodes in the PowerFlex cluster run either the ESXi or Red Hat Virtualization hypervisor and are managed by VMware vCenter or by Red Hat Virtualization Manager. Unlike the Controller cluster, the PowerFlex node cluster uses PowerFlex for all the customer production data. PowerFlex provides massive scalability and flexibility in terms of hypervisor/OS and bare-metal deployments. On ESXi nodes, Storage Virtual Machines provide the node's storage to PowerFlex.
− Virtualization Management
o PowerFlex Controller VCSA with vCenter Server High Availability
o PowerFlex node cluster VCSA
o PowerFlex node Red Hat Virtualization Manager or RHV-M (Used as an
optional component only when Red Hat virtualization is deployed)
− PowerFlex Gateway
− PowerFlex Manager and OpenManage Enterprise
− Secure Remote Services
− Windows jump servers
The PowerFlex Controller cluster maintains the environment for the PowerFlex rack
management. The virtual machines running on the Controller Cluster include
vCenter Server Virtual Appliance (vCSA), PowerFlex Gateway VM, PowerFlex
Manager, and OpenManage Enterprise VMs, Secure Remote Services VMs, and
Windows jump servers for support access. This cluster uses vSAN for storage, so
you do not need PowerFlex in the controller cluster.
Note: An administrator may not interact with most of these VMs for day-to-
day administration activities.
The PowerFlex node clusters (or production clusters) provide compute and storage to customer applications. PowerFlex pools together the local storage of the nodes. The Storage VMs cannot be migrated to other hosts because they need direct access to the local storage of their node. However, other VMs consuming the PowerFlex storage can be migrated from one ESXi host to another or from one RHV host to another.
Nodes running RHV do not need Storage VMs. Instead, the PowerFlex SDC and SDS software run directly on the Red Hat Enterprise Linux operating system of the node.
The storage-only nodes run an embedded operating system that is based on the CentOS kernel and contribute storage to the PowerFlex cluster. No customer applications run on storage-only nodes. Compute-only nodes provide computing power but do not contribute any storage to the PowerFlex storage pool.
The VCSA hosted on the controller cluster manages the ESXi nodes in the PowerFlex cluster. Similarly, the Red Hat Virtualization Manager (RHV-M) virtual machine on the controller cluster manages the RHV nodes.
Each ESXi node requires a Storage Virtual Machine (SVM) to access PowerFlex storage. Because the SVM provides access to PowerFlex storage, it cannot itself be stored on PowerFlex storage. Instead, its files are stored on a small datastore that uses the internal BOSS (PowerEdge 14G) or SATADOM (13G) storage of the node. These datastores are labeled DASXX or something similar.
This device is also the boot drive of the PowerFlex node. Depending on how the node is used in the PowerFlex rack cluster, it can run a different operating system (Linux, Windows, or ESXi).
DASXX datastores should only store the SVMs and system files. Production virtual
machines should be stored on datastores that are backed by PowerFlex storage.
Shown is the storage view in the vSphere Web Client for the PowerFlex cluster.
Notice the five DASXX datastores in this example. There is one datastore for each
node. The device backing of the datastores is labeled as a local SATADOM device.
Along with the datastore, these devices also host the ESXi operating system.
Any virtual machine that is used for production must be stored on PowerFlex storage. The first thing that you must do is create a volume in PowerFlex. Thick provisioning is recommended for the PowerFlex volume because the hypervisor is not aware of whether the volume is overprovisioned. Thin provisioning can then be used in VMware when creating the virtual machine disks (if needed).
The volume must be mapped to all the SDCs so that they have access to the
volume. Then make a note of the PowerFlex Volume ID number of the new volume.
This number is used to locate the volume in vSphere.
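A minimal sketch of the same steps from the CLI, assuming an existing protection domain and storage pool; the names, size, and IP address are example values:
# Create a 512 GB thick-provisioned volume in an existing storage pool
scli --add_volume --protection_domain_name PD1 --storage_pool_name Pool01 --size_gb 512 --volume_name prod_vol01 --thick_provisioned
# Map the volume to an SDC (repeat, or allow multi-map, for every host that needs it)
scli --map_volume_to_sdc --volume_name prod_vol01 --sdc_ip 192.168.151.21 --allow_multi_map
# Show the volume details, including the volume ID used to identify it in vSphere
scli --query_volume --volume_name prod_vol01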
In vSphere, you can see the details of the Storage Devices available to a PowerFlex node cluster. The Fibre Channel Disks that you see here are the PowerFlex volumes that are mapped or available to a specific host. The ends of their identifiers match the PowerFlex Volume ID that was assigned during the creation of the volume.
After you have created the PowerFlex volumes, you can create a datastore on the
PowerFlex Volume Fibre Channel Disk. When creating a datastore, be sure to
select a device that uses a PowerFlex volume. Here, you see the wizard screen to
select the device. The selected device has an identification number that matches
the volume that is created in PowerFlex. After completing the wizard, select Finish,
and the datastore is created.
PowerFlex rack provides the same methods for building virtual machines in
vSphere as in any vSphere environment. Some of these methods, such as cloning
or deploying VMs from a template, require vCenter. Others are universal to
whatever management platform is being used. Using the New Virtual Machine
wizard makes it easy. One key difference when allocating storage for a VM is that
you should choose a datastore that uses PowerFlex volumes.
Building a fresh new VM involves installing a guest operating system into the VM and installing VMware Tools. Installing an operating system into a VM takes about the same amount of time it would take on a physical system. Most other deployment methods involve imaging a base VM template to avoid installing the operating system repeatedly for each VM.
Cloning a VM
Cloning a VM involves taking a copy of a base VM (image) to create a new virtual machine. Cloning takes a copy of the configuration and storage files of a VM and uses them as the basis for a new virtual machine.
Importing a virtual
appliance
Create VM Example
When creating VMs in the PowerFlex rack environment, ensure that you select the
PowerFlex storage and not the individual datastore on each host. Allocating
storage is part of the process when using the New Virtual Machine wizard in the
vSphere Web Client.
In the vSphere Web Client app, you can choose any datastore. You must be careful and ensure that
you select the PowerFlex datastore and not the local datastore that is shown as DASXX.
Add Storage to VM
You can expand the storage capacity of a virtual machine by adding a virtual disk (or, alternatively, by attaching a Raw Device Mapping (RDM)). To add a disk to a virtual machine, select New Hard disk under New Device in the Edit Settings screen. Specify the size of the new disk, expand New Hard disk, and then expand Location. Select either Store with the virtual machine or Browse. Browse shows you a list of devices (as shown in the image). If you set up your VMFS datastores with meaningful names, it can help you to choose the correct device.
There are many options to control a VM and its environment. Some of these
options include monitoring, creating a snapshot, cloning, creating a template, and
adding/removing devices. To see all the options, select the VM in the left navigation
pane and click Actions.
Management of VM
• VM power on/off
• VM Clone/Template
• VM Snapshot
• Edit Settings
vMotion Migration
• The VM must not have a connection to an internal standard switch (a virtual switch with zero uplink adapters).
• The VM must not have CPU affinity configured.
• vSphere vMotion must be able to create a swap file accessible to the destination host before migration can begin.
Types of Migration
See VMware website for the latest information about maximum concurrent migration to a single
vSphere VMFS datastore.
Migration Wizard
Note: Storage virtual machines (SVM) are not candidates for migration as they use
local storage.
You use Red Hat Virtualization Manager (RHV-M) to manage the Red Hat
Virtualization environment. RHV-M runs on a virtual machine that is hosted on the
controller cluster.
Note: RHV-M is an alternative to the VMware vSphere interfaces. It is used when the nodes run Red Hat Virtualization (on Red Hat Enterprise Linux) instead of ESXi as the hypervisor.
Network Types
Network resources are managed using PowerFlex Manager. From the PowerFlex
Manager UI, go to Dashboard>Settings>Networks.
1: Used for management of Hypervisors in the system. These network VLANs are
defined at system level.
6: Used for data traffic between Storage Data Servers (SDS) and Storage Data
Clients (SDC).
8: Used to manage the network that you want to use for live migration. Live
migration enables you to move running virtual machines from one node of the
failover cluster to a different node in the same cluster.
Verifying Connectivity
The connectivity between Storage Data Server (SDS) and Storage Data Client
(SDC) and PowerFlex Gateway can be verified using an SSH session.
Use this procedure to ping the Storage Data Server (SDS) from Storage Data
Client (SDC).
1. Open an SSH session with a VMware ESXi host using PuTTY or a similar SSH client.
2. Log in to the host as root.
3. Use vmkping to ping each SDC and SDS (see the example below).
4. Repeat from each VMware ESXi host.
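A minimal example, assuming the PowerFlex data networks use jumbo frames and the data VMkernel interfaces are vmk1 and vmk2; the interface names and IP addresses are placeholders:
# Ping an SDS data IP through a specific VMkernel port with a non-fragmented,
# jumbo-sized payload (8972 bytes of data + 28 bytes of headers = 9000-byte MTU)
vmkping -I vmk1 -d -s 8972 192.168.152.101
# Repeat on the second PowerFlex data network
vmkping -I vmk2 -d -s 8972 192.168.153.101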
Use this procedure to ping the SDS and PowerFlex Gateway from SDS.
1. Open an SSH session with an SDS host using PuTTY or a similar SSH client.
2. Log in to the host as root.
3. Ping each SDS and the PowerFlex Gateway using a 9000-byte packet without fragmentation on the SDS-to-SDS data networks (see the example below).
4. Repeat for each SDS host.
5. Repeat for the PowerFlex Gateway.
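On a Linux-based SDS, the equivalent check can be run with the standard ping utility; the target IP address is a placeholder:
# -M do  : set the "do not fragment" flag
# -s 8972: payload size, so the full packet equals the 9000-byte MTU
# -c 4   : send four probes
ping -M do -s 8972 -c 4 192.168.152.102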
Maximum transmission unit (MTU) is the largest physical packet size, measured in bytes, that a network can transmit. Any messages larger than the MTU are divided into smaller packets before transmission. The MTU on the access switches can be checked using PuTTY or an SSH client. The MTU on VMkernel ports and all other port groups can also be checked using the vSphere Web Client.
The following procedure can be used to check the maximum transmission unit on the access switches.
1. Open an SSH session with the switch using PuTTY or a similar SSH client. You can also connect to the serial console.
2. Check each interface for its MTU configuration (see the example below).
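On a Cisco Nexus access switch, for example, the interface MTU can be checked as follows; the interface name is a placeholder:
Cisco_Access-A# show interface ethernet 1/1 | include MTU
Cisco_Access-A# show running-config interface ethernet 1/1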
You can add an available network to a service or choose to define a new network
that was initially deployed outside of PowerFlex Manager. You cannot remove an
added network using PowerFlex Manager.
PowerFlex Manager supports static route configurations for both replication and
nonreplication use cases. A static route allows communication between compute-
only and storage-only nodes in different network environments. When you define a
network, PowerFlex Manager enables you to specify the IP address for the subnet.
You can add a static route to a template before deployment, or add it later as a
resource on the deployed service.
Services > View Details > Resource Action > Add Resources > Add Network
These commands are an example of how to create VLAN 10 on a Dell access switch.
Dell# configure
Dell(config)# interface vlan 10
Dell(config-vlan)# no shutdown
Dell(config-vlan)# exit
These commands are an example of how to add VLAN 10 to the uplink port-
channel 100.
Cisco_Access-A# configure
Cisco_Access-A(config)# vlan 10
Cisco_Access-A(config-vlan)# exit
Cisco_Access-A(config)# interface port-channel 100
Cisco_Access-A(config-if)# switchport trunk allowed vlan add 10
Cisco_Access-A(config-if)# end
The access switches provide networking for both the controller nodes and the PowerFlex nodes. The traffic coming from these nodes uses various VLANs so that it remains separated, even though it travels over the same physical cables. The switches are configured with virtual PortChannels (vPCs) to allow multiple physical connections to act as one, even if they are on separate switches. PowerFlex rack uses virtual PortChannels for all the management and production traffic.
[Diagram: PowerFlex node connected to both access switches - the management and data ports are VLAN trunked and bundled in a vPC, while the two PowerFlex data ports are VLAN tagged and are not part of a vPC.]
• PowerFlex Data - These ports tag all incoming traffic with a VLAN ID for that
data network. The two PowerFlex data ports on the two switches use different
VLAN IDs, and they are not part of a vPC. Instead PowerFlex handles the load
balancing of traffic on these two connections.
• Virtual Port Channels (vPC) - vPCs are connected to different network devices,
but act as a single port channel to a third device. The benefits of vPC include
high availability, fast convergence, and increased bandwidth.
[Diagram: Controller node connected to both access switches - the management/data and services data ports are VLAN trunked and bundled in vPCs.]
Each access switch has two ports that connect to each controller node. Since each
port carries traffic that is segregated on different VLANs, they are configured in
switchport trunk mode. This allows them to accept traffic tagged with multiple
VLANs.
Older network configurations at a customer site require editing the template node settings to match the customer node settings. The Node Switch Port Configuration setting in the operating system Settings section of the template specifies whether Cisco virtual PortChannel (vPC) or Dell Virtual Link Trunking (VLT) is enabled or disabled. The Port Channel option of the Switch Port Configuration turns the vPC or VLT on or off. The sample template's default Port Channel option provides link aggregation (a bundle of multiple port-channel connections) through the Link Aggregation Control Protocol (LACP). This default setting is the logical v3 network configuration.
To view the switch configuration, connect to the switch and use the Cisco NX-OS CLI. Here, you are logged into the Top of Rack switch.
Logical Networking
In PowerFlex rack, logical networking is provided at the switch level and at the node level. Distributed virtual switches (DVswitches) are configured to manage virtual networking. Both the Management node and the PowerFlex node consist of three DVswitches, each with multiple port groups.
The logical network topology can be viewed in VMware vSphere. Click here to see how to go to the logical network topology.
Hover over the components in the image below to see the networks between the
physical components and the DVswitches.
This view shows the logical networking in VMware vSphere. This view can be
accessed by selecting the Network icon, and then the distributed virtual switch you
want to see the networking of. Then, select the Configure tab > Topology from the
menu on the left.
Using the vSphere Web Client, you can view the physical adapters on an ESXi host by selecting it and going to the Configure > Physical adapters section. This shows a list of network interface cards and their speeds, MAC addresses, and so on. In this image, we are looking at the physical network interface cards, which show four 10 Gb ports that are used for VxFlex integrated rack networking. It also shows which distributed switch is using each interface as an uplink.
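The same information can be gathered from the ESXi command line, which is convenient when checking many nodes; this is a generic ESXi example rather than anything PowerFlex-specific:
# List all physical NICs with their link state, speed, and MAC address
esxcli network nic list
# List the distributed virtual switches and the uplinks they use
esxcli network vswitch dvs vmware list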
Production virtual machines need their own port groups and VLANs for networking. They should not use the existing port groups or VLANs, which are for PowerFlex rack system use only. Production traffic should also be separate from the PowerFlex data traffic. Put production traffic on DVSwitch0, and configure separate port groups and VLANs.
To configure the physical switches, find a VLAN number that is not in use. This is your new VLAN. Display the port-channel interfaces with the show interface description command. Make a note of the interface names that are used for the uplink, peer-link, and connections to all ESXi hosts. These are the port channels that need the new VLAN added.
Configure the access switches to accept traffic tagged with the new VLAN ID. Add the VLAN to the virtual port channels for the peer link, uplink, and ESXi hosts, using the vPC numbers identified earlier. Confirm that the VLANs have been added with the show vpc command; this shows your new VLAN listed under each of the vPCs (see the example below).
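A hedged sketch of this workflow on a Cisco Nexus access switch, assuming VLAN 200 is unused and that port-channels 100 (uplink), 101 (peer link), and 111 (an ESXi host) were identified earlier; all VLAN and port-channel numbers are placeholders:
Cisco_Access-A# configure
Cisco_Access-A(config)# vlan 200
Cisco_Access-A(config-vlan)# name Production-200
Cisco_Access-A(config-vlan)# exit
Cisco_Access-A(config)# interface port-channel 100, port-channel 101, port-channel 111
Cisco_Access-A(config-if-range)# switchport trunk allowed vlan add 200
Cisco_Access-A(config-if-range)# end
Cisco_Access-A# show vpc
Cisco_Access-A# show vlan id 200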
[Image callouts: Create VLAN, Peer link, Uplink, Hosts]
You also must create a port group that uses the new VLAN ID. Traffic from any virtual machine using that port group is tagged with the correct VLAN ID and is allowed to travel across the access switches.
User Roles
1: Users with the administrator role can view all pages and perform all operations in
PowerFlex Manager and grant permission to standard users to perform certain
operations. The administrator can perform functions like deploying a service,
exporting it to a file, and performing upgrades.
2: Users with the standard role can view pages and perform operations that are
based on the permission that is granted by an administrator. Also, standard users
can grant permission to other users to view and perform operations that they own.
A standard user with owner privileges can edit service information, and perform
compliance upgrades.
3: Users with the operator role can view pages and perform operations that are
based on the permission that is granted by an administrator. Drive replacement is
the primary operation that is performed by operator users.
4: Users with the read-only role can view all operations but are not allowed to
perform any operations. When a user logs in as a read-only user, PowerFlex
Manager does not allow the user to perform any operations.
User Permissions
The table provided shows a concise list of permissions for the user roles.
To get a detailed view of the permissions for each user role, navigate to the Online
Help section in the PowerFlex Manager interface.
Run discovery: Administrator - Yes; Standard - No; Operator - No; Read-only - No
Remove resources: Administrator - Yes; Standard - No; Operator - No; Read-only - No
Users can be added, modified, and deleted using the PowerFlex Manager
interface.
Create User
Edit User
Delete User
SSL Certificates
SSL certificates secure the communication between external components (CLI, UI, PowerFlex Gateway) and the MDM. A PowerFlex system supports SSL login for external components. The MDM cluster has an SSL certificate that is used externally with all components. There is a certificate for each of the active MDMs (in any MDM cluster mode).
In the PowerFlex Installation Manager UI, the certificate is generated upon the installation of the MDM components. This certificate is a self-signed certificate. The customer can replace it with a certificate signed by their certificate authority by generating a Certificate Signing Request (CSR).
You can also generate or upload an SSL certificate from the PowerFlex Manager
UI post installation. See the below images.
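As a generic illustration of the CSR step (file names and the subject are placeholders, and the key size and signing process are dictated by the customer's security policy):
# Generate a new 2048-bit private key and a certificate signing request
openssl req -new -newkey rsa:2048 -nodes -keyout powerflex-mdm.key -out powerflex-mdm.csr -subj "/C=US/O=Example Corp/CN=powerflex-mdm.example.com"
# Inspect the CSR before sending it to the certificate authority
openssl req -in powerflex-mdm.csr -noout -text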
PowerFlex Manager
Master MDM: The MDM in the cluster that controls the SDSs and SDCs. The Master MDM contains and updates the MDM repository, which is the database that stores the SDS configuration and specifies how data is distributed between the SDSs in the system. This repository is constantly replicated to the Slave MDMs so that they can take over during a system component failure, such as a failed network link. Every MDM cluster has one Master MDM.
Slave MDM: An MDM in the cluster that is ready to assume the role of the Master MDM if necessary. In a three-node cluster, there is one Slave MDM, which allows the cluster to tolerate a single point of failure. In a five-node cluster, there are two Slave MDMs, which allows the cluster to tolerate two points of failure. This increased resiliency is a major benefit of the five-node cluster. If you have five nodes or more, the recommended best practice is to deploy a five-node cluster.
Administrators can view and monitor the PowerFlex MDM cluster from the PowerFlex Manager or PowerFlex Web UI.
PowerFlex Manager
From the PowerFlex Manager UI, go to Services > Health and select the cluster whose health you want to check.
PowerFlex Web UI
More details on each MDM are available in the Configuration: MDM Cluster
Settings page.
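The MDM cluster state can also be checked from the CLI on the primary MDM, which is useful when the web interfaces are unavailable. A minimal sketch (the user name is an example):
# Log in and display the cluster mode, the member MDMs, and their current roles
scli --login --username admin
scli --query_cluster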
PowerFlex Manager enables you to change the MDM role for a node in a
PowerFlex cluster. For example, if you add a node to the cluster, you might want to
switch the MDM role from another node to the new node.
You can launch the wizard for reconfiguring MDM roles from the Services page or
from the Resources page.
Procedure (from the Services page):
i. Select a service that has the PowerFlex Gateway for which you want to reconfigure MDM roles.
ii. In the right pane, click View Details. The Service Details page is displayed.
iii. On the Service Details page, click Reconfigure MDM Roles. You can also click Reconfigure MDM Roles on the Node Actions menu on the Service Details page. The MDM Reconfiguration page is displayed.
From the Resources page: on the Details page, click Reconfigure MDM Roles. The MDM Reconfiguration page is displayed.
Using Virtual IP is the recommended PowerFlex best practice. Click the link for the
process to configure virtual IP address.
Linux
VMware
REST API
The REST API can also be used to add virtual IP addresses to the cluster. A virtual IP NIC must always be mapped to each virtual IP address; ensure that there are NICs available for this purpose.
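A hedged sketch of a REST API session against the PowerFlex Gateway: the login call is the standard gateway authentication flow, while the gateway address and credentials are placeholders, and the exact resource path and payload for modifying virtual IP addresses should be taken from the REST API reference for your release.
# Authenticate to the gateway; the call returns a session token
TOKEN=$(curl -sk --user admin:'MyPassword1!' https://192.168.105.120/api/login | tr -d '"')
# Use the token as the password for subsequent calls, for example to list systems
curl -sk --user admin:"$TOKEN" https://192.168.105.120/api/types/System/instances
# The action endpoint for setting virtual IPs is version dependent; consult the
# PowerFlex REST API reference for the exact path and request body.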
PowerFlex Manager
1. From the VxFlex OS Systems page, click Actions and select Configure
virtual IP address.
2. In the Configure Virtual IPs dialog box, select the network, and enter a virtual IP address.
License Management
You can add multiple standard licenses. In that scenario, details of all the licenses
are displayed together in the License Management section on the Virtual Appliance
Management page.
If you try to upload the same standard license a second time, you get an error message stating that the license has already been used.
The Background Device Scanner scans the system for errors and fixes them before they can affect the system. The scanner provides SNMP reporting about the errors that are found and keeps statistics about its operation. Information about errors is provided in event reports.
The scanning function is enabled or disabled (disabled by default) at the Storage Pool level.
Checksum Protection
The checksum feature identifies errors that change the payload during transit through the PowerFlex system. PowerFlex protects data in flight by calculating and validating the checksum value for the payload at both ends.
Read Operation
During read operations, the checksum is calculated when the data is read from the
SDS device. It is validated by the SDC before the data returns to the application. If
the validating end detects a discrepancy, it initiates a retry.
Write Operation
During write operations, the checksum is calculated when the SDC receives the
write request from the application. This checksum is validated just before each
SDS writes the data on the storage device. If the validating end detects a
discrepancy, it initiates a retry.
The checksum feature can be enabled through Storage Pool -> Settings ->
General. Pools with Fine Granularity, with or without compression, have persistent
checksum by default.
PowerFlex Rebuild
Forward Rebuild
Forward Rebuild creates another copy of the data on a new server. In this process,
all the devices in the Storage Pool work together, in a many-to-many fashion to
create copies of all the failed storage blocks. Creating copies of all failed storage
blocks ensures a faster rebuild.
Movie:
The web version of this content contains a movie.
Backward Rebuild
Backward Rebuild re-synchronizes the original copies rather than creating new ones. The re-synchronization is done by passing back only the data that changed while the copy was inaccessible during a device or node failure. This process minimizes the amount of data that is transferred over the network during recovery.
Movie:
The web version of this content contains a movie.
PowerFlex Rebalance
Rebalance is the process of moving data copies across different servers in the
system. It occurs when PowerFlex detects that the user data is not evenly balanced
across the devices in a Storage Pool. In this process, data is moved from the most
used devices to the least used ones.
Addition
Removal
Combination
Rate Limits
For example, if the rebuild rate is limited to 60 MB/s and the rebalance rate to 30 MB/s, then concurrently running rebuild and rebalance jobs are together limited to 60 MB/s.
Note: When both rebuild and rebalance occur simultaneously, the aggregate
bandwidth that is consumed by both will not exceed the individual maximum for
each type.
Disable or Enable
Rebuild and rebalance can be enabled or disabled for a specific Storage Pool. A practical example: application I/O workload is high when new servers are added to the cluster during the work week, so rebuild and rebalance operations can be deferred to the weekend to avoid network congestion. The decision to defer rebuilds should be carefully considered, since a rebuild mitigates a protection issue (a single remaining copy of data). Deferring rebalance, but not rebuild, makes more sense in a production environment.
Priority setting
Priorities can be defined for application I/O, rebuild, rebalance and migration
workloads. The administrator can choose one of these prioritization schemes for
each Storage Pool.
NOTE: For a given pool, a different prioritization scheme can be set for rebuild and
rebalance.
• Unlimited
Below are more details on network throttling and how to improve rebuild
performance.
Network throttling
Both rebuild and rebalance compete with application I/O for system resources, including network, CPU, and storage media. PowerFlex provides a rich set of parameters that can control this resource consumption. While the system is factory-tuned to balance expedient rebuild/rebalance against minimizing the effect on application I/O, the user has fine-grained control over the rebuild and rebalance impact. These limits can be modified using the Network Throttling option for Protection Domains.
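The same limits can also be adjusted from the CLI. The sketch below is illustrative only; the option names for the SDS network limits are assumptions and should be verified against the PowerFlex CLI Reference Guide for your release.

    # Assumed flag names (verify in the CLI Reference Guide before use):
    # limit rebuild and rebalance bandwidth for the SDSs in a Protection Domain
    scli --set_sds_network_limits --protection_domain_name PD1 \
         --rebuild_limit_in_kb_per_second 61440 \
         --rebalance_limit_in_kb_per_second 30720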
Snapshots in PowerFlex
With PowerFlex, you can create up to 126 snapshots for MG and FG pools. Out of
these snapshots, 60 snapshots can be policy-managed.
PowerFlex Web UI > Volumes > volume name > More > Create Snapshot
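Snapshots can also be created from the PowerFlex CLI. A minimal sketch, assuming a volume named vol01; the names and credentials are placeholders.

    scli --login --username admin
    # Create a snapshot of a single volume
    scli --snapshot_volume --volume_name vol01 --snapshot_name vol01_snap1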
Snapshot Policy
Snapshot policies contain a few attributes and elements, offering the ability to automatically take snapshots of specified volumes based on specified retention schedules. Up to 60 policy-managed snapshots can be retained per root volume.
Example:
--number_of_snapshots_per_retention_level 6,4,3,2,2,1 --read_only_snapshots
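Assembled into a complete command, the example might look like the sketch below. The retention-level and read-only flags come from the example above; the command name, the policy name flag, and the creation cadence flag are assumptions and should be checked against the PowerFlex 3.5 CLI Reference Guide.

    # Assumed command and flag names for creating a snapshot policy (verify before use)
    scli --add_snapshot_policy --snapshot_policy_name policy1 \
         --auto_snapshot_creation_cadence_in_min 60 \
         --number_of_snapshots_per_retention_level 6,4,3,2,2,1 \
         --read_only_snapshots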
Secure Snapshot
A secure snapshot is a snapshot that cannot be deleted for a predefined period. This feature was introduced in PowerFlex 3.5 to secure data for financial compliance regulations.
A secure snapshot with a set secured flag and expiration time cannot be manually
removed by a user or automatically deleted by a snapshot policy.
With native asynchronous replication, the data is replicated from one cluster to
another cluster. The cluster can be both a replication Source or a Target. The
Storage Data Replicator (SDR) mediates the flow of replicated data traffic between
the source and target cluster.
The Storage Data Replicator (SDR) is a logical storage component that manages
the I/O of replicated logical volumes. It is located alongside SDS on the same node.
From the point of view of the SDS, the SDR appears to be an SDC; from the point of view of the SDC, the SDR appears to be an SDS. The SDR mediates the flow of replicated
traffic, and the MDM instructs each of these logical elements where to read and
write data. The SDC uses its metadata cache to determine what data gets passed
to the SDS. It routes only the IO for replicated volumes through the SDR. Non-
replicated IO will not travel through the SDR.
Replication Consistency Groups (RCGs) establish the attributes and behavior of the replication of one or more volume pairs. One such attribute is the target replication storage cluster. While a given RCG can replicate to only one target cluster, other RCGs may replicate to other clusters, provided they have exchanged certificates and have been peered. RCGs are flexible. For some use cases, all volumes that are associated with an application can be assigned to a single RCG. For large applications, multiple RCGs can be created based on data retention, data type, or related application quiescing procedures, enabling read-consistent snapshots when needed. In general, RCGs are crash-consistent, but related snapshots can be read-consistent if application quiescing rules have been followed. The Recovery Point Objective specified in the RCG configuration can be set in seconds, minutes, hours, or even days.
Note: Before creating RCGs, the replication volumes must exist on both the source and target systems, and they must be of the same size. They are not required to reside in the same storage pool type (MG, FG). As a best practice in PowerFlex rack, each volume pair should have the same attributes (including zero padding and granularity). If a volume needs to be resized, the target volume should be expanded first to prevent disruptions in replication. It is not possible to migrate replicated volumes from one Protection Domain to another, because the replication journals do not span Protection Domains.
Click here to learn about the replication attributes defined in the RCG.
Replication Attributes
Consistency Mode
The RCG consistency mode defines the way the data is applied at the destination.
When the RCG is set to consistent (which is the default mode), the data is applied
at the destination only when the destination has a consistent image in the journal.
When the mode is set to inconsistent, the data is applied at the destination on
arrival without waiting for a full consistent image.
Pause Mode
When the RCG is paused, all application I/Os are stored in the source journal.
When replication of the RCG is resumed, the source SDR sends the journal
contents to the target SDR to be applied to the target volumes. You may want to
pause an RCG to handle a network issue between the peer systems or when fixing
a hardware issue.
Freeze Mode
When the RCG is frozen, replication remains active. The application I/Os are
replicated from the source SDR to the target journal but are not applied to the
target volumes. When the RCG is unfrozen, the target SDR starts applying the data
in the target journal to the target volumes. You may want to freeze an RCG in order
to create a snapshot of the replicated volume.
In the source system, for replicated volumes, the SDCs communicate with the
SDR. For nonreplicated volumes, the SDCs communicate directly with the SDSs.
Physical devices are connected to the SDS. The SDRs serve as a pipeline for I/Os.
Application I/Os (both reads and writes) intended for replication volumes are sent
from an SDC to an SDR. The source SDR packages the data into a consistent
journal barrier and distills them so that only the most recent writes are included.
The source SDR sends the journal barrier over the WAN to the target SDR. At the
target system, the SDR processes the journal barrier and applies it to the volumes.
Asynchronous replication defines a point in time and ensures that all writes carried out before that point, and no writes carried out after that point, are copied to the destination copy. Once all the data is transmitted to the destination, the destination copy is consistent. PowerFlex 3.5 supports only one-to-one replication topologies.
Movie:
Replication Steps
Prepare
Configure
− Volume pair: A volume on the source domain and its copy on the target
domain.
− The volumes must be pre-created and can span different storage pools
− Consistency is kept across all volume pairs in an RCG
Initialize
First-time initialization is the process in which the source volumes' data is copied to the target volumes. At the end of the initialization, the target volume image is consistent. During the process, the application's writes continue without interruption.
Steady State
1. When you select the replication template, the Enable Replication option is
enabled by default.
2. In the template, under Node > Network Settings, select the required VLANs to enable replication on interface 1 port 1 and port 2. Repeat the same on interface 2.
3. Under Node > Static Routes, select Enabled. Click Add New Static Route.
4. Choose the source and destination VLANs from the menu, and manually enter
the gateway IP address of the source replication VLAN. Repeat for the second
replication VLAN.
Maintenance Modes
A few conditions must be met before entering maintenance modes. The drop-down has more details.
Only one Fault Unit (or stand-alone SDS) can be in Maintenance Mode at any
given time.
No other SDS can be in degraded or failed state (force override can be used).
Movie:
The web version of this content contains a movie.
• Entering IMM
− The node is immediately and temporarily removed from active participation
without building a new copy of the data on other nodes.
− Existing data is, in effect, temporarily frozen.
− A rebuild is not triggered when the node goes offline.
• During IMM
− The system mirrors new writes.
− Changes are tracked for writes that would have affected the node under
maintenance.
− If the node undergoing maintenance fails, IMM ensures that there is no data unavailability (DU). If the node does fail, a rebuild of the old data is required to return the cluster to normal health.
• Exiting IMM
Movie:
The web version of this content contains a movie.
• Entering PMM
− Creates a new, temporary copy of the data by leveraging the many-to-many
rebalance.
− The data on the node being maintained is unchanged. This makes for three
copies, but only two are available.
• During PMM
− Like IMM, PMM does not need to rehydrate the node with data; only the deltas that occurred during maintenance are resynced back. PMM reuses the same mechanism as IMM does today.
− The system mirrors new writes. Changes are tracked for writes that would
have affected the node under maintenance.
• Exiting PMM
− Changes are tracked and re-synched when the node is available again.
PowerFlex Manager enables you to put a node in service mode when you must
perform maintenance operations on the node. When you put a node in service
mode, you can specify whether you are performing short-term maintenance or
long-term maintenance work.
To enter the maintenance mode through the PowerFlex Manager, follow these
steps.
CloudLink Integration
For PowerFlex 3.5, CloudLink can be deployed using the PowerFlex Manager.
View the CloudLink deployment steps below.
• Reuse the available sample templates and configure the settings for the
presentation and gateway VM.
• The image displays the PowerFlex Gateway discovered in the Resources page.
• Ensure the correct target CloudLink Center is selected in the CloudLink Center
Settings.
VMware Snapshots
Although a snapshot acts as a copy of the entire virtual machine, only the changes
to the virtual machine are stored. This means that the initial size of a snapshot is
small. The longer a snapshot is retained, the more capacity it uses, since the
number of changes to a VM grows.
To create a snapshot, right-click the virtual machine and select Snapshots > Take Snapshot.
VMware High Availability (HA) is triggered on:
• ESXi host failure or isolation
During normal operation, a virtual machine uses only one ESXi host as its compute source. When there is a problem with that host, its virtual machines go down and are then restarted (rebooted) on another host.
High Availability requires that the virtual machines use shared storage and that the
hosts are placed in a cluster with a shared management network. PowerFlex rack
already has all hosts in a cluster with a shared management network, and
PowerFlex provides shared storage between all hosts. PowerFlex rack
environment, therefore, can enable the use of VMware vSphere High Availability
feature.
Select Cluster > Configure > vSphere Availability > Edit cluster settings > Turn on vSphere HA
If resources across the hosts are not balanced, the VMs can automatically migrate to other hosts. It is also possible to configure affinity rules, because certain VMs may prefer certain resources, or so that a group of VMs runs on the same resource.
PowerFlex rack also supports Data Protection options that are provided by other
products.
• VMware Snapshots
• VMware High Availability (HA)
• VMware Fault Tolerance (FT)
Click here to learn more about the Integrated Data Protection options for the PowerFlex rack system. You will be redirected to a different course; you can bookmark it to take the course later.
Monitoring Tools
You can provision, maintain, and monitor PowerFlex using the following
management tools:
vSphere Plug-in: The plug-in enables the VMware admin to perform all the monitoring and management operations for PowerFlex within the VMware environment.
CLI: The PowerFlex CLI can be used to perform the entire set of configure, maintain, and monitor activities in a PowerFlex system.
REST Gateway: A REST API can be used to expose monitoring and provisioning through the REST interface. The REST gateway is installed as part of the PowerFlex Installation Manager (IM).
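As a quick illustration of the CLI and the REST gateway, the sketch below logs in with scli and queries the system, then authenticates against the REST gateway with curl. The addresses and credentials are placeholders; the token returned by /api/login is used as the password for subsequent REST calls.

    # PowerFlex CLI: log in on the primary MDM and report the overall system status
    scli --login --username admin
    scli --query_all

    # REST gateway: obtain an authentication token, then query the System objects
    curl -k -u admin:MyPassword https://<gateway-ip>/api/login
    curl -k -u admin:<token-from-login> https://<gateway-ip>/api/types/System/instances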
Current IOPS and bandwidth (KB/s) are displayed in the top-left corner of the Performance tab. The data is automatically refreshed every thirty seconds. Performance data is available when PowerFlex is deployed to a service.
Further down in the Dashboard, you can see the utilization of nodes. In the
example, 42% of the nodes, or 6 out of 14, are in use by a service. The remaining
nodes can be added later if more capacity is needed. They can be configured to
provide compute and storage to the existing service, or to a new service.
The Dashboard also shows PowerFlex storage usage. In the example shown,
there is only one PowerFlex cluster that only has 512 GB of storage provisioned.
PowerFlex Manager allows easy monitoring of Services and Resources through the
PowerFlex Manager UI.
Service Monitoring
From the Services section of PowerFlex Manager, you can view your services.
Selecting a service shows a diagram of all the resources in the service. You can
quickly see each resource along with its status. You can see details on each
resource by clicking it. You can then view logs from that resource. Some
maintenance tasks are also available. For example, you can place a node into
Service Mode which places the PowerFlex and VMware services into maintenance
mode.
vCenter PowerFlex
Nodes
PowerFlex Volumes
Resource Monitoring
In this view, you can quickly see information about each resource, including whether it is healthy and whether it is compliant with the IC level. The view also provides links to the management interfaces of each component or, in the case of a switch, its IP address so you can connect to it.
Click the hyperlinks for information on Resource Details and Node Details.
Resource Details
You can view details of a resource by selecting it in the Resources view and
clicking View Details. Here, you can see detailed information about the resource
including performance statistics.
Shown is the details page for a PowerFlex system. It shows its capacity and
historical IOPS data.
Node Details
The Resources page displays detailed information about all the resources and
node pools that PowerFlex Manager has discovered and inventoried. You can
perform various operations from the All Resources and Node Pools tabs.
Here, you can see that the Resource Details page displays detailed information
about the resource and associated components. Performance details, including
system usage, CPU usage, memory usage, and I/O usage are displayed.
Performance usage values are updated every five minutes.
Compliance Scan
PowerFlex Manager monitors current firmware and software levels and compares them to the active RCM, which contains the baseline firmware and software versions. It shows any deviation from the baseline in the compliance status of the resources. You can use PowerFlex Manager to update the servers to a compliant state. Using PowerFlex Manager, you can choose a default RCM or add a new RCM. You can update the firmware and software of shared resources
from the Resources page. A firmware or software update on a node that is part of a
cluster is successful only if the node is set to maintenance mode. PowerFlex
Manager sets nodes in a cluster to maintenance mode before performing an
update. To ensure that the node remains in maintenance mode, ensure that there
are other nodes available in the cluster to host the virtual machines of the node
being updated.
You can view RCM compliance by clicking a service in the Services window and
clicking the View Compliance Report button.
Update Resources
You can configure PowerFlex Manager to receive and display alerts from
discovered PowerFlex appliance components. The alert connector is available
through PowerFlex Manager. It sends alerts on the health of PowerFlex nodes and
PowerFlex software securely through Secure Remote Services. Secure Remote
Services routes alerts to the Dell EMC support queue for diagnosis and dispatch.
When using the alert connector with Secure Remote Services, critical alerts can
automatically generate service requests. Dell Technologies Services continuously
evaluates and updates which alerts automatically generate service requests.
During node discovery, you can configure the iDRAC on each node to automatically send alerts to PowerFlex Manager. PowerFlex Manager receives SNMP alerts directly from the iDRAC and forwards them to Secure Remote Services.
The web version of this content contains a diagram of the alert flow: alerts from the PowerFlex appliance pass through decryption, parsing, and diagnostic and IC/alerts analysis, resulting in service activity.
SNMP Monitoring
All SNMP traps should be directed towards the active SNMP monitoring system of
the customer. This provides proactive alerting for critical and warning level events.
These events include, but are not limited to, hardware failures requiring field
replacement and software faults that could negatively impact the stability of the
system.
To configure SNMP, specify the access credentials for the SNMP version you are
using and then add the remote server as a trap destination. PowerFlex Manager
and the network management system use access credentials with different security
levels to establish two-way communication. For SNMPv2 traps to be sent from a
device to PowerFlex Manager, you need to provide PowerFlex Manager with the
community strings on which the devices are sending the traps.
PowerFlex node cluster vCenter is used to monitor the health of the ESXi hosts
and the clusters to which they belong. All production VMs run in this vCenter and
must be monitored for resource usage, and performance. You can also monitor and
manage virtual networks, such as distributed virtual switches and port groups
settings. The PowerFlex node cluster uses datastores which are also monitored for
capacity usage and performance. These datastores are created on the PowerFlex
storage. vCenter provides some PowerFlex monitoring capability through the
PowerFlex plug-in.
vSphere Monitoring
Monitoring Resources
Monitor Resources
The vSphere statistics subsystem collects data on the resource usage of inventory
objects. Data on a wide range of metrics are collected at frequent intervals. The
data are processed and archived in the vCenter Server database. Statistical
information can be accessed through command line monitoring utilities or by
viewing performance charts in the vSphere Web Client.
Monitoring VMs
The vSphere Web Client lets you look at a virtual machine at a high level with the
Summary tab. It also enables you to monitor a specific aspect of a VM. The
Monitor tab gives you options to look at Issues, Performance, Tasks and
Events, Policies, and Utilization. The screenshot displays the recent events that
have occurred on the VM.
VM Performance Monitoring
Temporary spikes in CPU usage indicate that you are making the best use of CPU
resources. Consistently high CPU usage might indicate a problem. You can use the
vSphere Web Client CPU performance charts to monitor CPU usage for hosts,
clusters, resource pools, virtual machines, and vApps.
Host machine memory must be larger than the combined active memory of the virtual machines on the host. The memory size of a virtual machine must be larger than the average guest memory usage. Increasing the virtual machine memory size results in more overhead memory usage.
You can view the status of each host, or node, and its VDS from the vSphere Web
Client. To view VDS health, go to Network, select the VDS in the left pane, select
the Monitor tab, and then Health.
The green color of the port plugs shows that a port is active. One of the ports is down in the example shown.
Port is down
Monitoring vSAN health is critical for the proper functioning of the PowerFlex
Controller cluster. To validate the health of the PowerFlex Controller cluster,
perform the vSAN health test periodically. Select the cluster, and under the
Monitor tab, select vSAN > Health and then click the Retest button. In a healthy
system, all tests should pass successfully.
Retest Periodically
Events are records of user actions or system actions that occur on objects in
vCenter Server or on a host. Examples of events include license key expiry, VM
power on, or lost host connection. Event data includes details about the event such
as who generated it, when it occurred, and what type of event it is.
PowerFlex Monitoring
The PowerFlex Web UI is a new HTML-based web interface introduced with PowerFlex 3.5. The PowerFlex Web UI enables you to perform many standard maintenance activities and to monitor the health and performance of the storage system.
The tabs display different views and data that are useful to a storage administrator. You can review the overall status of the system, drill down to the object level, and monitor these objects.
Dashboard
• The Health section provides the system overview and a summary of any major
alerts. As displayed in the UI, the current system is healthy.
• The Performance section provides details of IOPS, Performance graph, used
Bandwidth details, Rebuild, and Rebalance status.
• The Capacity section describes the Usable Capacity and Data Savings information.
Configuration
Replication
The Replication option helps the administrator with monitoring the replication
process between systems. As displayed in the image, RCG status, Journal
Capacity, and Bandwidth can be monitored here.
Alerts
The Alerts view provides a list of the alert messages currently active in the system,
in a table format. They can be filtered according to alert severity, and object types
in the system.
MDM
The MDM view in the interface displays the cluster settings, its health state, and its
MDM details.
Warning and Error events may be transient conditions, or they may require admin
intervention.
Shown here is the structure of PowerFlex events as recorded in the system. Every PowerFlex event has six distinct fields: ID, Date, Name, Severity, Message, and Extended. These fields are shown for a particular event as displayed by the showevents.py command. The events are decoded in a similar manner within the Web UI.
The following is a breakdown of the event according to the fields in the event
record (as described above):
Events and Alerts are available in the Web UI. Events may also be viewed when logged in on the primary MDM using the showevents.py script that is provided as part of the PowerFlex installation. Shown here is the Web UI along with an example of the scli command syntax. The MDM stores the events in a persistent, private database file and also periodically archives them. You can use the grep command to search for specific errors or content within the event logs.
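For example, after logging in to the primary MDM over SSH, the script output can be piped through standard tools. The installation path shown is an assumption and the exact invocation may vary by release; locate showevents.py in your PowerFlex installation.

    # List the most recent events (the path is an assumption)
    /opt/emc/scaleio/mdm/diag/showevents.py | tail -n 50
    # Search the event log for capacity-related messages
    /opt/emc/scaleio/mdm/diag/showevents.py | grep -i capacity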
Alert Analysis
This example shows the recommended action that is documented in the Monitoring
Guide for the alert “CAPACITY_UTILIZATION_ABOVE_CRITICAL_THRESHOLD."
The Monitor Dell EMC PowerFlex v3.5 Guide documents every possible alert by its
uniquely identifying Name field. For each type of alert, it indicates the
recommended action.
PARAMETER DESCRIPTION
Severity 5 (Critical)
Hardware Monitoring
The iDRAC interface provides high-level health status of the various system
components.
The Dashboard uses green checkmarks to indicate that there are no health issues with the server. You can click each component to find more details.
The Dashboard also provides basic information about the system including model,
service tag, iDRAC MAC address, BIOS, and firmware version.
You can also launch a console session from the Dashboard. Power controls are
available here in the blue button under the Dashboard title. The tabs on the top of
the home page take you to specific details based on which action you would like to
perform.
System Information
From the System view, iDRAC allows you to monitor different hardware
components such as Batteries, CPU, and Power Supplies. You can drill down into the various components to find more information.
For example, information about fans and system temperature can be found under
Cooling. If a fan has an issue, its status goes to a warning or critical state. There
are similar details for the memory, network devices, and other components.
Storage Monitoring
Storage is examined under its own tab where you can see the high-level status of
the physical disks and drill down into detail on each device. If a device has an
issue, the status will change color.
You can configure how iDRAC handles different alerts on the Configuration,
System Settings page.
For example, you may want some alerts to generate an email or SNMP trap. Some
events can even be configured to perform an action when they occur, such as an
automatic reboot of the server.
To use email and SNMP, you must configure their settings, such as the SMTP
server and email address information, in the SMTP (Email) Configuration selection.
Switch Monitoring
Here is an example from a switch showing brief information about its interfaces. show interface brief is a good first command to run to see attributes such as Status and Speed. You can see the assigned VLAN, the port mode (access or trunk), and the port speed. Notice that the 40G trunk is used to connect to other switches. You can also see the reason that a port is down. When you see Administratively down, it means that the admin set the port to down or shutdown. The only way this port can become active again is if the admin purposely activates it with the no shutdown command on the port.
Showing the VLAN can provide a high-level view of VLAN definition. This example
is from an access switch. Notice there are port channels in use here, indicated as
Po in the Ports column. To get more details on the port channels, you can run a
show port-channel command.
SSH to the IP address of the access switch and run the commands mentioned below.
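The commands referenced in this walkthrough are summarized below. The port-channel form shown is one common variant and may differ slightly between the Cisco Nexus and Dell switch operating systems.

    show interface brief        # interface status, speed, VLAN, and access or trunk mode
    show vlan                   # high-level view of the VLAN definitions and member ports
    show port-channel summary   # details on the port channels (Po) listed in the VLAN output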
This is a hands-on CLI Walkthrough experience that is not part of the lab
exercises.
Log Collection
Application Logs
The application log entries can be exported to a comma-delimited CSV file for
troubleshooting using the Export All option. The Purge option allows deleting log entries based on date and severity.
Troubleshooting Bundle
VMware Logs
To troubleshoot the virtual environment, you need to access the VMware support
log bundle.
To view vCenter Server logs, select the vCenter Server and navigate to Monitor >
System Logs. Also, you can right-click the ESXi host or a VM and select Export
System Logs to start the Export Logs wizard.
Generally, you do not need performance data. There are more options under some
of the selection choices. Under the Storage selection, you have different elements
that are related to vSAN. If you are working in the PowerFlex cluster, you do not
need to select vSAN.
On the VCSA virtual machine, the vc-support.sh script can be run to collect the
vCenter log bundle. It records all logs and the information from the VCSA until the
time of the collection. This script creates a vcsupport.zip file in the /root directory of
the VCSA. Either use SCP to export the generated support bundle to another
location or download from https://<VCSAIP>:443/appliance/<support-bundle>.tgz
using root credentials.
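A minimal sketch of the VCSA collection described above; the destination host is a placeholder and the bundle file name varies by release.

    # On the VCSA shell: generate the vCenter support bundle
    vc-support.sh
    # Copy the generated bundle (vcsupport.zip per the text above) off the appliance
    scp /root/vcsupport.zip admin@jumphost:/tmp/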
The vm-support command can be run on the ESXi nodes to generate the
vSphere log bundle. The bundle is displayed in /var/tmp, /var/log, or the current
working directory. The vSphere bundle is the standard vmsupport bundle that is
collected in typical ESXi troubleshooting. The bundle contains all log files for a
specific node. This command creates a .tgz file that contains many log files related
to the ESXi node. You can also download the logs remotely with the URL:
https://<esxihost>/cgi-bin/vm-support.cgi.
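A minimal sketch of the ESXi collection, assuming SSH access to the node; the destination host is a placeholder and the generated file name pattern may vary.

    # On the ESXi node: generate the standard support bundle (written under /var/tmp by default)
    vm-support
    # Copy the generated .tgz bundle off the node
    scp /var/tmp/esx-*.tgz admin@jumphost:/tmp/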
Another important script for debugging, or for re-creating the collected data, is the reconstruct.sh script file. This file is created in the root directory of the support bundle. Certain commands in vm-support generate large files that consume more resources and are likely to result in a timeout error or take a considerable amount of time to execute. To control the creation of these larger files, vm-support breaks them down into fragments when adding them to the support bundle. Upon completion of the support tool, users can re-create the larger files by running the reconstruct.sh script at the top of the extracted bundle directory.
Linux log files can be collected using the emcgrab procedure. This procedure provides a comprehensive collection of key elements from the Linux storage-only node. Log in to the Linux console with the root user ID and retrieve the logs using the scp command. Collect the Linux operating system logs from the /var/log directory. Alternatively, the EMC Grab utility can also be used to collect the log files. For more information, search for EMC Grab at https://www.dell.com/support/home.
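A minimal sketch of a manual collection from a Linux storage-only node; the archive name and destination host are placeholders, and EMC Grab remains the comprehensive option.

    # Archive the operating system logs from /var/log
    tar czf /tmp/linux_node_logs.tgz /var/log
    # Copy the archive to a jump host for analysis
    scp /tmp/linux_node_logs.tgz admin@jumphost:/tmp/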
PowerFlex installation logs can be collected using the Show server log option in the
vSphere web client. Use copy and paste to save the information to a file.
Troubleshoot any PowerFlex installation issues with the server log in the
PowerFlex plug-in.
1:
System Events can be downloaded from the Maintenance section after logging into
iDRAC. Lifecycle Log files can be viewed online.
Performing a backup saves all user-created data to a remote share from which it
can be restored. Perform frequent backups to guard against data loss and
corruption.
The Backup and Restore page displays information about the last backup
operation that was performed on the PowerFlex Manager virtual appliance.
Information in the Settings and Details section applies to both manual and
automatically scheduled backups.
Listed are steps to locate the deployment logs and log examples:
• To locate the deployment log directory:
− You can find the service/deployment ID from the URL in the address bar.
Online Help
Protection Domain
A Protection Domain is a group of nodes (servers) or SDSs that provide data
isolation, security, and performance benefits. A node (with SDS) can only
participate in one Protection Domain at a time.
RPO
RPO defines the maximum data loss, in units of time, that the client is willing to tolerate. The length of the data collection and data transmission intervals is bounded by the RPO. In PowerFlex release 3.5, the smallest RPO offered is 30 seconds; this might be reduced in subsequent releases.
Storage Pool
A storage pool is a subset of physical storage devices in a Protection Domain.
Each storage device belongs to only one Storage Pool. When a PowerFlex volume
is configured, the volume contents are distributed over all the devices residing in
the same Storage Pool.
VMware Snapshots