PowerStore 3.0 Administration - File Provisioning - Participant Guide
File Storage
Overview diagram: a NAS server hosted in the NAS container provides file storage to clients through SMB shares and NFS exports.
PowerStore File also enables clients to access data over FTP and SFTP.
NAS Services
There is no designated hardware for file services in the PowerStore platform.
File storage is provided through deployed NAS services.
NAS services are only available on PowerStore T models deployed in the Unified
deployment mode. They are not supported in the Block Optimized deployment mode.
The NAS container is installed on both nodes of the primary appliance. The nodes
are active/active.
In a multiappliance cluster, the NAS container always runs on the appliance on
which it was installed.
Service Deployment
The Initial Configuration Wizard (ICW) triggers the deployment of NAS services
after a successful PowerStore cluster configuration.
With the File services successfully installed, you can create NAS servers and file
systems.
Network Configuration
The NAS installation automatically creates a NAS cluster with an isolated network
that uses IPv6 addressing. The NAS cluster uses this network for the heartbeat that
detects data service failures.
The NAS cluster network configuration is displayed on the Settings page of
PowerStore Manager:
1. Select Network IPs.
2. Select the ICM tab.
3. The page displays the IPv6 network address for the NAS cluster, NAS nodes,
and NAS Node Serviceability.
NAS Server
Diagram: within the NAS container, a NAS server provides file storage by giving Windows clients access to a file system through an SMB share (SMB protocol) and Linux clients access through an NFS export (NFS protocol).
NAS servers are logically independent, software-defined file sharing servers.
Each NAS server is a separate virtual file server.
Each NAS Server allows network access to PowerStore-hosted files and folders
via network file sharing protocols.
Each NAS Server can be associated with one or more file systems.
NAS supports the following protocols: SMB, NFSv3, NFSv4, FTP, and SFTP.
Access can be provided to UNIX or Linux and Windows clients simultaneously.
NAS Servers can be secured and isolated logically at the file system, network,
and authentication level.
VLAN separation in NAS server context supports network multitenancy.
Overview of file systems. Each file system has:
A separate configuration with independent capacity
File access protocols that are configured on the NAS server
One or more shares (SMB shares or NFS exports)
A Protection Policy (local protection with snapshots)
Remote async replication
Before provisioning a file system, create a NAS Server. NAS servers are
automatically assigned on a round-robin basis across the available nodes. The
system determines in which appliance (running NAS services) the NAS server is
created.
Sharing Protocols
Select which storage protocols the NAS server supports for NAS client access to
the provisioned file storage.
Select the protocol type:
If SMB is selected, the Windows Server Settings tab is available and is displayed
in the next step.
Select the Windows server type:
Tip: The UNIX Directory Services tab is only available if at least one
of the NFS protocols is being configured in the NAS server.
Enable DNS
For Active Directory services, DNS configuration is required. Add at least one DNS
server for the domain.
User Mapping
The User Mapping page is displayed if you select SMB and join an Active Directory
domain. Keep the default, Enable automatic mapping for unmapped Windows
accounts/users; automatic mapping is required when joining an Active Directory
domain.
Protection Policy
Summary
The system assigns the NAS interface to the PowerStore Ethernet port
(BondEnclosure-bond0) that you selected.
Click CREATE NAS SERVER to start the job. The job starts in the background.
In the example, a NAS server was created but is not yet configured with a sharing
protocol.
NAS Cluster
NAS maintains its own cluster, which is independent from the PowerStore cluster.
Each NAS node runs in a NAS Docker container.
In the example, the PowerStore system distributed three NAS servers with their file
systems (FS) across the NAS nodes.
Diagram: a PowerStore appliance with Node A and Node B, each running CoreOS with a NAS container; NAS Server 1 with file systems FS1 and FS2 runs in one of the NAS containers.
Each NAS Server is assigned a Preferred Node and a Current Node at the time of
the NAS server creation.
NAS Network Heartbeat enables data service high availability (HA), which results in
NAS server failover when a threshold is reached. The heartbeat is conducted at
fixed intervals of 1 second.
NAS HA provides:
NAS fault tolerance
One of the NAS nodes is elected as NAS Cluster Controller to orchestrate all the
NAS control path operations. NAS Storage Heartbeat is used to provide cluster
controller HA.
If the network heartbeat fails for more than five seconds, the node is considered
failed and high availability (HA) is triggered. Five seconds represents three times
the send interval plus a two-second allowance for network fault tolerance response
time.
Diagram: after the failure of NAS node A, the NAS container on Node B hosts NAS Server 1 (FS 1, FS 2) and NAS Server 2 (FS 3, FS 4).
In the example, the PowerStore system moves the NAS server 1 from faulty NAS
node A to the backup NAS node B.
Manual Failback
The NAS service supports only manual failback of NAS servers after node recovery.
On-demand manual load balancing of NAS servers between nodes is also
supported.
The NAS Servers page in PowerStore Manager displays the Current Node and
Preferred Node assignments for each NAS server.
From the NAS Servers page, perform the following steps to manually fail back a
NAS server to the preferred NAS node:
1. Select the NAS server from the list.
2. Select Move NAS Server from the MORE ACTIONS menu.
3. Verify the NAS server Source Node and Destination Node.
4. Click UPDATE to start the operation.
NAS Services
Supported SMB protocol versions: SMB1, SMB2, and SMB3 (up to 3.1.1).
NAS servers that support multiprotocol file sharing or are joined to an Active
Directory (AD) must be configured with DNS support.
The SMB share is created and associated with the file system. The SMB share
represents a mountable access point through which Windows clients can access
file system resources.
Unlike NFS clients, SMB clients do not need to be granted access to the SMB
share from the PowerStore interface.
Click the name of the NAS server to show its properties. Settings that must be
configured for SMB sharing include:
DNS
For Directory Services, DNS configuration is required. To add at least one DNS
server, select NAMING SERVICES and the DNS option. Complete the form and
click APPLY to save the configuration.
Standalone Server
To enable support for Windows shares on the NAS server, open the SHARING
PROTOCOLS card, and select the SMB SERVER tab. Follow these steps to
configure the NAS server as a standalone SMB server:
Domain-joined Server
To configure the NAS server as a domain-joined SMB server, open the SHARING
PROTOCOLS card, select the SMB SERVER tab, and follow these steps:
Access to DNS and NTP services must be configured before joining the server to
the domain. Time skew1 must not exceed 5 minutes, or the process fails.
1
Difference between the readings of the clock in PowerStore nodes and the
domain Active Directory server.
User Mapping
User mapping requires that naming services are configured on the NAS server with
either Unix Directory Services or the upload of Local Files.
If the Windows Server Type is set to Join Active Directory Domain, you must
select Enable Automatic Mapping for unmapped Windows accounts/users.
NAS servers catalog, organize, and optimize read and write operations to the
associated file systems.
The type of file system that you can create is determined by the file sharing
protocols that are enabled for the NAS server.
Selecting File Systems displays the File Systems menu. Click CREATE to display
the File System Type menu:
1. The table identifies which sharing protocols are enabled in each NAS server. To
support the access of Windows clients, select a NAS server with the SMB
protocol enabled.
2. Optionally choose from the following Advanced SMB Settings:
2
Setting is required when using SMB shares to store and access database files.
The storage system performs immediate synchronous writes and reduces the
chances of data loss or file corruption in various failure scenarios. The option can
have a big impact on performance, and should only be enabled if file storage is
used for database applications.
3
Allows SMB clients to buffer file data locally before sending to the system. SMB
clients can then work with files locally and periodically communicate changes to the
storage system. Enabling this option is recommended, unless your application
handles critical data or has specific requirements that make this mode of operation
unfeasible.
4
Enables applications to be notified using the Windows API when files are written.
5
Enables applications to be notified using the Windows API when files are
accessed.
FS Details
Enter the file system details in the next step of the wizard:
The retention period is set on an individual file at creation time but can be modified
later. The settings configured during provisioning only define the minimum,
maximum, and default retention periods.
SMB Share
1. Name: The name provided for the share. SMB share names must be unique at
the NAS server level per protocol.
a. The '$' sign at the end of the share name in the image prevents the share
from being found in a share search on the network (essentially a hidden share).
2. Description (Optional): Enter a description that can help identify how the SMB
share is used.
3. Offline Availability: Configure the client-side caching of offline files.
None6 (default)
Manual7
Programs8
Documents9
4. Advanced SMB Settings: Configure advanced settings that are supported for
the SMB protocol that the NAS client uses to access the share.
6
Client-side caching of offline files is not configured.
7
Files are cached and available offline only when caching is explicitly requested.
8
All programs and files that clients open from the share are automatically cached
and available offline. The option is optimized for performance and is recommended
for executable programs.
9
All files that clients open from the share are automatically cached and available
offline. Clients open these files from the share that they are connected to. This
option is recommended for files with shared work.
10
Allows persistent access to the share without loss of the session state. Enable
continuous availability for a share only when you want to use Microsoft Server
Message Block (SMB) 3.0 protocol clients with the specific share.
11
Encrypts data in-flight between clients and the system. SMB encryption is
supported by SMB 3.0 clients and above.
12
Restricts the display of files and folders based on the user’s access privileges.
Administrators can always list all files.
PowerStore creates the SMB share paths based on the NAS server network
address (hostname or IP address), and the share name. NAS clients can access
the file system using the SMB share path.
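For example (the NAS server name and share name here are illustrative, not taken from the lab environment), a share named Sales on a NAS server registered in DNS as nas01.hmarine.test is reached at the UNC path \\nas01.hmarine.test\Sales.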
Protection Policy
Optionally select a policy to protect the file system, or add a protection policy after
creating the file system:
A protection policy must be created before associating it with the file system. If the
selected policy contains both snapshot and replication rules, the replication rules
are ignored.
13
Allows users to access data stored on a remote NAS Server without traversing
the WAN. Copies content from the share and caches it at branch offices.
Summary
Review the file system configuration on the Summary page. Click BACK to make
changes, or conclude the operation.
UMASK is configured automatically on a per SMB share basis, and has a default
value of 022.
Click CREATE FILE SYSTEM to start the job. The job starts in the background,
and the GUI shows an update in the Actions icon.
Diagram: two NAS servers (server1 and server2) running in the NAS containers host file systems FS1/FS2 and FS3/FS4. A Windows server maps the Support and Engineering SMB shares over Ethernet, for example \\server2\engineering.
In a PowerStore unified storage configuration, file services are enabled on both nodes on the
primary appliance.
Windows clients can mount PowerStore SMB shares for file-based storage access.
In the example, there are two NAS servers. Each NAS server is sharing one file
system:
FS1 has only one SMB share (Support) mapped to a Windows server.
FS3 has two SMB shares (Engineering and Sales). Both are mapped to the
same Windows server.
Client access and user-level permissions are defined at the NAS client side.
Different users and user groups are granted access to the shared folders and
content.
The PowerStore active/active architecture enables file storage load balancing and
high availability.
Windows CLI
To map the SMB share to the Windows client, use the operating system net use
command. Windows CLI command syntax:
net use [device]: \\[host_name]\[SMB share]
The SMB share path is the combination of the NAS server network name (host
name or address) and the SMB share name.
The use of the hostname is recommended. When mapping to the share, specify the
full Universal Naming Convention (UNC) path of the SMB share on a NAS server.
The example shows the mapping of the SMB share to the NAS client Win6 (IP
address 192.168.1.6).
The NAS server fully qualified domain name (FQDN) was used to map the SMB
share Top$ to the local drive X: on Win6.
Verify the mapped network drives using net use:
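A minimal command sketch of the mapping and verification described above, assuming an illustrative NAS server FQDN of nas01.hmarine.test, the hidden share Top$, and the local drive X: from the example:

net use X: \\nas01.hmarine.test\Top$
net use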
Next, set the user and group permissions on its directories and files.
File Explorer
Mapping the SMB share to the NAS client can also be performed from the
Windows UI.
In the example, the SMB share Top$ was mapped to the local drive Z: of NAS
client Win6 (192.168.1.6).
Directory Structure
After mapping the SMB shares to the client, create the directory and file structure
for the share.
The SMB file system allows the creation of multiple shares with the same local
path.
Client-side access controls for different users can be configured.
Shares within the file system all access the same content.
In order to enable SMB shares within the same file system to access different
content:
Create directories on the Windows drive that is mapped to the file system.
Create corresponding shares using PowerStore.
PowerStore Manager
Configure user-level access of different content within the same file system either
by using Access Control Lists (ACLs) or configuring different shares.
To create shares, launch the Create SMB Share wizard from the SMB SHARES
tab of the File Systems page.
In the SMB Share Details section of the wizard, the SMB share names are
associated with different local paths (mountpoints).
The SMB share path represents the combination of the NAS server network
address (hostname or IP address) and the share name.
In the example, the Hmarine_Sales SMB share provides access to the Sales
directory that was created in the file system.
File Explorer
After mapping the SMB share to a local drive, you can set client-side access
controls by opening the properties of the local drive in File Explorer.
The Security tab of the properties window allows the configuration of the SMB
share access permissions.
The tab displays the users or groups who have been granted access to the share
and the enabled permissions.
In the example, the administrator user and the Westcoast Sales group were added
with full access control (read, write, list, execute). The Everyone group permissions
were changed.
SMB shares may also be managed using the Microsoft Management Console
(MMC) from the NAS client. Once MMC is connected to the NAS server, expand
the shared folders and verify the PowerStore file system SMB shares and local
paths.
The New Share option launches the Create a Shared Folder wizard. The wizard
configures a share name, associates a local path, and sets user permissions.
The Shared Folders plug-in also provides support for setting share permissions.
Share permissions are often used to set folder permissions on FAT32 file
systems and other systems that do not use NTFS.
Share permissions and NTFS permissions can exist simultaneously. If both
types of permission exist on a single folder, the more restrictive permissions are
applied.
Simulation:
Create NAS Server with SMB Support.
Create a file system with an SMB share.
Create a lower-level SMB Share of a file system.
Create an SMB share of a file system using
Windows MMC.
o Use the Microsoft Active Directory for authentication. Use Windows directory
access for folder permissions.
o A shared file system can be mapped to the Windows system using File
Explorer or Windows CLI.
o Windows CLI net use command can be used to access shares. Syntax:
net use [device]: \\[hostname]\[SMB share]
o SMB shares may also be managed using the Microsoft Management
Console (MMC) from the NAS client.
NAS Services
Supported NFS protocols are NFSv3, NFSv4, NFSv4.1, and Secure NFS.
NAS Servers
To open the NAS server properties in PowerStore Manager, go to Storage > NAS
Servers.
Click the NAS server to show its properties. Settings that can be configured are
available on the NAMING SERVICES and SHARING PROTOCOLS tabs,
including:
Enabling DNS services
Configuring naming services with UNIX Directory Services (LDAP/NIS) or local
files
Defining the supported NFS protocol versions
DNS
To add DNS servers, select NAMING SERVICES and the DNS option. Complete
the form and click APPLY to save the configuration.
NIS
To configure naming services with NIS, select NAMING SERVICES and the UDS
(UNIX Directory Services) option. Select NIS from the drop-down. Add the domain
and IP address and click APPLY to save the configuration.
LDAP
To configure naming services with LDAP, select NAMING SERVICES and the UDS
(UNIX Directory Services) option. Select LDAP from the drop-down. Complete the
form and click APPLY to save the configuration.
Note: Enable either NIS or LDAP services. The LDAP configuration must adhere to
either the Active Directory, RFC 2307, RFC 2307bis, or iPlanet schemas. If not,
LDAP does not function properly.
4. Click RETRIEVE CURRENT SCHEMA to view and edit the ldap.conf file. All
containers that are specified in the file must reference a location that is valid
and exists in the LDAP configuration. To upload the updated configuration, click
UPLOAD NEW SCHEMA and select the edited file.
5. Define if the LDAP protocol must use SSL (LDAPS) for secure network
communication. If enabling LDAPS, upload a trust certificate.
Local Files
Local files can be used instead of, or in addition to, DNS, LDAP, and NIS directory
services. To configure naming services with local files, select NAMING SERVICES
and the LOCAL FILES option. Complete the form and click APPLY to save the
configuration.
1. Download the template using the icon next to the file type.
a. Passwd resolves usernames to User IDs.
If combined with other naming services (NIS or LDAP), the storage system queries
the uploaded local files first.
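For reference, the passwd local file typically follows the standard UNIX passwd format; the entry below is an illustrative sketch rather than content from the downloaded template:

swoo:x:1001:1001:Sales user:/home/swoo:/bin/bash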
Sharing Protocols
To enable NFS support on the NAS server, select SHARING PROTOCOLS and
the NFS SERVER option. Complete the form and click APPLY to save the
configuration.
NAS Server
With a NAS server running on the storage system, a file system can be
provisioned. The NAS server catalogs, organizes, and optimizes read and write
operations to the associated file systems. The file sharing protocols that are
enabled for the NAS server determine the types of file systems that you can create.
There are two ways to provision file systems using the PowerStore Manager
interface:
1. From the Storage menu, click the File System Add icon to launch the Create
File System wizard.
2. Select File Systems and click CREATE on the File Systems page.
Associate the file system with a NAS server. The table identifies which sharing
protocols are enabled in each NAS server. To support the access of Linux, UNIX,
and ESXi hosts, select a NAS server with the NFS protocol enabled.
The SMB Settings are grayed out because the selected NAS Server only supports
NFS.
FS Details
Enter the file system details, including name, description, and size. The size
represents the quantity of storage that is subscribed for the file system. The
minimum is 3 GB, and the maximum is 256 TB. 1.5 GB per file system is always
allocated for metadata.
NFS Export
Each NFS export must have a unique local path, which NAS clients use to access
the file system. PowerStore automatically assigns this path to the initial export
created within a new file system. The local path name is based on the file system
name.
Optionally create an NFS export for the file system during file system creation. The
NFS export name must be unique at the NAS server level per protocol. The NFS
export name is combined with the NAS server IP address to provide an NFS export
path for the file system.
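For example, an NFS export named engineering on a NAS server with the IP address 192.168.3.106 (the address used in the diagram later in this topic) results in the export path 192.168.3.106:/engineering.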
o Kerberos: The Kerberos mode (krb5) allows clients using any Kerberos
flavor to connect.
o Kerberos with Integrity: Allows clients that have Kerberos with data
integrity (krb5i) or encryption to connect.
o Kerberos with Encryption: Allows clients that have Kerberos with
encryption (krb5p) enabled to connect.
Define the default access level for NAS clients (hosts) not in the export access
list. The access level options are:
No Access: Access is denied to user or client.
Read/Write: Users have read/write access to the export.
Read-Only: Users have read-only access to the export.
Read/Write, allow Root: Users have read/write access, and root has root
privileges on the export.
Read-Only, allow Root: Users, including root, have read-only access to the
export.
Add a host to include NAS clients in the export access list. Add hosts using
hostnames, IP addresses, subnets, netgroups, or domains.
You can also assign NAS client access to an NFS export by selecting More
Actions and Import Hosts. The option provides instructions on how to create a
CSV file with a list of host network addresses and access types. The CSV file can
be imported to the storage system.
Protection Policy
Optionally select a policy to protect the file system, or add a protection policy later.
If the selected policy contains both snapshot and replication rules, the replication
rules are ignored.
Summary
Review the file system configuration on the Summary page. You may click BACK
to edit, or CREATE FILE SYSTEM to continue.
Linux and UNIX clients can mount NFS exports for file-based storage access. NFS
exports can also be mounted as NFS datastores for VMware ESXi hosts. NAS
client access is defined by the NFS access control settings of the NFS export.
In the example, there are two NAS servers. The first NAS server is sharing one file
system and the second is sharing two file systems:
The Engineering NFS export is mounted to the UNIX server.
The Support NFS export is mounted to the Linux server.
The Datastore NFS export is mounted to the ESXi host as an NFS datastore.
The PowerStore active/active architecture enables file storage load balancing and
high availability.
Diagram: two NAS servers running in the NAS containers share file systems FS1 and FS3 over Ethernet. The UNIX server mounts 192.168.3.106:/engineering, the Linux server mounts 192.168.3.107:/support, and the ESXi host mounts 192.168.3.107:/datastore as an NFS datastore.
Manage access of Linux, UNIX, and ESXi hosts to a file system on the NFS
Exports section of PowerStore Manager.
Select Storage > File Systems > NFS Exports. Follow these steps to configure
NAS client access to the NFS export:
1. Modify host access for the selected NFS export:
Minimum security
Default access level
Access level of one of the existing NAS clients
Add a host and configure its access level
Import a list of hosts with defined access levels
2. The Import host list pane opens and displays a template of the file data
organization.
Create the CSV file with hostname or IP address, and the access type.
Then click IMPORT CSV FILE to upload the file. See an example of a CSV
file14.
14
"Name/Network Address","Access Type"
"192.168.1.101","READ_WRITE"
"192.168.1.102","READ_ONLY"
"192.168.1.103","READ_WRITE_ROOT"
"192.168.1.104","No_ACCESS"
"192.168.1.105","READ_WRITE"
See the commands to connect Linux and UNIX clients to NFS exports.
See the vCenter procedure to connect ESXi host access to NFS exports.
Simulation:
Create NAS Server with NFS Support.
Create a File System with an NFS export.
Create a lower-level NFS export of a file system.
From Storage > File Systems in PowerStore Manager, view alerts, used size,
capacity, associated NAS server, and protection policies. Modify one file
system at a time.
Click the name of the file system to view its details.
Select the check box of the file system and then click MODIFY.
From Storage > File Systems, select a file system and select MODIFY.
In the Properties pane, modify the Description or Size. A file system cannot
be renamed.
Changing the size increases or decreases the file system capacity.
For file systems shared using the SMB protocol, changes to advanced settings,
such as Sync Writes and notification on writes, are available.
From Storage > File Systems, select the file system name to view its details:
1. To modify the properties:
View and change the properties of a file system from its details panel
FLR protects files from modification or deletion through SMB, NFS, SFTP, or
FTP access.
FLR is also known as Write Once, Read Many (WORM).
Files within an FLR enabled file system have different states: Not Locked,
Locked, Append Only, and Expired.
Availability
FLR Types
FLR-C protects file data that is locked from content changes that are made
by SMB, NFS, and FTP users, regardless of their administrative rights and
privileges.
With FLR-C, storage administrators cannot delete file systems that contain
locked files.
Dell Support cannot delete an FLR-C file system with locked files.
FLR-C enabled file systems comply with the Securities and Exchange
Commission (SEC) rule 17a-4(f) for digital storage.
o Required by companies that must comply with federal regulations.
FLR-C enabled file systems include a data integrity check for written files.
Considerations
FLR Interoperability
Snapshots
FLR-C does not support snapshot restoration.
o Restore snap operations are supported on FLR-E file systems.
Snapshot refresh operations must be from the same file system FLR type (FLR-
C or FLR-E).
Replication
If a replication source file system is FLR-E or FLR-C enabled, then the
replication destination file system must have the same file-level retention.
Clones
Managing FLR
Storage administrators can create and delete file systems that have FLR enabled.
They can also view and change FLR features.
The exact actions that can be performed vary, depending on whether the FLR
mode of a file system is off, FLR-E, or FLR-C.
During file system creation, set the FLR type, and file retention period details.
Auto-Lock
Auto-Delete
Autolock and Autodelete:
Are disabled by default.
Cannot be specified during file system creation.
Can be modified at any time after file system creation.
To view or change the Autolock and Autodelete settings, go to Storage > File
Systems, and then click the file system. Then click the SECURITY & EVENTS tab,
and the FILE-LEVEL RETENTION subtab.
With Automatic File Locking, the system locks files automatically if they are not
modified within a specified time.
When a file is automatically locked, it is locked for the default retention period.
Files that are in append-only mode are also subject to automatic locking.
The Policy Interval specifies how long to wait after files are modified before
they are automatically locked.
Automatic File Deletion automatically deletes locked files after their retention
date has expired.
A weekly process scans the file system to search for expired files.
The first scan happens seven days after the feature is enabled.
Enabling a file system for FLR is only done during the creation of the file system in
the Create File System wizard. By default, FLR is set to Off. Select either
Enterprise to enable FLR-E, or select Compliance to enable FLR-C.
Write verification may have a performance impact due to the read back
operation.
Write verification functionality is not available for FLR-E.
To enable the write verification function, log in to the PowerStore service account
and use the svc_nas_tools command.
Current value of FLRCompliance is 0, which indicates that the write verify function
is disabled.
svc_nas_tools service command output that shows write verification function as disabled.
svc_nas_tools service commands to enable write verification and confirm that it is enabled
FLR-C Restrictions
o Factory reset deletes all files from the system, regardless of retention
status.
With FLR-C, a file that has been locked with unlimited retention can never be
deleted.
With FLR-E, a file that is locked with unlimited retention can be updated with a
specific retention date later.
FLR-C does not support snapshot restores.
FLR-E does support snapshot restores.
There are two ways to unmount an SMB share from the Windows client:
1. From inside the share, change the access to a directory outside of the directory
tree.
2. Use the net use command to verify the status of the share and the drive it is
mounted to.
3. Run the command net use <drive:> /delete to unmount the SMB share.
4. Run the net use command again to verify that the drive is unmounted.
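A minimal example of steps 2 through 4, assuming the share was mapped to the local drive X: as in the earlier mapping example:

net use
net use X: /delete
net use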
To unmount the NFS export from the Linux or UNIX client, use the operating
system umount /<mountpoint> command.
To unmount the shared file system from the client, use the mount point that was
used to mount it.
In the example, the NFS export root that was mounted to the nfs folder was
unmounted from the Linux6 system.
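A minimal command sketch for this example, assuming the export was mounted on the /nfs mount point as in the earlier mounting sketch:

umount /nfs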
To remove the NAS client access to an existing NFS export, go to the File
Systems page and select the NFS EXPORTS tab:
1. Select the NFS export.
2. Select Host Access from the MORE ACTIONS menu. The Host Access slide-
out panel is launched.
3. Select the checkbox of the NAS client to remove.
4. Click DELETE. The system displays a message that the host was removed.
5. Click APPLY to commit the changes.
For this example, NAS client access to an existing NFS export is removed.
However, if the Default Access is set to a level other than No Access, the removed
host still receives that default access level.
SMB Shares
To remove an SMB Share, go to Storage > File Systems > SMB SHARES tab:
1. Select the SMB share.
2. Click DELETE. The system displays a message.
3. Click DELETE again to commit the operation.
This example shows that the Hmarine_Eng SMB share is deleted from the
PowerStore cluster.
NFS Exports
To remove an NFS export, browse Storage > File Systems > NFS EXPORTS tab:
1. Select the NFS export.
2. Click DELETE. The system requests confirmation.
3. Click DELETE again to commit the operation.
File system snapshots are:
– Read-only
– Accessed through Previous Versions or the .snapshot directory
Expiration:
File system snapshots can be set to not expire by choosing the No Automatic
Deletion option.
The Protection Policy has the following naming scheme to create the snapshots:
Snapshot Rule: Name_Resource Name_Timestamp with nano-time
Add a protection policy with a snapshot rule to a file system to schedule a snapshot
for that file system. Click here for steps to apply a protection policy to a file
system.
From PowerStore Manager > Storage > File Systems > [File System] > Protection
Card > CREATE SNAPSHOT.
Creating a snapshot
Restoring a file system from a snapshot returns that file system to the state that it
was in when the snapshot was taken.
When refreshing a file system snapshot, the contents of the snapshot are overwritten
with the current contents of the file system.
Refreshing a snapshot
Use quotas to track and limit drive space consumption at the file system or
directory level. Enable or disable quotas on SMB, NFS, SFTP, and multiprotocol
file systems at any time. Configure them during non-peak hours to avoid impacting
file system operations. You cannot create quotas for read-only file systems.
To set default quotas on a file system, go to Storage > File Systems. Click the file
system.
File Systems
Before configuring user and tree quotas, first enable quotas and create a default
quota policy. In the File System properties, go to the QUOTAS card. Click
PROPERTIES to set the default quota. Set the default from either the USER
QUOTA or TREE QUOTA tab.
To track space consumption without setting limits, set Soft Limit and Hard Limit to
0, which indicates no limit.
From the USER QUOTA tab, click ADD to add a user quota.
Important: If you change the limits for a tree quota, the changes take
effect immediately, without disrupting file system operations.
A Tree Quota limits the total amount of storage that is consumed on the directory
tree. Use tree quotas to:
Set storage limits on a project basis. For example, establish tree quotas for a
project directory that has multiple users sharing and creating files in it.
Track directory usage by setting the tree quota hard and soft limits to 0 (zero).
From the TREE QUOTA tab, click ADD to set a tree quota.
1. Optionally toggle the switch to Enforce User Quota on this tree quota.
2. Enter a valid Path for the directory to which this quota will apply.
3. Set a Grace Period as a specific period of time or as Unlimited. The Grace
Period applies to the Soft Limit.
4. Set Soft and Hard Limits.
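As an illustrative example of these settings (the values are not from the lab environment), a tree quota on the path /Sales with a 20 GB soft limit, a 7-day grace period, and a 25 GB hard limit lets users exceed 20 GB for up to 7 days; after the grace period expires, writes are blocked until usage drops below the soft limit, and the 25 GB hard limit can never be exceeded.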
Manage NDMP
NDMP Backups
PowerStore supports:
Three-way NDMP, which transfers both backup data and metadata over the
LAN
Two-way NDMP is not supported.
Both full and incremental backups
Components:
Primary Storage is the source system to be backed up, for example,
PowerStore.
Data Management Application (DMA) is a backup application that coordinates
the backup sessions, for example, NetWorker.
Other supported backup vendors include: Avamar with ADS/DD, CommVault
with NDMP, IBM Spectrum Protect, Micro Focus Data Protector, Veritas
NetBackup, and Veritas Backup Exec.
NDMP Configuration
Monitor NAS
View capacity and performance metrics for file systems and NAS servers in
PowerStore Manager.
View metrics for file system capacity. Capacity metrics are collected every five
minutes and rolled over hourly and daily.
NAS server capacity includes the sum of all its file systems. In the example, the
NAS server holds two file systems. Each of them is provisioned with 5 GB, of which
3.5 GB is free.
NAS Server
Customize the metrics and timeline.
File System
Monitor individual file system performance by selecting a file system from the
Storage > File Systems tab. Select the PERFORMANCE card.
Appliance
From the Appliance page, with the FILE tab selected, select a node and a protocol
to view protocol performance on that node.
PowerStore uses the Common Event Publishing Agent (CEPA) to protect NAS
Servers and File Systems against cybersecurity threats.
PowerStore uses CEPA to register to receive event notifications with context in
one message.
CEPA runs on Windows or Linux.
PowerStore sends event notifications from SMB and NFS to the event server.
Event servers contain event configurations and send event notifications to event
pools.
Third-party cybersecurity applications monitor events to identify patterns
indicating a ransomware attack.
Diagram: event servers notify event publishers, which notify event pools of cybersecurity threats.
An Events Publisher and a Publishing Pool are required. Once they are set up, they
are mapped to the NAS Server.
NAS Settings
Create an event publisher and an event pool on the NAS Servers > NAS
SETTINGS tab.
Publishing Pool
Events Publisher
After the pool has been created, select it, and click NEXT to configure the events
publisher. Create more than one pool to customize how different NAS servers are
monitored.
1. Select policies in case the PowerStore node cannot send events to the CEPA
server.
2. Set a different HTTP port or RPC, a different server account, heartbeat, and
timeout values.
3. Click CREATE EVENTS PUBLISHER.
1. From the NAS Servers page, select the SECURITY & EVENTS card.
2. Select the EVENTS PUBLISHING tab.
3. Select Enabled and choose the Events Publisher from the drop-down. Enable
SMB or NFS.
4. Click APPLY.
NAS Capabilities
File support is only available on PowerStore unified deployments and is configured
only on the PowerStore cluster primary appliance.
Storage administrators cannot delete FLR-C file systems that contain locked
files.
Dell Support cannot delete an FLR-C file system with locked files.
FLR-C enabled file systems are compliant with the Securities and Exchange
Commission (SEC) rule 17a-4(f) for digital storage, which is required by
companies that must comply with federal regulations.
FLR-C includes a data integrity check for files that are written to the file
system.
FLR Interoperability:
Snapshots
FLR-E supports snapshot restoration.
FLR-C does not support snapshot restoration.
Snapshot refresh operations must be from the same file system FLR type, either
FLR-C or FLR-E.
Replication
If a replication source file system is FLR-E or FLR-C enabled, then the
replication destination file system must be the same type of file system.
Clones
State – Description
Not Locked – All files start as not locked. A not locked file is an unprotected file that
is treated as a regular file in a file system. In an FLR file system, the state of an
unprotected file can change to locked or remain as not locked.
Locked – Also known as Write Once, Read Many (WORM). Locked files cannot be
modified or deleted. The file remains locked until its retention period expires. Files
can be locked manually or may be automatically locked by the system or FLR
Toolkit. A locked file can have its retention period extended, but not shortened.
Append Only – Existing data cannot be modified or deleted in an append-only file,
but data can be added to it. An example of an append-only file is a log file that
grows over time. The file can remain in the append-only state forever, or be locked
later.
Expired – An expired file was previously locked, but the retention date has passed.
An expired file can only be re-locked or deleted from the file system; it cannot be
changed to append-only unless it is empty. Data in expired files cannot be
modified.
Linux command:
When mounting the shared file system to the client, use the NFS export path. The
NFS export path is the combination of the NAS server network address and the
path to the target NFS export.
In the example, a folder called nfs was created in Linux6. The NFS export named
training (file system NFS_FS01) was mounted to this directory.
After mounting the NFS export to the host, set the share's directory and file
structure. Then set the user and group permissions on its directories and files.
In the example, a folder called support was created in the nfs directory. The folder
permission bits were modified and the ownership changed to user swoo.
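A minimal command sketch of this sequence, run as root on the Linux client; the NAS server address 192.168.1.50 and the permission mode are illustrative, while the export name training and the user swoo come from the example:

# Create the mount point and mount the NFS export
mkdir /nfs
mount -t nfs 192.168.1.50:/training /nfs
# Create a directory in the export and set its permissions and ownership
mkdir /nfs/support
chmod 775 /nfs/support
chown swoo /nfs/support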
To grant access to an existing NFS export, go to the File Systems page and select
the NFS Exports tab:
1. Check the NFS export, and select Host Access from the More Actions menu.
The Host Access pane is launched.
2. Define the minimum access security: sys, Kerberos, Kerberos with Integrity,
and Kerberos with Encryption.
3. Define the access level for the NAS client (hosts) not in the export access list:
No access, read/write, read-only, read/write, allow root, and read-only,
allow root.
4. Click ADD HOST to include NAS clients in the export access list.
5. Set the ESXi host access level as read/write, allow root.
6. Enter the hostname or IP address of the NAS clients.
7. Click SAVE to commit the changes and APPLY to save the configuration.
To create an NFS datastore from the provisioned PowerStore file system in the
vSphere environment, open a session to the vCenter server managing the ESXi
host. Launch the vSphere Web Client from a supported web browser using the
vCenter Server URL.
From the vSphere Web Client session, select the Datacenter under the Storage
section, and perform the following actions:
1. Expand the ACTIONS menu and select New Datastore... from the Storage
section. The New Datastore wizard launches.
2. On the New Datastore wizard, select the NFS datastore type. The NEXT button
advances to the next wizard step.
3. On the second step of the wizard, select the NFS version that is supported on
the NAS server.
4. On the third step, enter the following information:
FLR States
State – Description
Not Locked – All files start as not locked. A not locked file is an unprotected file
that is treated as a regular file in a file system. In an FLR file system, the state of
an unprotected file can change to Locked or remain as not locked.
Locked – Locked files are also known as WORM. They cannot be modified,
extended, or deleted. The file remains locked until its retention period expires. Files
can be locked manually or automatically by the system or FLR Toolkit. A locked
file can have its retention period extended, but not shortened.
Expired – An expired file is a file that was previously locked, but the retention date
has passed. An expired file can only be relocked or deleted from the file system; it
cannot be changed to append-only (unless it is empty). Data in expired files cannot
be modified.
Storage Resources
A protection policy may be selected at the time the storage resource is created, or
associated with an existing storage resource later. Associating a protection policy
with a supported storage resource is not a requirement.
Only one protection policy may be applied to each supported storage resource:
Standalone volume or a volume in a volume group (if the volume group has
no protection policy associated).
Diagram: a protection policy can be associated with a volume group, a volume, or a virtual machine (vVols).
Protection policies can be applied to thin clones of volumes, volume groups, file
systems, and snapshots.
A protection policy can be substituted with another configured policy at any time.
Substitute the protection policy with one that has different snapshot rules.
If the associated policy has no replication rule, you can associate the resource
with one that has.
If swapping a policy with one that also has a replication rule, ensure both
policies use the same remote system. This restriction avoids an unnecessary
initial, full sync operation.
Volumes
1. From the Volumes page, select the volumes to protect. Select any individual
volume or volume group members that do not have a policy that is associated at
the group level.
2. Open the PROTECT menu and select the Assign Protection Policy option.
3. From the list of existing policies, select the one that you want to associate with
the storage resource.
4. Click Apply to commit the changes.
In the example, the Policy1 policy is assigned to the volumes: Vol01 and Vol05.
One of the volumes (Vol01) is a member of the VolumeGroup-1 volume group.
The policy can be associated with this volume because the volume group has no
policy that is associated with it.
Volume Groups
To protect a volume group using the PowerStore user interface, expand the
Storage submenu, and select Volume Groups.
1. From the Volume Groups page, select the volume groups to protect.
2. Open the PROTECT menu and select the Assign Protection Policy option.
3. From the list of existing policies, select the one that you want to associate with
the volume group.
4. Click Apply to commit the changes. The policy is applied to all the member
volumes of the volume group.
In the example, the Critical Applications policy is associated with volume group C2-
VG01. Two volumes that are members of this group are associated with the
protection policy.
File Systems
To protect a file system using the PowerStore user interface, expand the Storage
submenu, and select File Systems.
1. From the File Systems page, select the file system to protect.
2. Open the PROTECTION menu and select the Assign Protection Policy
option.
3. From the list of existing policies, select the one that you want to associate with
the file system.
4. Click Apply to commit the changes. The policy is applied to the file system.
For policies that include a replication rule, only the snapshot schedule is used.
Replication is not supported for the file systems.
Virtual Machines
To protect a virtual machine using the PowerStore user interface, expand the
Compute submenu, and select Virtual Machines.
1. From the Virtual Machines page, select the virtual machine to protect.
2. Open the PROTECTION menu and select the Assign Protection Policy
option.
3. From the list of existing policies, select the one that you want to associate with
the virtual machine.
4. Click Apply to commit the changes. The policy protects the virtual machine, and
the underlying vVols.
For policies that include a replication rule, only the snapshot schedule is used.
Replication is not supported for virtual machines.
CEPA
A mechanism in which applications can register to receive event notification and
context from PowerStore systems. CEPA runs on Windows or Linux. CEPA
delivers to the application both event notification and associated context in one
message.
Current Node
The Current Node indicates the node that is assigned to run the NAS server. In a
stable state, the parameter indicates the node on which the NAS server is running.
Docker Container
A container executes functionality using kernel resource isolation features. Multiple
independent container applications can execute under a single operating system
instance.
Node Fencing
Preferred Node
The Preferred Node indicates the node on which the NAS server should run.
Changing this parameter does not affect the Current Node assignment.
Primary Node
The node assigned to run the NAS server. In a stable state, it is the cluster node
on which the NAS server is running.
Profile DN
Profile DN specifies the entry with the configuration profile for the iPlanet or
OpenLDAP server.
Secure NFS
Enables secure data transmission by leveraging Kerberos instead of individual
clients for authentication. It can be used with NFSv3 or NFSv4 (preferred).
UMASK
The UMASK is a bitmask that controls the default UNIX permissions for newly
created files and folders. This bitmask determines which permissions bits are
excluded upon creation.
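For example, with the default UMASK of 022 noted earlier, a new file requested with mode 666 (rw-rw-rw-) is created as 644 (rw-r--r--), and a new directory requested with mode 777 is created as 755 (rwxr-xr-x), because the bits that are set in the UMASK (write for group and others) are removed.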