ECS Administration - Lab Guide
LAB GUIDE
Version 1 - September 2021
PARTICIPANT GUIDE
[email protected]
[email protected]
Dell Confidential and Proprietary
Copyright © 2021 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC and other
trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be
trademarks of their respective owners.
Lab Topology
Review your Lab Environment.
5. Three ECS sites, with one node each. Each node is a VM running ECS 3.5
software (ECS Community Edition, single node). Real-world ECS installations
require a minimum of four nodes; this one-node install is for demonstration
purposes only. Note that although this is a virtual environment, all lab
exercises behave as they would on a real-world ECS 3.5 installation.
Note: Chrome is the preferred browser and delivers the best experience. If
you do not have Chrome, you can use the browser of your choice.
2. Log in to the VLP using the credentials that your instructor provided.
Username:_______________
Password:________________
3. Click Enrollments in the top left corner of your browser.
If you need help, click Help > Tips to review the EduLab Orientation Video,
or raise your hand; the instructor will get a notification.
Scenario:
Review the lab guide for this class and establish a connection to your management
station.
My ECS pod
number:_______________________________________________
2. At this point the VLP brings you to the Windows management station login
screen. Click the CTRL-ALT-DEL button at the top of the screen to get your
login prompt. Once logged in, you have convenient access to all needed tools
and every other host in your pod.
You can connect back into the same session at any time using the following
credentials:
Login: DELL\Administrator
Password: P@ssw0rd!
3. Open the Google Chrome browser on your management station (Jump Server) and
either type the IP address of the Site 1 ECS node into the address bar
(192.168.1.5) or click the ECS Site 1 Luna link.
4. If there is a security certificate error, click Advanced and then click Proceed
(unsafe).
You can change the browser zoom in Chrome to 75% to 80%. This allows you
to see the entire ECS Portal application in the browser.
NOTE: When you log in to the ECS portal for the first time, the GETTING
STARTED checklist appears. Since you will configure the system following
the lab guide, click: GO TO ECS
6. Once authenticated, take a moment, expand and explore the following options:
Dashboard, Monitor, Manage, and Settings. These options are located on
the left side of the ECS Portal screen.
DO NOT change your password. The instructor will not be able to change it
back and cannot help you.
7. You can use the ECS Portal to change your password, set password rules,
manage user sessions, and set user agreement text.
b. Explore each tab setting for changes to Password Rules, Sessions and User
Agreement. If changes are made to this section, the user must log out and
log back in for those changes to take effect. Do not make any changes!
8. You will log in to the ECS Portals at all the different sites (ECS Site 1
Luna, ECS Site 2 Phobos, and ECS Site 3 Deimos) and modify the session
timeouts.
1. Go to Settings > Security > Sessions.
4. Note: Make sure you have made these setting changes on all three ECS
sites.
Scenario:
Using the ECS Portal, configure the core storage infrastructure elements for your
system: Storage Pool(s), VDC(s) and Replication Group(s)
1. If not already logged in, bring up the Chrome browser and provide the IP
address (192.168.1.5) or click on the ECS Site 1 Luna link. This will open the
ECS Portal login screen. Provide the authentication information below to log
into the ECS Portal:
a. Name: luna_sp1
b. From the Available Nodes field, select the host luna (a minimum of 1 node
is required) and click the arrow to add nodes to the Selected Nodes area.
Notice the host name of your ECS node. Each node has a unique default
name, and each rack has a unique color. These values make up the name
Note: The creation of the storage pool is a time-sensitive step. You must
allow a minimum of 15 minutes for it to complete. The storage pool will
show Not Ready as its status; do not proceed to the next lab exercise
until at least 15 minutes have elapsed since the Save button was clicked.
When you select the storage pool and see the status 'Partially Ready' with
node 1 'ready to use', you may continue.
4. Create the Storage Pools at the other sites (ECS Site 2 Phobos and ECS
Site 3 Deimos). Open new Chrome browser windows and click the website
links for the other ECS sites. Log in to the ECS Portal at each site and
use the information below to create the other Storage Pools:
1. Log back into the first site’s ECS portal, ECS Site 1 Luna (192.168.1.5)
a. Before creating the VDC, an Access Key must be generated. Click GET
VDC ACCESS KEY.
3. When the access key is generated, highlight it and copy it with
<Ctrl>+<C>; it will be required in the next step. Open a new Notepad++
session on the Windows host, paste the Access Key with <Ctrl>+<V>, then
save this file to the desktop. You will be adding information to it
during these lab exercises.
4. Go to the Virtual Data Center Management page (Manage > Virtual Data
Center) and click NEW VIRTUAL DATA CENTER.
5. On the New Virtual Data Center page, enter the following information to
successfully create a VDC within your assigned ECS pod:
a. Name: vdc1_luna
e. When the information is entered, click Save to create the VDC. Status will
indicate online.
Password: P@ssw0rd!
You should change the browser zoom in Chrome to 75% to 80%. This allows
you to see the entire ECS Portal application in the browser.
When the key is generated, copy it into the Notepad++ file on the Windows host.
3. Once you have copied the Site 2 key to Notepad++ on your Windows host,
log out of ECS Site 2 Phobos (192.168.1.6) now!
4. Log back in to ECS Site 1 Luna (192.168.1.5) (in case you logged out).
MAKE SURE YOU RETURN TO: ECS Site 1 Luna (192.168.1.5) NOW!!
6. On the New Virtual Data Center page, enter the following information to create
a VDC within your assigned ECS Appliance:
a. Name: vdc2_phobos
b. Key: <Paste the Access Key generated for ECS Site 2 from step 2>
e. When the information is entered, click Save to create the VDC for site 2.
Status will indicate online.
7. The VDC Federation is successfully created, which is shown by two VDCs
with two different endpoints.
8. You will now create another federated VDC for the third site, ECS Site 3
Deimos.
MAKE SURE YOU RETURN TO: ECS Site 1 Luna (192.168.1.5) NOW!!
d. Go to the Virtual Data Center Management page by navigating to Manage >
Virtual Data Center.
f. On the New Virtual Data Center page, enter the following information to
create a VDC within your assigned ECS Appliance:
Name: vdc3_deimos
Key: <Paste the Access Key generated for ECS Site 3 from step 8b>
Replication Endpoints: Enter the IP address of ECS Site 3 Deimos
192.168.1.7
Management Endpoints: Enter the IP address of ECS Site 3 Deimos
192.168.1.7
g. When the information is entered, click Save to create the VDC for site 3.
Status will indicate online.
1. If not already logged in, log in to the ECS Portal at the ECS Site 3
Deimos (192.168.1.7) location.
a. Name: rg_local_deimos
b. Leave default settings for Replicate to All Sites ‘Off’ and Geo Replication
type ‘Active’. All buckets in the Replication Group will be local only.
c. Click ADD VDC, the VDC and Storage Pool created in the previous lab will
appear in their respective drop-down (vdc3_deimos and deimos_sp3).
4. Once the local replication group has been created, its status will show
Online. Contact your instructor if it is not. Click the down arrow to the
left of the Replication Group Name.
a. Name: rg_global_luna_phobos_deimos
d. Click the ADD VDC button and add the following VDCs:
Scenario:
Using readily available data clients, test basic I/O access by performing "CRUD"
(Create, Read, Update and Delete) operations on ECS data repositories
(commonly referred to as "buckets")
Create an object user then, generate and retrieve the S3 Access Key for that user.
Create a bucket and assign the object user as the bucket owner.
1. If not already logged in, use the Chrome browser to log in to the ECS
Site 1 Luna portal at 192.168.1.5 using the credentials below.
2. Navigate to Manage > Users > Management Users and on the User
Management page, click NEW MANAGEMENT USER.
You will see the two default management users that are created during the
initial deployment of the ECS Appliances.
root: System and Security Administrator. This user performs the initial
configuration of the ECS system.
b. Name: ns1_admin
c. Password: P@ssw0rd!
e. System Administrator: No
f. System Monitor: No
g. Click Save
h. The following Warning will appear, indicating that the management user you
are creating will not be a valid login unless it is mapped to a Namespace.
Click OK to proceed.
i. The new management user has been created. You will use this new
management user in the next steps when you create a namespace.
a. Name: ns1
1. Note: A namespace can have more than one admin user. If there are
multiple admin users, enter comma-separated user names in the User
Admin field. In this lab, you will use the new management user created in
the previous steps.
2. Note: The Namespace Root User is used with the S3 Identity and Access
Management (S3 IAM) feature.
c. Domain Group Admin: Leave Blank
f. Click Save.
6. Now, you need to create an object user who can own a bucket and perform
read and write operations on it via an external application. ECS object
users can access ECS object storage for CRUD operations (Create, Read,
Update and Delete).
a. Create a new object user for the namespace that you created in the previous
step. You will then use the object user to perform I/O operations through the
bucket that you will be creating in a later lab step.
b. Navigate to Manage > Users > Object Users. Click NEW OBJECT USER.
a. Name: user1
8. This step allows you to update the new object user and add passwords for it.
The Object Access section has options to generate passwords for the various
clients (S3, Swift and CAS) that are supported for ECS object store access.
a. Click GENERATE & ADD SECRET KEY in the S3/Atmos section, then select
Show Secret Key.
b. Highlight the key, press <Ctrl>+<A> then <Ctrl>+<C> to copy it, and
paste it into the Notepad++ file on your desktop. You will need this key
later to create an S3 account and access the ECS object store using the
S3 Browser application.
e. Now that you have an object user created and the secret key password, you
will need to create a bucket with this object user as the bucket owner.
Click NEW BUCKET. (Notice that the namespace ns1 is already selected.)
10. When creating a new bucket, there are three categories of information to
complete: Basic, Required and Optional.
Enter the ‘Basic’ information for the new bucket with the following
information:
a. Name: bucket1
b. Namespace: ns1
c. Replication Group: rg_global_luna_phobos_deimos
d. Bucket Owner: user1 (the object username you created in a previous step)
- The bucket owner will have the ability to modify bucket ACLs and thus
provide/remove bucket access to other object users in the namespace.
e. Choose Next.
11. Below are the ‘Required’ bucket configuration options. For now, leave all of
these at their default values. You will experiment with some of these options in
a later lab.
File System: Enable/Disable file system access on the bucket using HDFS
or NFS export
Metadata Search: Indexes created for the bucket on specific key values
Click Next.
12. Below are the ‘Optional’ bucket configuration options. Leave all of these at
their default values as you will experiment with some of these options in a later
set of labs.
Bucket Tagging: Key-value pairs associated with the bucket, so objects can
be categorized
Click Save.
13. Upon successful creation of a bucket, you will see the bucket listed in the
Bucket Management page as shown below.
Note: You can filter and view the buckets in a particular namespace by
selecting the namespace from the Namespace drop-down.
You cannot modify the bucket name, replication group and namespace
attributes of a bucket.
The Edit bucket option, under the Actions list, will allow you to change other
bucket properties like bucket owner, quota, ACLs, etc. which you will explore
in subsequent lab exercises.
a. Name: bucket2
b. Namespace: ns1
e. Click Next
Set Metadata Search to On.
NOTE: Metadata Search key/value pairs can ONLY be added at the time the
bucket is created and cannot be added to or modified after the bucket is
created.
image-width (Integer)
image-height (Integer)
image-viewcount (Integer)
gps-latitude (Decimal)
gps-longitude (Decimal)
d. Click ADD.
e. Enter the remaining metadata search attributes listed in step 4, then click
Next.
Click Save.
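The metadata search keys defined above are indexed from user metadata supplied at upload time. As a hedged sketch only (assuming the common S3 convention of sending user metadata as x-amz-meta-* headers; the helper and its behavior are illustrative, not taken from the lab):

```python
# Sketch: user metadata values that would feed the metadata search keys
# defined on the bucket. The "x-amz-meta-" prefix is the common S3 user
# metadata convention -- an assumption, not a lab-provided detail.
search_keys = {
    "image-width": "Integer",
    "image-height": "Integer",
    "image-viewcount": "Integer",
    "gps-latitude": "Decimal",
    "gps-longitude": "Decimal",
}

def metadata_headers(values):
    """Build upload headers for values to be indexed by metadata search."""
    headers = {}
    for key, value in values.items():
        if key not in search_keys:
            raise KeyError(f"{key} is not a defined search key")
        headers[f"x-amz-meta-{key}"] = str(value)
    return headers

print(metadata_headers({"image-width": 1920, "image-height": 1080}))
```

Remember the note above: because the keys cannot be changed after bucket creation, any value you plan to query on must be in this list from the start.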
Verify that you have created an object user and provisioned 2 buckets. You
will now use the object user to ingest and access data.
7. To verify that the VDCs are federated and replication has been set up
between the three site locations, perform the following:
a. In the Chrome Browser select the ECS Site 2 Phobos and login to the ECS
Portal with credentials: root / P@ssw0rd!
b. Navigate to Manage > Virtual Data Center and verify that you can see the
VDCs vdc1_luna, vdc2_phobos, and vdc3_deimos.
c. Navigate to Manage > Replication Group and verify that you can see the
replication group rg_global_luna_phobos_deimos.
d. Navigate to Manage > Namespace and verify that you can see the namespace
ns1.
e. Navigate to Manage > Users > Object Users and verify that you can see the
S3 object user user1.
f. Navigate to Manage > Buckets, select the ns1 namespace, and verify that
you can see the buckets bucket1 and bucket2.
g. Perform steps b-f on ECS Site 3 Deimos by logging in to the ECS
Portal with credentials: root / P@ssw0rd!
3. Once you add the new account, the S3 Browser shows the two buckets,
bucket1 and bucket2, that were created in the previous lab.
You will see that information in the left pane as shown below.
This is because the object user was set as the bucket owner when the bucket
was created.
Other object users in the same namespace cannot view this bucket until the
bucket owner modifies the ACL to allow a new object user to view or operate
on the bucket.
4. If you click the Permissions tab in the bottom pane, you will see that the
object user has Full Control permission set on both buckets, since the bucket
owner by default, would have full access over the bucket.
You will experiment with the bucket permissions also known as ACL (Access
Control List) for different object users later in this lab.
5. Now select bucket1 to upload some files. Click the Upload button and then
choose Upload file(s) to upload to the bucket. Use any of the files in the
C:\Lab Software\Test Files folder for testing uploads and downloads.
6. Now, download some files using the Download button. You can also delete
files using the Delete button.
1. Open Chrome browser then navigate to your primary ECS Site 1 Luna
(192.168.1.5).
Click Manage > Users > Object Users, and then click NEW OBJECT USER.
2. Now create an object user named swiftuser1 for connecting to ECS using
the Swift protocol.
c. Click SET GROUPS AND PASSWORD. You will see a message at the top
indicating success.
a. Name: swiftuser2
f. Click SET GROUPS AND PASSWORD. You will see a message at the top
indicating success.
b. Nickname: swiftuser1
f. Close the dialog box with the X in the upper right corner when done and
settings will be saved.
Click Create.
The container (Viewable in the ECS Portal) will be created and available for
file upload, download, and delete. It will appear in the ECS Portal as a bucket.
Be sure to select the Namespace in which the bucket was created and verify in
your ECS Portal that the new bucket was created.
12. Using Windows Explorer, navigate to the C:\Lab Software folder, open the
Test Files folder, then drag and drop Test.txt onto container1 in Cyberduck.
If prompted about an invalid certificate, click Continue. This will copy the
file to the container as shown below.
13. Using the Cyberduck application menu bar select Bookmark then select New
Bookmark.
In the New Connection dialog box, enter the following information shown
below.
b. Nickname: Swiftuser2
f. Close the dialog box with the X in the upper right corner when done and
settings will be saved.
This is because any ECS Swift user, by default, is added to the admin group.
The admin group has full permissions on all Swift containers. See the
appendix at the end of the lab guide for curl commands you can execute to
address this behavior.
Put and Get Centera C-Clips from ECS using CAS Tools
In this lab, you will perform the following activities:
1. In the ECS Portal select Manage > Buckets and click NEW BUCKET.
b. Namespace: ns1
e. Click Next.
Click Next.
4. From the ECS Portal select Manage > Users to create a new object user
b. Name: casuser
5. Set the CAS password information (perform these steps in the order shown):
3. From the Default Bucket drop-down choose the casbucket you created in
step 1 of this lab exercise.
6. Copy the content of the generated PEA File to the clipboard (select the
text and press <CTRL>+<C>).
7. Open a new file in Notepad++, paste the generated PEA File
(<CTRL>+<V>), and save the contents in a file named pea.p on your Desktop.
8. Click Close.
b. Once selected, open the corresponding Actions drop-down list and choose
Edit ACL for casbucket.
11. Fill in the User Name field with the CAS object user name you created in step
4 of this lab exercise.
Be sure casuser has Full Control checked on the bucket and click Save.
12. Using Windows Explorer, navigate to C:\ and locate the
JCASScript-win32-3.2.35 folder.
a. Right-click the Windows menu icon and select Run. Type cmd and
press OK.
b. Right-click the upper left corner of the window and select Properties.
c. On the Options tab, under Edit Options, ensure the Quick Edit Mode box is
checked to allow copy and paste.
Run the command java -jar JCASScript.jar to start the program. You will be
at the CASScript prompt.
a. poolOpen 192.168.1.5?pea.p
Note: The command shown uses the relative path to the PEA file. The
absolute path can alternatively be specified using the following command:
b. CASScript> poolOpen <ip_of_ECS node>?C:\JCASScript-win32-3.2.35\pea.p
16. Copy a small file from C:\Lab Software\Test Files to the
C:\JCASScript-win32-3.2.35 directory.
17. Transfer the file and save it on ECS as a clip in the CAS bucket.
a. Using your mouse, highlight and copy the new clip ID returned by the
"fileToClip" command from the previous step.
This saves the clip to a file named "savedclip.txt" in your local
C:\JCASScript-win32-3.2.35 directory. Compare the two clips, Test.txt and
savedclip.txt.
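A quick way to compare Test.txt and savedclip.txt beyond eyeballing them is a byte-for-byte check. A minimal sketch; the file names are the lab's, but the helper works on any two local files:

```python
import hashlib

def same_content(path_a, path_b):
    """Return True if the two files are byte-for-byte identical,
    by comparing SHA-256 digests computed in chunks."""
    def digest(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.digest()
    return digest(path_a) == digest(path_b)

# Example: compare the original file with the clip read back from ECS.
# same_content("Test.txt", "savedclip.txt")
```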
22. To delete the clip from a CAS bucket run the command: clipDel
<ContentAddress>
Scenario:
Experiment with ECS features for access control (ACLs), quotas and retention for
object data
Create a second, new object user in the existing namespace you created in the
previous lab.
Modify the bucket ACL to provide access to the new object user.
Using the S3 Browser, verify that the ACL defined is regulating read/write access
as you expected.
1. Login to the Primary ECS Site 1 Luna Portal at 192.168.1.5 using the
following credentials:
2. From the ECS Portal, create a new object user as described below.
a. Navigate to Manage > Users. Click on Object Users, then click NEW
OBJECT USER.
c. Namespace: ns1
f. Select: Show Secret Key box. <Ctrl>+<A> to select and <Ctrl>+<C> to copy
the key to Notepad++.
a. Open S3 Browser
4. After completing the previous step, you will now be using the new S3 account
created for the user2 user.
5. In the Bucket Explorer pane, S3 Browser automatically lists only the
bucket(s) owned by the user2 user. To view other buckets that the same
user has access to (via ACLs), you must use Add External Bucket under
the Buckets menu of the S3 Browser.
From the S3 Browser, navigate to Buckets > Add External Bucket option.
6. Enter the name of the bucket you created in the previous lab (bucket1) and
click Add External bucket.
8. Now, select the bucket to view the contents. You will get the below popup
message.
Click Yes.
This is because user2 does not have read access privilege on the bucket.
Click OK.
10. Now go check what the bucket ACL looks like in the ECS Portal.
11. Choose the Edit ACL option from the Actions drop-down of bucket1.
• User ACLs - enable an admin user to provide read and write privileges on a
bucket for an object user.
• Group ACLs - let you set permissions for a set of pre-defined groups.
• Custom Group ACLs - custom groups are named user groups that can be
granted access to a bucket.
You will first test User ACLs and then move on to Group ACLs.
As below, you can see that the User ACL, by default has an entry for the
bucket owner with Full Control permission.
13. You want the user2 user to read bucket contents, so you will add a new rule
for this user.
You can see a list of available permissions. Unselect all the permissions
except Read. You will assign only the read privilege to the user.
Click Save.
14. On successful creation of the rule, you can see that the object user was added
to the User ACL list as seen below:
15. Now, go back to the S3 Browser where user2 is logged in and click Refresh.
You can see the files that you uploaded to bucket1 as user1 user from the
previous lab.
16. You can also verify, through the S3 Browser, that user2 has read access to
bucket1. Change the account to user1, and select the Permissions tab.
17. Change the Account user back to user2. Now try performing an Upload
operation.
No, because user2 does not have write permission on the bucket. You can
view the "Access Denied" error in the Tasks pane at the bottom of S3
Browser as shown below:
Experiment with various ACL permissions and test how they affect operations
you can perform from the S3 Browser.
19. You tested how you could use ACLs to give permission to a user for bucket
access.
Now you will see how Group ACLs can be used to provide permissions on a
large set of pre-defined user groups.
Below are the groups available in Group ACLs.
20. You will first try the All users Group ACL. For this, you need to create a new
object user in ECS Portal. From the ECS Portal, create a new object user as
described below.
c. Username: user3
d. Namespace: ns1
f. Select GENERATE & ADD SECRET KEY for the S3 client. Choose Show
Secret Key.
21. Now, add a new Group ACL rule to allow all users to perform read operation.
In the ECS Portal, navigate to Manage > Buckets.
22. Select your namespace (ns1) from the Namespace dropdown list.
Select Edit ACL from the Actions drop-down for the bucket1 bucket.
You can see that the Group ACL does not have any rules. Click Add.
Unselect all permissions except the Read permission and click Save.
This rule will provide read permission on the bucket to all authenticated users.
25. Now that you have read permission set on bucket for all authenticated users
in the same namespace, try to read this bucket as user3 using S3 Browser.
Note: S3 Browser free edition will allow a maximum of two accounts. So, you
will get a warning when you try to add a new account for user3.
26. Add a new account for user3. Fill in the fields with the following
information shown.
28. Add external bucket to get the bucket1 listed on the bucket explorer pane.
Select bucket1 to see that user3 is able to read the bucket. Note that there
is no ACL that specifically grants access to this particular user; the all
users Group ACL enabled the user to read the bucket.
29. You can also experiment with the public Group ACL. Adding permission to
this group enables even anonymous (unauthenticated) users to access the
bucket. S3 Browser will not allow you to create an account without
credentials, so you will use the curl command-line utility to test public
access.
30. Connect to your ECS Site 1 Luna node using PuTTY to:
IP address: 192.168.1.5
Login: admin
Password: ChangeMe
31. Issue the curl command below, which is an anonymous request to read the
bucket1 bucket.
As you see below, you will get the Access Denied error. This is expected,
since the bucket ACL does not permit anonymous user access.
NOTE: If you want the xml output to be in a readable format, you can pipe the
curl command output through xmllint --format -
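The xmllint --format - filter simply re-indents the raw XML response. For reference, the same effect can be sketched in Python's standard library:

```python
from xml.dom import minidom

def pretty_xml(raw):
    """Re-indent a compact XML string, similar to `xmllint --format -`."""
    return minidom.parseString(raw).toprettyxml(indent="  ")

# Example: the kind of compact error body an S3 endpoint returns
# for a denied anonymous request (element names illustrative).
sample = "<Error><Code>AccessDenied</Code></Error>"
print(pretty_xml(sample))
```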
32. Next in the ECS Portal, create a Group ACL which gives read permission to
the public group. This will allow both authenticated and anonymous users to
perform read access on the bucket.
b. Select Edit ACL from the Actions drop-down for the bucket1 bucket. Select
the Group ACLs tab.
c. Click Add
Upon successful creation, the Group ACL of the bucket will appear as shown.
33. Now from the PuTTY session, re-run the curl command:
NOTE: If you want the xml output to be in a readable format, you can pipe the
curl command output through xmllint --format -
34. If not already logged in, log in to the ECS Portal on ECS Site 1 Luna
(192.168.1.5) with the credentials: root / P@ssw0rd!
35. Navigate to Manage > Buckets and select the ns1 namespace from the
dropdown.
36. Add a new bucket called bucket6 owned by object user1 on replication group
rg_global_luna_phobos_deimos.
37. Click the arrow next to the Edit Bucket for bucket6 and select Edit Policy.
38. The Bucket Policy Management view is displayed. This view allows you to
create or edit bucket polices. There are different editing modes you can select.
For this lab we will use the default edit mode, Format JSON data, with
proper indentation and line feeds.
39. You will now create a bucket policy on bucket6 that allows object user
user2 to write and read objects in bucket6 from IP address 192.168.1.5.
Recall that bucket6 is owned by user1.
Enter the following JSON code, exactly as shown, into the Bucket Policy
Editor:
Note: In the C:/Lab Software directory on the jump server there is a text file
called bucketpolicy.txt that contains this JSON code.
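The exact JSON is supplied in bucketpolicy.txt and is not reproduced here. As a hedged sketch only, a policy of this general shape, following the AWS S3 policy grammar that ECS bucket policies use, might look like the following; every name and value below is an assumption drawn from the lab text, not the contents of bucketpolicy.txt:

```python
import json

# Hedged sketch only -- NOT the lab's bucketpolicy.txt. A policy of this
# general shape grants user2 read/write on bucket6, restricted to requests
# arriving from a given source IP.
policy = {
    "Version": "2012-10-17",
    "Id": "LabPolicySketch",
    "Statement": [{
        "Sid": "AllowUser2ReadWrite",
        "Effect": "Allow",
        "Principal": {"AWS": ["user2"]},
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": ["bucket6/*"],
        "Condition": {"IpAddress": {"aws:SourceIp": "192.168.1.5"}},
    }],
}
print(json.dumps(policy, indent=2))
```

Whatever the exact file contains, the editor's "Format JSON data" mode expects this kind of indented, well-formed JSON.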
40. Start up the S3 Browser. Navigate to Accounts > Manage accounts and
delete the user3 account. Click Save changes.
41. Navigate to Accounts > Add new accounts and add a new S3 account
user2:
42. You will now be using the new S3 account created for the user2 user.
43. To view bucket6 you must use the Add External Bucket under the Buckets
menu of the S3 Browser.
a. From the S3 Browser, navigate to Buckets > Add External Bucket option.
44. Enter the name of the bucket you created in the previous step (bucket6)
and click Add External bucket.
1. You will first experiment with retention period option on buckets. Login to the
ECS Site 1 Luna (192.168.1.5) Portal using the below credentials:
In the Bucket Management page, select your namespace ns1 from the drop-
down.
3. In the Edit Bucket page, select Next, then select Next again. You will see
the Bucket Retention Period section.
The retention period is set at the bucket or object level. It prevents
objects from being modified or deleted until the retention period elapses,
after which they can be modified or deleted.
The bucket retention period can be set in units ranging from seconds to
years.
There is also an Infinite option which, when checked, prevents any
modification or deletion of the object indefinitely.
Click Save.
The delete operation failed because the age of the object/file, measured
from its creation time, has not yet exceeded the 1-month retention period
that you set on the bucket. You can see the error message by clicking on the
Failed task in the Tasks pane at the bottom of the S3 Browser.
As you see the status message states that the object cannot be deleted
because it is subject to retention.
6. Modify the retention period of the bucket to a smaller duration (less
than the current age of your test object, based on its creation time).
Try again to delete the object in the bucket. You will see that the delete
operation succeeds without any problem.
Retention policies are configured at the Namespace level. Multiple retention
policies can be defined for a given Namespace.
Policies can be applied to objects using S3 curl commands.
Navigate to Manage > Namespace then click Edit on your ns1 Namespace.
8. In the Retention Policies section enter the following values for the new
retention policy:
Name: retention10min
Value: 10 minutes
Click ADD
Name: retention20min
Value: 20 minutes
Click ADD
Click Save.
You will use these two retention polices, retention10min and retention20min,
on two different objects in the bucket1 and test how retention works.
9. The ECS Portal does not offer the ability to set a retention policy on
objects. You will need to use the s3curl utility to set this option.
s3curl is the Amazon S3 authentication tool for curl. Since ECS uses custom
headers prefixed with the x-emc string, the s3curl script must be modified
to include x-emc in the header canonicalization.
You can find the pre-modified s3curl.pl file at the C:\Lab Software\s3curl
path on your management station. You can find more information and details
on the modifications to the s3curl.pl file at
https://www.dell.com/support/home/. You must have an account and sign in to
view documentation.
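Why the x-emc modification matters: S3 V2 request signing folds all canonicalized extension headers into the string-to-sign, so ECS's x-emc-* headers must be included alongside x-amz-* or the server-side signature check fails. A minimal sketch of that signing step, assuming V2-style authentication (the scheme s3curl implements); the secret, date, and header values are placeholders:

```python
import base64
import hashlib
import hmac

def sign_v2(secret, verb, resource, date, headers):
    """Compute an S3 V2-style request signature. The point of the
    s3curl.pl modification: x-emc-* extension headers must be
    canonicalized along with x-amz-* before signing."""
    # Lowercase, sort, and join the extension headers (sketch: assumes
    # no multi-value or folded headers).
    ext = sorted(
        (k.lower(), v) for k, v in headers.items()
        if k.lower().startswith(("x-amz-", "x-emc-"))
    )
    canonical = "".join(f"{k}:{v}\n" for k, v in ext)
    # V2 string-to-sign: VERB, Content-MD5, Content-Type, Date,
    # canonicalized extension headers, canonicalized resource.
    string_to_sign = f"{verb}\n\n\n{date}\n{canonical}{resource}"
    digest = hmac.new(secret.encode(), string_to_sign.encode(), hashlib.sha1)
    return base64.b64encode(digest.digest()).decode()

# Example: the retention-policy header participates in the signature,
# so omitting it from canonicalization would break authentication.
sig = sign_v2("placeholder-secret", "PUT", "/bucket2/file.txt",
              "Tue, 27 Mar 2007 19:36:42 +0000",
              {"x-emc-retention-policy": "retention10min"})
print(sig)
```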
You will copy these files to your primary ECS node using WinSCP.
Open WinSCP from your desktop and log in to ECS Site 1 Luna
(192.168.1.5).
User Name: admin Password: ChangeMe
10. Once logged into WinSCP you will be in the /home/admin directory of the
node. If you see a warning message appear, click Yes to continue.
11. In the left side pane change to the C:\Lab Software\s3curl directory. Select
the 2 files, s3curl.pl and dot_s3curl.txt then drag them over to /home/admin
directory.
Note: The dot_s3curl.txt file in the s3curl directory MUST be renamed to
.s3curl on the ECS node and must reside in the home directory of the
admin user (/home/admin).
You then need to update the my_profile section with your object user's
credentials and update the endpoints with the IP address and hostname of
the ECS node that you are currently logged in to.
2. Edit the .s3curl file that you copied to the /home/admin directory,
make the changes below, then save the .s3curl file.
3. To edit the file contents using vi, place vi into INSERT mode by
pressing the i key. Use the keyboard arrow keys to move the cursor to the
locations that need to be edited. Once you have completed ALL the changes,
take vi out of INSERT mode by pressing the ESC key. To save the file with
your changes, type :wq! and press Enter.
13. Change the permissions on the s3curl files by running the following
commands. Make sure you are in the /home/admin directory:
chmod 600 .s3curl
chmod 755 s3curl.pl
14. In the PuTTY session, run the below command to test if s3curl is functional.
./s3curl.pl
15. Now upload a file to the bucket2 bucket as an object and set a
retention policy on that object.
You will need new files on your ECS node to test the retention policy
feature. Copy a few small files from the C:\Lab Software\Test Files
location on your management station to the ECS node using WinSCP.
16. On the ECS node in PuTTY, run the s3curl command as below:
Then, select the Http Headers tab in the bottom pane like you see below.
You can see that there is a new header x-emc-retention-policy set with the
retention policy as value. You will not find this header for other files that you
uploaded directly from S3 Browser.
18. Click on other files uploaded through S3 Browser and check their headers.
Using a retention policy on objects, instead of hard-coding a retention
period value, provides more manageability: any change to the retention
policy automatically applies to every object configured with that
particular retention policy.
19. Similarly, you can upload other objects and set a different retention
policy on them. Upload another sample file with the retention20min retention
policy using s3curl and check its HTTP headers.
20. Now, try to delete the file before the retention policy expires.
As with the retention period set on the bucket, the retention policy will not let
you delete the object until the object's lifetime exceeds the time period
specified by the retention policy.
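The enforcement rule amounts to a simple age check, sketched below with made-up epoch times (ECS performs the real check server-side):

```shell
# Deletion is denied while the object's age is less than its retention period.
created=1000000000        # object creation time (epoch seconds, hypothetical)
now=$((created + 300))    # 5 minutes later
retention=600             # a 10-minute retention policy, in seconds
age=$((now - created))
if [ "$age" -lt "$retention" ]; then
  echo "delete denied: retention in effect"
else
  echo "delete allowed"
fi
# prints: delete denied: retention in effect
```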
21. You can also set a specific retention time period on objects using S3curl
commands.
Go back to your ECS node session in PuTTY and create a new file for
upload using the command below.
Note: The unit of the retention period in the command above is seconds. So the
command sets an object retention of 10 minutes (600 seconds) on the
retentionperiod.txt file.
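The minutes-to-seconds arithmetic from the note can be sketched as follows; x-emc-retention-period is the ECS S3 extension header used for per-object retention:

```shell
# 10 minutes expressed in seconds, the unit the retention header expects.
minutes=10
seconds=$((minutes * 60))
echo "x-emc-retention-period: ${seconds}"   # prints: x-emc-retention-period: 600
```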
You can see below that the command has executed successfully.
Do you see the new file you uploaded in the previous step? Click on the file to
select it.
23. Repeat the delete file operation with its retention period set.
24. At this point, you understand what retention periods and policies are, and how
they work at the object and bucket level.
Next, experiment with which takes precedence: the retention set at the bucket
level or at the object level. You can do that by trying the scenario below:
Now, try to delete the object after 5 minutes. What happens? Are you able to
delete the object?
Next, you can try the reverse: set the retention period on the bucket to be less
than the retention period of the object. Then try deleting the object and
observe the behavior.
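As the retention description elsewhere in this guide states, when both a bucket-level and an object-level period are set, the longer one is enforced. The expected outcome of the experiment can be sketched with hypothetical values:

```shell
# Hypothetical values: bucket retention 5 minutes, object retention 10 minutes.
bucket=$((5 * 60))
object=$((10 * 60))
# ECS enforces the longer of the two retention periods.
effective=$(( bucket > object ? bucket : object ))
echo "effective retention: ${effective}s"   # prints: effective retention: 600s
```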
1. If not already logged in, log in to the ECS Site 1 Luna portal at 192.168.1.5 using the
credentials below:
2. Navigate to Manage > Buckets. In the Bucket Management page, select your
namespace ns1 from the Namespace drop-down list. Click Edit Bucket on
your casbucket.
3. Select Next then Next again to view the Optional setting page.
4. In the Optional Edit Bucket page, scroll down to the Enforce Retention
section.
The options for advanced retention settings are displayed here. Below is a
detailed description of each option.
Bucket Retention Period: A retention period set at the bucket level. It prevents
objects from being modified or deleted until the retention period, measured from
the original object creation time, elapses. If both a bucket-level and an
object-level retention period are set, the longer period is enforced. In a
Compliance-enabled environment, a Bucket Retention Period is mandatory unless
retention information in the object is enforced.
5. The retention period can be set in units ranging from seconds to years. There
is also an Infinite option which when selected from the drop-down prevents
any modification of the object indefinitely.
Enforce Retention: On
Click Save.
1. Login to the ECS Site 1 Luna portal at 192.168.1.5 using the credentials below:
Name: ns2_admin
Password: P@ssw0rd!
Click Save.
Click OK to warning.
4. After successful creation of the management user, you can see the user listed
in the Management Users page.
5. The next step is to create a new namespace, mapping the management user
created in previous step, as the Namespace Admin. You will also enable hard
quota setting on this namespace.
a. Name: ns2
Notification Only at: Known as a soft quota, this option triggers a notification
when the used capacity reaches the specified limit.
Block Access Only at: Known as a hard quota; when reached, it prevents
write/update access to buckets in the namespace.
Block Access at: A hard quota setting that, when reached, prevents write/update
access to the buckets in the namespace, combined with a lower threshold at
which you are notified.
Note: 1 GiB is the minimum value that can be set for the quota.
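The soft/hard quota behavior described above can be sketched as a pair of threshold checks (all values below are hypothetical; ECS applies these checks server-side):

```shell
gib=$((1024 * 1024 * 1024))
soft=$((1 * gib))           # Notification Only at: 1 GiB (hypothetical)
hard=$((2 * gib))           # Block Access at: 2 GiB (hypothetical)
used=$((gib + gib / 2))     # 1.5 GiB currently written
# Soft quota only raises a notification; hard quota blocks further writes.
[ "$used" -ge "$soft" ] && echo "soft quota reached: notification raised"
if [ "$used" -ge "$hard" ]; then
  echo "hard quota reached: writes blocked"
else
  echo "writes still allowed"
fi
```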
Click Save.
6. Now that you have a namespace created, the next step is to login to the ECS
Portal as the new Namespace Administrator and create buckets in the
namespace.
Logout from the portal and login as Namespace Administrator using the
credentials below:
7. As a Namespace Administrator, you will now create an object user. This object
user will be used to perform read and write operations on the buckets created
in the ns2 namespace.
a. From the ECS Portal select Manage > Users > NEW OBJECT USER
8. You are now going to create a bucket in the namespace with the user4
created in the previous step as the owner. You will also enable quota on this
bucket.
In the Bucket Management page, select your namespace ns2 from the drop-
down.
Click New Bucket and create a bucket with the following details for Basic
Configuration:
Note: Like the namespace quota, a hard quota is set on this bucket to prevent
upload operations when the bucket’s quota limit is reached.
1. Name: bucket4
2. Namespace: ns2
9. Click Save.
9. Now create another bucket in the same namespace ns2. But this bucket will
not have quota enabled.
Use the following details to create new bucket in the Basic section:
a. Name: bucket5
b. Namespace: ns2
d. Bucket Owner: user4 (object user you created earlier in this lab)
e. Click Next then Next again so that you are on the Optional page.
g. Click Save
10. Upon successful creation of bucket5, the Bucket Management page should
look as shown below. You can see that bucket4 has a 1 GiB hard quota
enabled and bucket5 does not have any quota set.
From the menu bar select Accounts then select Add new account
Fill in the fields with the following details then select Add new account.
12. In the S3 Browser’s Bucket Explorer pane on the left, you can see the buckets
bucket4 and bucket5 listed by default. This is because the user4 is the owner
of both the buckets.
Now upload some files into bucket4 from C:\Lab Software\Test Files path in
your management station.
Choose three of the largest mp4 files for the upload operation.
13. You can see below that bucket4 has around 1.38 GB of files.
14. Upload two files to bucket5, with a total size of not more than 1 GiB.
Check the number of files in a bucket and the total object size in it from the
Properties tab in the bottom of S3 Browser.
Select the bucket name and then select the Properties tab to view the
corresponding information.
15. To test the quota option, it is very important to check ECS Metering and
ensure that the number of objects in the buckets (bucket4 and bucket5) listed
on the Metering page matches the actual number of files/objects in each bucket.
To verify the object counts in the ECS Portal, ensure you are logged in as
ns2_admin. Navigate to Monitor, then select Metering.
b. Select the namespace from the list in the left pane using the arrow icon and
then select the bucket4 and bucket5 from the list using the arrow icon.
c. Click Apply.
Scroll down to see the number of objects, objects created, and objects deleted
in the bucket.
16. As shown below, the Object Count should display the actual number of
objects, along with the size of the uploaded objects, in the respective bucket.
IMPORTANT: There can be an update time lag. Before you move on to the
next lab steps, ensure that the object count is correct. This may require you to
apply the defined filter multiple times.
17. Using the user4 account in the S3 Browser, continue uploading files into bucket5.
At some point the upload operation will fail, based on the Block Access at
setting that you have defined.
You can see that the status shows “Failed – Forbidden: Check if quota has
been exceeded” error.
18. Log out of your ECS Portal, then log back in as root. As root user, navigate to
Monitor > Events, then select the Alerts.
Highlighted below are the quota exceeded notifications for the namespace
ns2, as well as for bucket4.
Scenario:
IAM Configuration
19. If not already, login to the ECS Site 1 Luna portal at (192.168.1.5) using the
credentials below:
20. Select Manage > Namespace. Click the Edit button for the ns1 namespace.
21. Look at the Namespace Root User field, it is automatically populated with
root@@ns1. This is the default format.
22. Click the MANAGE button next to the Namespace Root User field.
23. Select On to enable UI access for the Namespace root user for IAM.
Enter the Namespace Root User password and the Confirm Namespace Root
User password:
24. Navigate to the Manage > Identity and Access (S3). On the Identity and
Access Management page, select the ns1 namespace from the dropdown.
a. Name: iamuser
b. Click Next
26. On the Permissions page, you can add the new user to a group and attach
policies. For now, leave the default settings; you will add a group and set up a
policy later. Click Next.
27. Here you can attach tags to add metadata to the new user. Leave this blank.
Click NEXT.
28. Review the new user configuration and click Create User. The new user is
created with an Access key ID and the Access Secret Key.
To save the access information, either copy and paste the Access Key ID and
Access Secret Key to Notepad, or download the .csv file.
Click Download .csv and open the file in Notepad or Notepad++. Here you
can see the Access Key ID and Access Secret Key for the IAM user.
Click Complete.
29. Now you will use S3curl to test the IAM user permissions. Open a PuTTY
session to 192.168.1.5
Click Open.
31. Now you will edit the .s3curl file using the vi command.
vi .s3curl
To edit the file contents using vi you will need to place vi into INSERT mode
by pressing the i key on the keyboard. You use the keyboard arrow keys to
move the cursor around to the desired locations that need to be edited.
Once you have completed ALL the changes, take vi out of INSERT mode by
pressing the ESC key on the keyboard. To save the file with changes, type
:wq!
Now test access to bucket1 as the IAM user iamuser, using the s3curl.pl
command.
The result is an access-denied error because permissions are not yet configured
for the IAM user.
32. You will now add permissions to the iamuser. Logout of the ECS Portal and
login as the Namespace Root User:
Login: root@@ns1
Password: P@ssw0rd!
33. Navigate to Manage > Identity and Access (S3). Select ns1 from
Namespace dropdown, and select the Policies tab.
34. On this tab, you can create a new managed policy or use one of the five
predefined managed policies. You can then attach a policy to a user, group,
or role.
35. To do this, select the Users tab. You will create an inline policy only for the
specific IAM user created earlier.
c. Click Add Inline Policy and enter a name for the policy:
a. Name: iampolicy1
Click NEXT.
a. In the Service field you must select one of three choices. Select S3.
Actions allow you to set the granularity of the user’s permissions.
b. Select List to enable ListBucket and ListAllMyBuckets permissions.
c. Here you can select a specific bucket or all resources. Select All Resources.
d. Request Condition allows you to set a source IP restriction or create a
condition key. We will skip this field.
Click Next
36. In the Review page, verify your choices and click SAVE. A new inline policy is created.
37. Now you will test the access permissions for the new IAM user using the
s3curl command.
The contents of bucket1 are listed. If you attempted to write a new object to
bucket1, it would fail with an access denied error. The IAM user does not have
write permissions to bucket1. Write command example below:
./s3curl.pl --debug --id=my_IAM_profile --put=Test.txt -- https://192.168.1.5:9021/bucket1/Test.txt -k
38. IAM also supports groups and roles. You will now create a group and add the
IAM User to it.
a. Navigate to Manage > Identity and Access (S3). Select the Groups tab,
select ns1 from the Namespace dropdown, and click NEW GROUP.
Select ECS Managed. A list of pre-defined policies is displayed. Select the policy
called ECSS3FullAccess.
NOTE: If a new policy needs to be created, you must go to the Policy tab in
the Identity and Access Management page and create the new Managed
Policy first.
Click Next
Review the new group and the policies that are attached.
Click Save.
The new group has been created. Now you will add a user to the group. Click
the down arrow next to the Edit button and select Add Users.
The new user is added to the group and will follow the policies of that group.
40. Similar to IAM user access keys, the namespace Root Access Key tab
creates access keys for the root user account to access the S3 and IAM
APIs.
These are also long-term credentials, consisting of an access key ID and a
secret access key.
The root user can have at most two access keys associated at any time.
1. Navigate to Manage > Identity and Access (S3). Select the namespace
ns1 from the dropdown. Then click the Root Access Key tab.
To save the root access information, you can either copy and paste the
Access Key ID and Access Secret Key to Notepad or click Download .csv.
Click Close
Scenario:
Using readily available data clients, test basic I/O access by performing "CRUD"
operations on ECS data repositories (commonly referred to as "buckets")
Two user groups named Finance and Sales reside in AD. These groups will
be considered as individual tenants and they will have their own namespace
created in ECS.
Note: This structure is used for a simple proof-of-concept (POC) only. There is
a single Active Directory server, which simulates a realistic representation of an
enterprise customer using ECS, with multiple business units within the
enterprise representing ECS tenants. All business units share a single
Active Directory setup.
2. In this lab, each user group within Active Directory (i.e. each tenant) will have
two types of user: Admin and Object. The Active Directory structure is
preconfigured and made available for you in this lab.
You will use these Active Directory details to add your authentication provider
from the ECS Portal.
All users have the same AD privileges and are members of two AD groups:
Domain Users and a user group named after their tenant.
Shown below are the properties of fadmin and fuser1 for the Finance tenant.
Similarly, the Sales tenant has the sadmin and suser1 users, which are
members of Domain Users and the Sales group.
From the ECS perspective, the Admin users (fadmin & sadmin) will be
considered as management users - specifically, namespace admins. They
will have access to the ECS Portal with limited capabilities - each can manage
their own namespace, e.g. add or remove users in their own namespace.
fuser1, fuser2, suser1 and suser2 are ECS Object users who will have
access only to the ECS object store, to perform CRUD operations.
In your Active Directory environment, all users have been configured with
P@ssw0rd! as their respective password.
3. In the New Authentication Provider page, enter the following values from the
below table (NOTE: There are NO spaces after the commas):
The Group whitelist entries below are the Active Directory groups that will be
allowed to access the ECS storage.
Click Save.
4. From the ECS Portal select Users > Management Users. You will create two
new management users which are [email protected] and [email protected]
Username: [email protected]
System Administrator: No
System Monitor: No
Click Save
You will use this authentication provider in the next lab to create namespaces
with domain configuration.
2. Next, you need to create namespaces for the tenants (Finance and Sales)
with Domain details.
Name: finance_ns
4. Click DOMAIN.
Click Save.
a. Domain: dell.edu
c. Attribute: objectCategory
6. Log out of the ECS Portal. Now login to ECS Portal 192.168.1.5 as the new
Namespace Administrator using these credentials:
Click Dashboard in the navigation pane. Ignore any errors that might appear
at the top of your browser.
Notice that the Namespace Management page has only one namespace
listed, which is owned by [email protected]
When you login as this Namespace Admin, you can only view the namespace
that this Admin account owns.
8. Navigate to other ECS management views like Storage pools, VDC etc. Are
you able to view the details?
You cannot see those details because the Namespace Administrator’s access
is limited to bucket and object user management of a namespace. The user
will not be authorized to view other ECS system administrative attributes.
Name: [email protected]
Namespace: finance_ns
Click GENERATE & ADD SECRET KEY in the S3/Atmos section then select
Show Secret Key. Copy this key to Notepad++ as you will be using it to verify
I/O access.
Select Close
Now, logoff from the ECS portal and login as [email protected] using the AD
password.
You can see that the authentication succeeds against AD/LDAP, but the user
will not be able to view or perform any operation in the ECS Portal because
the user is not authorized.
10. Login to the ECS Portal 192.168.1.5 as root user with P@ssw0rd! as the
password.
Name: sales_ns
Domain: dell.edu
Groups: Sales
Attribute: objectCategory
Domain: dell.edu
Groups: Sales
Attribute: objectCategory
Click Save.
12. Now, log off from the ECS portal and login as the Sales namespace
administrative user using these credentials:
Password: P@ssw0rd!
13. Navigate through different pages and observe what this user can view and the
actions the user is able to perform.
Were you able to see other namespaces and their object users?
1. Now that you have the secret access key and object user created for the
domain user [email protected] from the previous lab, follow the steps below to
perform read/write operations in the S3 Browser.
The trial version of the S3 Browser only allows up to two accounts, so you will
need to delete one account: S3 Browser Accounts > Manage Accounts >
Delete user4.
Then create a new account for [email protected] using the secret access key from
the ECS Portal.
2. Ensure you are logged into the ECS Portal as either root or [email protected]
3. Select Manage > Buckets > NEW BUCKET, and create a new bucket for
[email protected]
Namespace: finance_ns
5. Upload a few files from C:\Lab Software\Test Files path in your management
station to verify I/O access.
6. (OPTIONAL STEP)
Perform the same operation using the Sales tenant group and Sales users.
Then, create a bucket for a Sales user in the S3 Browser.
You can then test the multi-tenancy data isolation by trying to read the buckets
of the other tenant. Follow the instructions in the previous lab, "Test ACLs with
local object users in ECS," to create ACLs and add external buckets.
Scenario:
Browse through the ECS Monitoring Data and Perform Basic Health Checks
From the ECS Portal Dashboard you will see basic system information. You
can hover your mouse cursor over points in the performance graph. Click on a
highlighted category to examine more details.
Expand Monitor.
From the Date Time Range drop-down select Custom. In the From field,
enter yesterday’s date. Similarly, in the To field, enter today’s date. Your
Namespace ns1 along with others will show up in the Namespace listing.
Select Namespace for ns1. This will populate the Select Buckets listing with
the buckets you have previously created.
From the Select Buckets list select all buckets that are part of ns1 namespace by
using the arrow icon.
Click Apply. Once applied, scroll down the screen to view object metrics and
traffic that have occurred during the custom date range selected.
3. Using the Monitor menu, select Events and observe the recent Audit and
Alert activities which have occurred during your lab exercises.
4. From the Monitor menu, select Capacity Utilization to view Capacity, Used
Capacity, Garbage Collection, Erasure Coding, and CAS Processing.
Click the History button to view the Capacity history. You can hover your
mouse cursor over points in the graph to view metrics at a specific time.
Now choose All Nodes and Disks. This will show your node(s) and status.
You can click your Node(s) name to view further details.
NOTE: If your ECS is not configured for Geo Replication the fields will be
blank.
Alert Policies
Alert policies are created to alert about metrics and are triggered when the
specified conditions are met. Alert policies are created per VDC. There are two
types of alert policy:
System alert policies are predefined and exist in ECS during deployment.
All the metrics have an associated system alert policy.
System alert policies cannot be updated or deleted.
System alert policies can be enabled/disabled.
Alerts are sent to the UI and all channels (SNMP, SYSLOG, and Secure Remote
Services).
You can create User-defined alert policies for the required metrics.
Alerts are sent to the UI and customer channels (SNMP and SYSLOG).
For more information on alert messages, consult the latest ECS Monitoring
Guide. You must sign in, or create an account, to access the ECS
documentation: https://www.dell.com/support/home/en-us
7. Alert policies are configured from the ECS Portal. Select Settings > Alerts
Policy.
To create a new User Defined Alert Policy, select NEW ALERT POLICY
2. Use the metric type drop-down menu to select a metric type. Metric Type is
a grouping of statistics. It consists of:
a. Btree Statistics
b. CAS GC Statistic
c. Geo Replication Statistics
d. Metering Statistics
e. Garbage Collection Statistics
f. EKM
3. Use the metric name drop-down menu to select a metric name, which is
based on the selected metric type.
4. Select level:
Instances specify how many data points to check and how many of them must
match the specified conditions to trigger an alert. For metrics where
historical data is not available, only the latest data point is used.
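A hypothetical sketch of the instances logic: check the last N data points and trigger when at least M of them breach the condition (N=3, M=2, and the sample values below are made up):

```shell
# Hypothetical "2 of the last 3 data points must breach the threshold" check.
threshold=90
points="95 85 92"        # last three samples (made-up values)
matches=0
for p in $points; do
  [ "$p" -gt "$threshold" ] && matches=$((matches + 1))
done
[ "$matches" -ge 2 ] && echo "alert triggered" || echo "no alert"
# prints: alert triggered
```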
7. Select conditions:
You can set the threshold values and alert type with Conditions. The alerts
can be either a Warning Alert, Error Alert, or Critical Alert.
8. To add more conditions with multiple thresholds and with different alert
levels, select Add Condition.
Scenario:
This lab simulates a VDC Temporary Site Outage (TSO). It allows you to see
how ECS reacts to a TSO event, and to observe the behavior with Access
During Outage (ADO) enabled on a federated global bucket that is part of a three
(3) site VDC global replication group. You will also initiate a Permanent Site Outage
(PSO) and observe the failover process.
Simulate a network failure on one of three VDCs that are part of a global
replication group.
Observe the behavior and process that the ECS system goes through to allow
continued access to objects from the other VDC sites in the global replication
group with ADO enabled.
Access existing data objects and write new data objects via the S3 Browser
during the TSO event from the remaining VDC site nodes.
Permanently remove the failed VDC from the global replication group, initiating
a Permanent Site Outage (PSO) via the ECS Portal.
Observe the behavior and process, called failover, that the ECS system uses
to re-protect objects and metadata on the remaining VDCs in the global
replication group via the ECS Portal.
a. If not already logged in, log in to the ECS Portal on ECS Site 1 Luna (192.168.1.5)
using the credentials root/P@ssw0rd!
f. Under the Access During Outage section, select On. Leave the Read-Only
checkbox un-checked.
a. Open the CONSOLES window in your VLP lab environment (left-hand side of
the VLP) to log into ECS Site 1 Luna (192.168.1.5).
c. You are going to use a tool called Network Manager Text User Interface
(nmtui) to disable the network port on the luna node. nmtui allows you to easily
configure network interfaces on a Linux system.
At the command prompt, type the following command: nmtui <return>
d. Use the arrow keys on your keyboard to select the option Activate a
connection. Hit the Enter key on your keyboard.
e. The Wired view will come up, and the ens192 network interface will be
highlighted.
f. Use the right arrow key on your keyboard to highlight the Deactivate option
and hit the Enter key. You will see the option for the ens192 network change to
Activate, which means that the ens192 network interface has been deactivated.
g. Use the down arrow key and select Back and hit the Enter key.
h. Use the down arrow keys to select the Quit option and hit the Enter key on your
keyboard.
Note: Stay logged in to the luna host via the console window, you will use it
later to activate the network port using the nmtui tool.
11. At this point the network port on the luna node is down. Go back into the ECS
Portal on the ECS Site 2 Phobos (192.168.1.6) and select Manage >
Replication Group.
12. At this point, you do not yet have a Temporary Site Outage (TSO). The other
VDC sites take a few minutes to detect the failure and declare the outage.
13. You can also verify the luna VDC site failure by going to the Dashboard View
or the Alert View in the ECS Portal for either the Phobos VDC site and/or the
Deimos VDC site.
In the Alert View click the Acknowledge button under the Actions column for
both the Phobos VDC site (192.168.1.6), and the Deimos VDC site
(192.168.1.7) in their respective ECS Portals.
14. Now that ECS has detected the TSO for the luna VDC site, bring up the S3
Browser application. Select the Accounts tab > Manage accounts.
15. From the Storage Accounts window, select the user1 account and click the
Edit button.
17. You are connected to the phobos node on vdc2_phobos. This is one of the
non-owning VDCs for bucket1, bucket2, and object user user1; vdc1_luna is
the owning VDC site for the buckets and the user.
Selecting bucket1, which has ADO turned On, allows you to access the data
objects in bucket1.
With ADO turned Off on bucket2, this non-owning site is NOT allowed access
to the data objects, and access fails.
18. If you change or modify the S3 account for user1 to point to a node at ECS
Site 3 Deimos, you will see the same behavior, because that site is also a
non-owning VDC site.
b. From the Storage Accounts window, select the user1 account and click the
edit button.
19. Now try and access bucket1 and bucket2 from the deimos node
(192.168.1.7:9021) at ECS Site 3 Deimos VDC. You again will see the same
behavior on bucket1 and bucket2.
20. The next steps fail the VDC known as vdc1_luna and remove it
from the replication group rg_global_luna_phobos_deimos. This process is
known as a Permanent Site Outage (PSO).
21. Select Manage > Virtual Data Center and click the down arrow next to the
Edit button for vdc1_luna. Select Fail the VDC.
22. A Confirm VDC Failure message comes up. Click the checkbox confirmation
to fail the VDC and click the OK button.
23. Refresh the screen and you will see that vdc1_luna has a status of
Permanently Failed.
24. Select Manage > Replication Group and click the down arrow to open up the
rg_global_luna_phobos_deimos replication group. Click the Edit button.
25. Click the Remove button for the vdc1_luna Virtual Data Center.
26. A Confirm Remove VDC message comes up. You must click the checkbox,
and then click the OK button. Then click the SAVE button.
27. Go to Monitor > Geo Replication > Failover Processing to see that the
rg_global_luna_phobos_deimos replication group has gone into a failover
process to sync up the remaining VDCs in this replication group.
NOTE: The failover process may take a few minutes to kick off and show up in
the ECS Portal view and Dashboard; wait about 5 minutes.
On the Dashboard in the ECS Portal under Geo Monitoring section, you can
also see that a Failover is in progress.
You can login to the ECS Site 3 Deimos ECS Portal and go to Monitor >
Geo Replication >Failover Processing to see that a failover process is also
occurring on this VDC.
28. Go to Manage > Replication Group and rename the global replication group:
click the Edit button and change the name of the replication group in the
Name field. You will see that you now have a local replication group and a
two-VDC global replication group.
From: rg_global_luna_phobos_deimos
To: rg_global_phobos_deimos
29. The final step is to delete the failed VDC from the configuration.
1. Go to Manage > Virtual Data Center, select the Edit button for vdc1_luna,
and select Delete.
30. Eventually the Failover process will get to 100% on both the phobos VDC
and deimos VDC indicating the data objects and metadata have been
resynchronized and re-protected.
Clean Up
31. To clean up, you will reconnect the network port of the luna server node.
a. Open the CONSOLES window in your VLP lab environment (left-hand side
of the VLP) to log into the node luna (IP: 192.168.1.5)
d. Use the arrow keys on your keyboard to select the option Activate a
connection. Hit the Enter key on your keyboard.
e. The Wired view will come up, and the ens192 network interface will be
highlighted.
f. Use the right arrow key on your keyboard to highlight the Activate option
and hit the Enter key. You will see the option for the ens192 network change to
Deactivate, which means that the ens192 network interface has been
re-enabled.
g. Use the down arrow key and select Back and hit the Enter key.
h. Use the down arrow keys to select the Quit option and hit the Enter key on
your keyboard. Exit out of the luna node console.
Scenario:
Dell EMC™ ECS GeoDrive™ provides a local file system interface through which
you can store and retrieve files on a Dell EMC™ cloud server. Use GeoDrive to
store and retrieve files (such as pictures, movies, and documents) in the cloud
using the same applications and tools that you use today.
2. After the required items are installed a reboot is required. Save changes you
have made to Notepad++ and close all windows. Click Yes to start the
reboot.
2. When the reboot is finished, select the CTRL+ALT+DEL button and login to
the management jump server:
Login: DELL\Administrator
Password: P@ssw0rd!
3. Login to the ECS Portal on the luna VDC (192.168.1.5) credentials: root /
P@ssw0rd!
4. Create a new S3 object user and a new bucket owned by that object user.
User Name: user6 (Generate an S3 Secret Key for this object user and record
it in Notepad++)
Bucket Information
Bucket Name: bucket7 (bucket owner is object user6)
5. Click Run on the Open File – Security Warning message, click OK on the
language selection window.
6. When the GeoDrive Setup Wizard appears, click Next at the introduction
screen.
c. Clear the optional setting for the Enable GeoDrive Feedback checkbox and
click Install.
7. Click the Windows Start Icon in the lower left-hand corner and click the Dell
EMC GeoDrive.
8. When the GeoDrive application opens up select Hosts and click the Add
button.
d. Secret Access Key: Secret Key for the user6 object user
10. Click the Test button to validate the connection to the ECS node:
11. The Connection Test Results screen appears. You may get a security
certificate error.
Click the Install button to install a certificate into the computer certificate store.
12. Click the Test button again, and you should get the Connection Test Results
with a result of Success. Click the Close button. Then click the OK button.
14. Click the Add GeoDrive Icon and fill in the following information:
a. GeoDrive: select E
d. Click Next
15. Under the Settings section, select the ECS host from the drop-down and select
bucket7 from the Bucket list drop-down. Leave all other settings at their
defaults.
Click Next.
16. On the Logging screen, leave the default setting and click the Finish button.
Drive: E
Host: ECS
Status: Active
18. You can now use the E Drive (GeoDrive) on the Windows Jump Server to
write and read data to/from the ECS Appliance.
Scenario:
Configure a new bucket in ECS and access it as an NFS share from a Linux
host, using a local Linux user. The already-created user1 will be used.
Create a new filesystem bucket and a new user in your Linux host
1. Using PuTTY connect to your Linux box using IP address 192.168.1.8
(Hostname: CentOS8) with the following credentials:
Username: root
Password: P@ssw0rd!
2. If not already logged in, log in to the ECS Site 1 Luna Portal VDC (192.168.1.6)
and create a new bucket with file system access enabled.
c. Namespace: ns1
e. Click Next
4. Open the S3 Browser application and verify that you can see the nfsbucket
bucket with the user1 object user account selected.
5. PuTTY to the CentOS8 node (192.168.1.8) that you will use as the NFS client.
Create a new Linux user “user1” on the CentOS8 node.
a. useradd user1
b. id user1
a. su - user1
a. mkdir nfs
The nfs directory will be used later to mount the NFS export from ECS.
8. Type "logout" or "exit" to return to the root prompt.
If this mapping is not created, then when you mount the ECS NFS share on your
Linux system and list the contents of the directory, raw numeric user and
group IDs are displayed instead of the username and group name of the local
Linux user.
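The numeric ID that the mapping needs is the UID field of the user's entry in the local passwd database. A minimal sketch of pulling it out; the sample entry and UID/GID 1001 are illustrative only, and on the CentOS8 host you would read the live entry with getent passwd user1 (or simply run id user1):

```shell
# The mapping ID is the third field of the user's passwd entry (UID);
# the fourth field is the GID. A sample line stands in here; on the lab
# host, obtain the live line with: getent passwd user1
entry='user1:x:1001:1001::/home/user1:/bin/bash'
uid=$(printf '%s' "$entry" | cut -d: -f3)
gid=$(printf '%s' "$entry" | cut -d: -f4)
echo "uid=$uid gid=$gid"
```

The uid value printed here is the number to enter as the ID in the ECS NEW USER/GROUP MAPPING dialog.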
9. From the ECS Portal (on the luna VDC node) select File and click the NEW
USER/GROUP MAPPING tab. Click NEW USER/GROUP MAPPING
b. Namespace: ns1
c. ID: Enter the number acquired in step 3 of Create a new user in your
Linux host
d. Type: User
e. Click Save
11. Select File, click the Exports tab. Select your namespace, ns1 then click on
NEW EXPORT.
a. Namespace: ns1
b. Bucket: nfsbucket
f. Click Add.
14. Examine the NFS exports from ECS using the following command:
1. showmount -e 192.168.1.6
1. cd /home/user1/nfs
1. su - user1
1. cd nfs
2. ls -la
19. Now you will create a dummy file using the following command:
ls -la
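The dummy-file step above boils down to creating a file in the mounted directory and verifying it with ls. A locally runnable sketch; a temporary directory stands in for the /home/user1/nfs mount point, and touch stands in for whatever file-creation command your lab uses:

```shell
# Create a dummy file and verify it with ls, as in the steps above.
# mktemp -d stands in for the NFS mount point /home/user1/nfs.
workdir=$(mktemp -d)
touch "$workdir/dummyfile.txt"
ls -la "$workdir" | grep -c dummyfile.txt
```

On the real mount, the same touch and ls -la run inside /home/user1/nfs, and the new file also becomes visible as an object in nfsbucket.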
22. Now upload a file from C:\Lab Software\Test Files directory to the
nfsbucket, as shown below:
a. ls -la
Using the ECS Community Edition software and various I/O tools in this lab,
you have become familiar with the following:
ECS Portal
Configure an ECS storage infrastructure
Validate I/O access using S3, Swift, CAS and Hadoop
Explore the use of Retention, ACLs, Bucket Policies, and Quotas
Test I/O client access to ECS using Active Directory service
Explore the Temporary Site Outage (TSO) and Permanent Site Outage (PSO)
with and without Access During Outage (ADO)
Configure and use ECS NFS
Configure and use ECS Geo-Drive
OpenStack Swift
If you wish to limit container1 access, you will need to run some curl commands.
You can run curl by opening an SSH session (with credentials: admin/ChangeMe)
to your primary ECS node, using PuTTY from your virtual desktop.
The following commands assign object user swiftuser1 to group1 and configure
the bucket container1 with group1 permissions. In this example, any users in this
group will have read-only access to container1 after all the commands are run.
1. #Set management variables (4443 is the ECS management API port)
export MANAGEMENT_ENDPOINT=https://<your-ecs-node-ip>:4443
export MANAGEMENT_USER=root
export MANAGEMENT_PASSWORD=P@ssw0rd!
2. #Get authentication token
curl -I -s --location-trusted -k
$MANAGEMENT_ENDPOINT/login -u
"$MANAGEMENT_USER:$MANAGEMENT_PASSWORD"
3. #Set variable for management token
export MANAGEMENT_TOKEN=<token-returned-by-last-command>
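Rather than copying the token by hand, the X-SDS-AUTH-TOKEN header can be extracted from the login response with standard tools. A sketch, in which a sample header line stands in for the live response (the real headers come from the curl -I call in step 2):

```shell
# Extract the X-SDS-AUTH-TOKEN value from response headers.
# A sample header line stands in for the live curl -I output here.
headers='X-SDS-AUTH-TOKEN: BAAcSampleTokenValue123='
export MANAGEMENT_TOKEN=$(printf '%s\n' "$headers" | awk -F': ' '/^X-SDS-AUTH-TOKEN/ {print $2}')
echo "$MANAGEMENT_TOKEN"
```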
4. #Check management group of swift user
curl -s $MANAGEMENT_ENDPOINT/object/user-
password/<swift-username> -k -H "X-SDS-AUTH-
TOKEN:$MANAGEMENT_TOKEN" -H "Accept: application/json"
5. #Set swift login variables
export SWIFT_USER=<swift-username>
export SWIFT_PASSWORD=<swift-password>
export SWIFT_ENDPOINT=https://<your-ecs-node-ip>:9025
6. #Access the container using the swift object user's token
curl -I -s -k -H "X-Auth-Token:$SWIFT_TOKEN" -H
"Accept:application/json"
$SWIFT_ENDPOINT/v1/<ns1>/<swift-container>
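The SWIFT_TOKEN used above is returned in the X-Auth-Token response header of a Swift v1-style authentication request; the endpoint path and header names below follow the conventional Swift v1 form, which ECS supports. A sketch of extracting the token, with a sample header line standing in for the live response:

```shell
# Obtain the swift token. The authentication call itself needs a live node:
#   curl -I -s -k "$SWIFT_ENDPOINT/auth/v1.0" \
#        -H "X-Auth-User: $SWIFT_USER" -H "X-Auth-Key: $SWIFT_PASSWORD"
# The token comes back in the X-Auth-Token header; extract it like this,
# with a sample header line standing in for the live response:
headers='X-Auth-Token: AUTH_tkSampleSwiftToken456'
export SWIFT_TOKEN=$(printf '%s\n' "$headers" | sed -n 's/^X-Auth-Token: //p')
echo "$SWIFT_TOKEN"
```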
You must sign in, or create an account, to access the ECS documentation.
A profile contains the hostname or IP address, a port, and a management user
that is used to authenticate the profile to the host. Profiles are stored as
.json files in the home directory with the name prefix ecscliconfig_. The ECS
CLI uses the active profile to authenticate and send commands. The asterisk
(*) next to a profile name indicates the active profile.
Create and authenticate at least one profile to configure the ECS CLI.
Note: You can create several profiles but only one profile is active at any time.
Procedure:
The ECS CLI configuration handles the -hostname and -port arguments, and the
tokens for subsequent management requests. However, you are required to
authenticate a profile. Profile authentication stores a token which remains active for
24 hours. When the token becomes inactive, you must re-authenticate the profile.
You can also re-authenticate a profile before a token becomes inactive.
Procedure:
ecscli authenticate
Running with config profile: demoprofile User:admin
host:port:10.1.83.51:4443 Password:
Authentication result:admin: Authenticated Successfully
/Users/username/demoprofile/rootcookie: Cookie saved
successfully
Use the most common ECS CLI commands
Example:
"strawberry",
"version": "3.0.0.0.86239.1c9e5ec"
},
{
"ip": "10.245.137.86",
"isLocal": false, "nodeid": "10.245.137.86",
"nodename": "logan-strawberry.ecs.lab.emc.com", "rackId":
"strawberry",
"version": "3.0.0.0.86239.1c9e5ec"
},
{
"ip": "10.245.137.87",
"isLocal": false, "nodeid": "10.245.137.87",
"nodename": "lehi-strawberry.ecs.lab.emc.com", "rackId":
"strawberry",
"version": "3.0.0.0.86239.1c9e5ec"
},
{
"ip": "10.245.137.88",
"isLocal": false, "nodeid": "10.245.137.88",
"nodename": "murray-strawberry.ecs.lab.emc.com", "rackId":
"strawberry",
"version": "3.0.0.0.86239.1c9e5ec"
}
]
}
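JSON output like the node listing above can be filtered quickly with grep and cut when a JSON tool such as jq is not available. A sketch, using an inline sample of the same shape in place of the saved command output:

```shell
# Pull the nodename values out of ecscli-style JSON output.
# A here-doc with the same field shape stands in for the saved output.
cat > /tmp/nodes_sample.json <<'EOF'
{"nodename": "logan-strawberry.ecs.lab.emc.com", "rackId": "strawberry"}
{"nodename": "lehi-strawberry.ecs.lab.emc.com", "rackId": "strawberry"}
EOF
grep -o '"nodename": "[^"]*"' /tmp/nodes_sample.json | cut -d'"' -f4
```

This prints one hostname per line, which is handy for feeding node names into follow-up commands or loops.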
ecscli -h
The ecscli command line tool has a configuration profile that will handle the
optional args (ie hostname, port, cookie). However, a top level command is
required possibly followed by a subcommand and options for that. Please use -h
for a list of commands and info.
positional arguments:
{config,authenticate,authentication,baseurl,billing,bucket,
cas,datastore,failedzones,keystore,meter,mgmtuserinfo,monit
or,nodes,objectuser,objectvpool,nfs,secretkeyuser,system,na
mespace,varray,vdc_data,vdc,passwordgroup,dashboard,transfo
rmation,vdc_keystore}
Use One Of Commands
config ecscli profile configuration
authenticate Authenticate ECS user
authentication Operations on Authentication
baseurl Operations on Base URL
billing Operations to retrieve ECS billing information
bucket Operations on Bucket
cas Operations on CAS profile
datastore Operations on datastore
failedzones Get failed zone information
keystore Operations on keystore
meter Get metering statistics for the given time bucket
mgmtuserinfo Operations on Mgmtuserinfo
monitor Get monitoring events for the given time bucket
nodes Operations to retrieve ECS datanodes information
objectuser Operations on Objectuser
objectvpool Operations on ObjectVPool
nfs Operations on NFS
secretkeyuser Operations on Secretkeyuser
system Operations on system
namespace Operations on Namespace
varray Operations on varray
vdc_data Operations on VirtualDataCenter
vdc Operations on VirtualDataCenter
passwordgroup Operations on Passwordgroup
dashboard Operations on replication group links
transformation Operations on Centera transformation
vdc_keystore Operations on vdc keystore certificate
optional arguments:
-h, --help show this help message and exit
-hostname <hostname>, -hn <hostname>
Hostname (fully qualifiled domain name) or IPv4
address (i.e. 192.0.2.0) or IPv6 address inside quotes
and brackets (i.e. "[2001:db8::1]") of ECS
-port <port_number>, -po <port_number>
port number of ECS
-cf <cookiefile>, -cookiefile <cookiefile>