MR-1CN-ECSMGTMON Lab Guide
Monitoring
Lab Guide
November 2016
THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR
WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS
IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. The
trademarks, logos, and service marks (collectively "Trademarks") appearing in this publication are the property of EMC Corporation
and other parties. Nothing contained in this publication should be construed as granting any license or right to use any Trademark
without the prior written permission of the party that owns the Trademark.
LAB EXERCISE 3: BASIC TESTS OF I/O ACCESS FROM VARIOUS DATA CLIENTS
LAB 3: PART 1 – CREATE ECS NAMESPACES, LOCAL USERS AND BUCKETS
LAB 3: PART 2 – PERFORMING ECS METADATA SEARCH
LAB 3: PART 3 – TEST I/O ACCESS TO ECS FROM THE AWS S3 BROWSER
LAB 3: PART 4 – TEST I/O ACCESS TO ECS FROM CYBERDUCK (OPENSTACK SWIFT OBJECTS)
LAB 3: PART 5 – PUT AND GET CENTERA C-CLIPS FROM ECS USING CAS TOOLS
LAB 3: PART 6 – TEST “DATA-IN-PLACE” ACCESS TO S3 DATA WITHIN ECS FROM HADOOP
LAB 4: PART 1 – TEST ACLS WITH LOCAL OBJECT USERS IN ECS
LAB 4: PART 2 – DEFINE ECS RETENTION POLICIES AND STUDY THEIR EFFECT
LAB 4: PART 3 – ADVANCED RETENTION MANAGEMENT
LAB 4: PART 4 – CONFIGURE AND VERIFY ENFORCEMENT OF ECS QUOTAS
LAB 5: PART 1 – REVIEW THE CONFIGURATION OF AN ACTIVE DIRECTORY SERVER
LAB 5: PART 2 – ADD ACTIVE DIRECTORY SERVER AS AN ECS AUTHENTICATION PROVIDER
LAB 5: PART 3 – CONFIGURE ECS NAMESPACES WITH DOMAIN GROUPS FOR MULTI-TENANCY
LAB 5: PART 4 – VERIFY I/O ACCESS TO ECS FROM TENANT USERS
LAB 6: PART 1 – VIEW ECS MONITORING DATA AND PERFORM BASIC HEALTH CHECKS
Purpose: Review lab guide for this class, and establish a Remote Desktop
session to your management station
• 1 x Windows Management Station. This server is where you are going to perform most of
the lab exercises. It provides access into the other components of this lab. This server also
provides DNS and AD services to the environment.
• 1 x Linux server running a Hadoop node. This node runs HortonWorks HDP 2.3. This
environment provides the Hadoop File System that will be used with the ECS ViPRFS
Client.
• Two sites with one ECS node each. Each node is a VM running ECS 3.0 software (ECS
Community Edition, single node). Real-world ECS installations require a minimum of four
nodes; this one-node install is for demonstration purposes only and has limited
functionality. Although this is a virtual environment, all lab exercises behave as they
would on a real-world ECS installation.
Step Action
1 Your instructor should have assigned you an ECS lab pod number, and given you the lab
configuration sheet showing the IP addresses of various components within that pod. If
you don’t have either of these, contact your instructor.
From your lab configuration sheet, write down the information below for your pod. You
will need it for lab access throughout this class:
2 VDC Login Credentials (VDC is the platform used to access the lab equipment)
You should receive instructions from your instructor on how to login to the EMC
Education Services VDC (Virtual Data Center). Write down the following information
from your instructor:
Note: If you are using a personal laptop, the Citrix Receiver and XenApp (www.citrix.com)
applications must be installed in order to access the VDC.
3 At this point, you are logged in on your management station, from which you have
convenient access to all needed tools, and every other host in your pod.
You may disconnect now from your Remote Desktop by closing the RDP window. This
leaves the session up and running and you can connect back in to the same session at
any time using the Administrator/P@ssw0rd credentials.
Step Action
1 Open the Google Chrome browser on your management station, and type the IP address
(192.168.73.54) of the site1 vECS-1 node in the address bar.
2 If there is a security certificate error, click Advanced and then click Proceed (unsafe).
4 Once authenticated, take a moment to expand and explore the following options:
Dashboard, Monitor, Manage, and Settings. These options are located on the left side
of the screen.
Purpose: Using the ECS web portal, configure the core ECS storage
infrastructure elements for your system: Storage Pool(s), VDC(s) and
Replication Group(s)
Create VDC(s)
Step Action
1 Bring up a browser and enter the IP address (192.168.73.54) of the site1 vECS-1
node. This will bring you to the ECS Portal login screen. Provide the credentials below to
log into the remote ECS Portal:
2 When you login to the ECS portal for the first time, the GETTING STARTED checklist is
invoked. Since you will configure the system following the lab guide, click:
Name: pod#site1sp1 (where "#" is your Pod number, and the "1" at the end
indicates that this is the first storage pool you are creating. Example: pod1site1sp1)
From the Available Nodes field, select all nodes available (a minimum of 1 node is
required) and click the Add icon “+” to add them to the Selected Nodes area.
Notice the host name of your ECS node. Each node has a unique default name, and each
rack has a unique color. Together, these values make up the node name, which cannot be
changed. See the appendix at the end of this lab guide for more information.
5 When the nodes are selected click Save to create the storage pool.
6 Warning! Creating the storage pool is a time-sensitive step. You must allow a
minimum of 15 minutes for it to complete. The storage pool may show Ready as its
status, but do not proceed to the next lab exercise until at least 15 minutes have
elapsed since you clicked Save. The status may show Partially Ready when only one
node is selected.
Step Action
1 Go to the Virtual Data Center Management page by navigating to Manage > Virtual Data Center.
Before creating the VDC, an Access Key must be generated. Click Get VDC Access Key.
2 When the key is generated, copy it, since it will be required in the next step. Open a new
Notepad window and paste the Access Key with the <Ctrl>+<V> keys together.
3 Proceed to create the VDC by selecting Virtual Data Center under Manage and clicking
New Virtual Data Center.
4 On the New Virtual Data Center page, enter the following information to successfully create a VDC
within your assigned ECS Appliance:
Name: pod#site1vdc1 (where "#" is your Pod number, and the "1" at the end indicates that this is
the first VDC you are creating. Example: pod1site1vdc1)
Replication Endpoints: Enter the public IP address of each node in the VDC's storage pools
(192.168.73.54). Supply them as a comma-separated list.
Management Endpoints: Enter the public IP address of each node in the VDC's storage pools
(192.168.73.54). Supply them as a comma-separated list.
Warning! Allow at least five minutes for the VDC to become available before proceeding onto the
next lab exercise.
Step Action
1 Navigate to Manage > Replication Group to open the Replication Group Management
page. Click New Replication Group to create a replication group for your pod.
2 On the New Replication Group page, enter the name of your Replication Group
Name: pod#site1rg1 (where "#" is your Pod number, and the "1" at the end indicates
that this is the first replication group you are creating. Example: pod1site1rg1)
Click Add VDC and select the VDC (Created in Lab1-Part2) and Storage Pool (in Lab1-Part1) from
the drop-down list.
Once the replication group has been created, its status should be Online. Contact your
instructor if it is not.
Step Action
1 Bring up a browser and enter the IP address (192.168.73.56) of the site2 vECS-2 node. This will
bring you to the ECS Portal login screen. Provide the credentials below to log into the remote ECS Portal:
2 To create a storage pool which will be a part of the VDC that we will create, go to the Storage Pool
Management page by navigating to Manage > Storage Pools and click New Storage Pool.
In the Available Nodes field, select the remote node and click the Add icon “+” to add it to the
Selected Nodes area.
Warning! Creating the storage pool is a time-sensitive step. You must allow a minimum of 15
minutes for it to complete. The storage pool may show Ready as its status, but do not proceed
to the next lab exercise until at least 15 minutes have elapsed since you clicked Save. The
status may show Partially Ready when only one node is selected.
4 Go to the Virtual Data Center Management page by navigating to Manage > Virtual Data Center.
Before creating the VDC, an Access Key must be generated. Click Get VDC Access Key. When the key
is generated, copy it to Notepad.
5 Once you have copied the site 2 key to Notepad, you can close the vECS-2 Portal to avoid any
confusion later. We will not need the vECS-2 portal again.
7 On site1 vECS-1 go to the Virtual Data Center Management page by navigating to Manage > Virtual
Data Center. Click New Virtual Data Center to create a virtual data center.
8 On the New Virtual Data Center page, enter the following information to create a VDC within your
assigned ECS Appliance:
Key: <Paste the Access Key generated for site2 from step 6>
Click Save.
9 The VDC Federation is successfully created, which is shown by two VDCs with two different endpoints.
10 To create a global replication group for the VDC Federation, go to the Replication Group
Management page by navigating to Manage > Replication Group. Click New Replication Group to
create a replication group.
11 On the New Replication Group page, enter the name of your Replication Group
Name: pod#site2rg2 (where "#" is your Pod number, and the "2" at the end indicates that this is the
second replication group you are creating. Example: pod1site2rg2)
Click Add VDC and select both the VDCs (in the primary instance and remote instance) and corresponding
Storage Pools (in the primary instance and remote instance) from the drop-down list.
Click Save.
Note: When you go to select the second site VDC, it may be shown as Temporarily Unavailable. Wait
for it to become available so you can select the Storage Pool.
Purpose: Using readily available data clients, test basic I/O access by
performing "CRUD" operations on ECS data repositories (commonly
referred to as "buckets")
• Create an object user. Then, generate and retrieve S3 Access Key for that user.
• Create a bucket, and assign the object user as the bucket owner.
Step Action
1 Using http://vdc.emc.com, log in to the VM that you have been assigned, using the
username and password that were shared with you.
You will perform all the lab exercises from this management station.
2 Using the Chrome browser, log in to the site1 vECS-1 portal at 192.168.73.54 using the
credentials below:
3 Navigate to Manage > Namespaces and on the Namespace Management page, click
New Namespace.
Name: pod#ns1 (where "#" is your Pod number, and the "1" at the end indicates
that this is the first namespace you are creating)
Note: A namespace can have more than one admin user. If there are
multiple admin users, enter comma separated user names in the User
Admin field. In this lab, we will keep things simple and make the root
user the namespace admin
Leave the remaining namespace options and configuration to the default value for this
lab.
Click Save.
5 After successful creation of a namespace, notice that it gets listed on the Namespace
Management page, as shown below.
You can, at any time, use the Edit action to modify the namespace properties. But note
that the namespace name cannot be modified once created. You must delete the
namespace using the Delete action and recreate a new namespace with the desired name.
6 Now, we need to create a user who can own a bucket and perform read and write
operations in it.
ECS has two types of user roles: Management users, who can perform ECS
administrative operations, and Object users, who can access ECS object storage for
CRUD operations (create, read, update and delete).
So let’s create a new object user for the namespace that we created in the previous
step. We will then use the object user to perform I/O operations on the bucket that we
will be creating soon.
Name: pod#ouser1 (where "#" is your pod number; the "1" indicates that this is the first
object user you are creating)
An object user is mapped to a namespace, confining the user’s access only to the
buckets in the namespace the user is mapped to.
The Object Access section has options to generate passwords for the various clients (S3,
Swift and CAS) that are supported for ECS object store access.
9 Click the S3 secret access key field (screenshot shown above). Press <Ctrl>+<A> to select
the key and <Ctrl>+<C> to copy it; paste it into an editor such as Notepad.
You will need this key later to create an S3 account, and access the ECS object store
using S3 Browser application.
10 Now that we have an object user created, let’s create a bucket with this object user as
the bucket owner.
Name: pod#bucket1 (# is your Pod number and the "1" is just our naming
convention, implying this is the first bucket that is created)
Bucket Owner: pod#ouser1 (the object user name you created. The bucket
owner will have the ability to modify bucket ACL and thus provide/remove
bucket access to other object users in the namespace)
13 Below are the other bucket configuration options. For now, leave all of these at default
values. You will experiment with some of these options in a later lab.
Click Save.
14 Upon successful creation of bucket, you can see the bucket listed on the Bucket
Management page as shown below.
Note that you can filter and view the buckets in a particular namespace by selecting the
namespace from the Namespace drop-down list.
You cannot modify the bucket name, replication group and namespace attributes of a
bucket. But the Edit bucket option, under the Actions list, will allow you to change other
bucket properties like bucket owner, quota, ACLs, etc. which you will explore in
subsequent lab exercises.
Step Action
2 Enter the following details for the new bucket (# indicates your Pod number).
Name: pod#bucket2
Namespace: pod#ns1
Replication Group: pod#site2rg2
Bucket Owner: pod#ouser1
4 To configure Metadata Search keys, the namespace admin must know the metadata
attributes that are required to be searchable. While system metadata attributes are
available to be selected, user metadata keys need to be manually entered.
image-width (integer)
image-height (integer)
image-viewcount (integer)
gps-latitude (decimal)
gps-longitude (decimal)
Click Add.
In the Key Name field, complete the name x-amz-meta-image-width. The x-amz-meta-
prefix is already filled in; type only the remainder of the name.
Click Add.
6 To configure Additional Search keys, repeat the previous step for the remaining four
metadata search keys.
image-height integer
image-viewcount integer
gps-latitude decimal
gps-longitude decimal
When the five keys are complete, scroll down, and then click Save.
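The five keys above all follow the S3 user-metadata convention: each short name is stored and queried under the standard x-amz-meta- header prefix. A minimal illustrative sketch (plain Python, no ECS connection required):

```python
# Illustrative sketch: build the full ECS metadata search key names.
# ECS indexes user metadata under the standard S3 "x-amz-meta-" prefix,
# so the five short names from this lab become the headers shown below.
SHORT_KEYS = {
    "image-width": "integer",
    "image-height": "integer",
    "image-viewcount": "integer",
    "gps-latitude": "decimal",
    "gps-longitude": "decimal",
}

def full_key(short_name):
    """Return the full header name that ECS indexes for a user metadata key."""
    return "x-amz-meta-" + short_name

for name, dtype in SHORT_KEYS.items():
    print(full_key(name), "(" + dtype + ")")
```

An S3 client that uploads an object with, for example, an x-amz-meta-image-width header can later be located through metadata search using that key.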
7 Verify that you have created an object user and provisioned two new buckets for the
teams in the media unit. They will now use the object user to ingest and access data.
• Perform CRUD operations on ECS buckets as an object user who you created in the
previous lab
Step Action
When installing, accept all the defaults and choose the Create shortcut to Desktop option.
That way, it will be easy to launch the application when required.
Account Name: pod#ouser1 (your object user name; # is your pod number)
Storage Type: S3 Compatible Storage
REST Endpoint: 192.168.73.54:9021
The endpoint can be the IP address of any one of the nodes you have
configured in the storage pool of your VDC. ECS has a specific port
number designated for each client interface; the ECS S3 interface uses
port 9020 for HTTP and port 9021 for HTTPS connections.
Access Key ID: pod#ouser1 (# is your pod number)
Secret Access Key: <S3 secret access key>
This is the S3 secret access key of the object user that you generated in
Lab 3 Part 1 (Step 9) and copied to Notepad from the user management
screen in the ECS Portal.
Note: # is your Pod number. See below for an example of how to fill in each field.
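Behind these fields, S3-compatible clients such as S3 Browser authenticate every request by signing it with the Secret Access Key; the classic scheme is AWS Signature Version 2, an HMAC-SHA1 over a canonical string, sent in the Authorization header. The sketch below is a simplified illustration (it omits x-amz- header canonicalization), and the credentials are placeholders, not values from this lab:

```python
import base64
import hashlib
import hmac

def sign_v2(secret_key, method, date, resource, content_md5="", content_type=""):
    """Simplified AWS Signature V2: HMAC-SHA1 over the canonical string, base64-encoded."""
    string_to_sign = "\n".join([method, content_md5, content_type, date, resource])
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(), hashlib.sha1)
    return base64.b64encode(digest.digest()).decode()

# Placeholder credentials -- a real client uses the Access Key ID and the
# secret key copied from the ECS Portal user management screen.
access_key_id = "pod1ouser1"
secret_key = "EXAMPLE+SECRET/KEY"
signature = sign_v2(secret_key, "GET", "Wed, 01 Nov 2016 12:00:00 GMT", "/pod1bucket1/")
print("Authorization: AWS %s:%s" % (access_key_id, signature))
```

The signed request itself goes to the REST endpoint, i.e. port 9020 for HTTP or 9021 for HTTPS on any node in the storage pool.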
4 Soon after you add the account, the S3 Browser shows the 2 buckets pod#bucket1 and
pod#bucket2 that were created in the previous lab. You can see that in the left pane,
below.
This is because the object user was set as the bucket owner when the bucket was
created. Other object users in the same namespace cannot view this bucket until the
bucket owner modifies the ACL to allow a new object user to view and operate on a
bucket.
You may see a task that has failed, in the Tasks pane at the bottom of the S3 Browser,
as shown below. It is related to S3 Browser and does not concern ECS. So you may
ignore this error and proceed.
5 If you click the Permissions tab in the bottom pane, you can see that the object user
has the Full Control permission set on both buckets, since the bucket owner, by default,
has full access over the bucket.
You will experiment with bucket permissions, also known as ACLs (Access Control
Lists), for different object users in a later lab.
6 Now, click the Upload button and choose Upload file(s) to upload a file to the bucket.
Note: Use any of the files in the C:\lab\Files folder for testing uploads and downloads.
7 Now, upload more files into the bucket and try to download them using the Download
button. You can also delete files using the Delete button.
Step Action
Click Manage > Users > Object Users, and then click New Object User.
Create an object user named swiftuser1 to connect to ECS using Swift.
3 Groups: admin
5 Install Cyberduck from the C:\Lab directory on your lab machine’s virtual desktop.
During the installation do not install Bonjour. Accept other defaults.
Open Cyberduck.
Close the dialog box with the X when done and settings will be saved.
9 Select the Always Trust check-box and then click Continue if there is a warning about
an invalid certificate. Click yes if prompted to install a security certificate.
12 The container will be created and available for file upload, download, and delete. It will
appear in the ECS Portal as a bucket. Be sure to select the namespace the bucket was
created in and verify in your ECS instance that the new bucket was created.
ECS Portal:
If prompted about an invalid certificate, click Continue. This will copy the file to the
container, as shown below.
15 Configure Cyberduck for swiftuser2 by repeating steps 5 to 10 of this lab exercise
using the information below. Choose Swift (OpenStack Object Storage) as the
connection type. For Tenant ID:Access Key, use pod#ns1 (where # is your pod number).
17 You will see the container created by swiftuser1. This is because any ECS user with a
configured Swift password is placed by default in the admin group, and has full
permissions to all Swift containers.
18 (OPTIONAL STEP)
This step is optional and may be performed or simply reviewed.
If you wish to limit container1 access, you will need to run some curl commands. You
can run curl by opening an SSH session (with credentials root/P@ssw0rd) to your
primary ECS node, using PuTTY in your virtual desktop.
The following commands assign object user swiftuser1 to group1, and configure the
bucket container1 with group1 permissions. In this example, any users in this group will
have read-only access to container1 after all the commands are run.
Step Action
Namespace: pod#ns1
2 In ECS Portal Manage > Users create a new object user named pod#casuser (where # is
your pod number) using the existing namespace and click Next to Add Passwords.
4 Copy the content of PEA File generated to the clipboard (Select the text and press
<CTRL> + <C>).
In Windows Explorer open Notepad and save the contents in a file named pea.p to your
Desktop. Click Close
5 From the ECS Portal, navigate to Manage > Buckets. On the Bucket Management page,
select your namespace so your buckets are listed. Once selected, open the
corresponding Actions drop-down list and choose Edit ACL for pod#casbucket (where #
is your pod number).
7 Fill in the User Name field with the CAS user name you’ve created in step 2 of this lab
exercise.
Be sure pod#casuser (where # is your pod number) has Full Control checked on the
bucket and click Save.
8 Go back to the Manage > Users page and edit the user pod#casuser.
Click Close.
10 Right-click the Start menu icon and open the Run box.
Type cmd and press Enter.
Right-click the window's title bar and select Properties.
On the Options tab, under Edit Options, enable Quick Edit Mode (allowing copy and paste).
poolOpen 192.168.73.54?pea.p
Note: The command shown uses the relative path to the PEA file. Alternatively, the
absolute path can be specified using the following command:
13 In your Windows VM, copy the file C:\Lab\Test.txt to the C:\JCASScript-win32-3.2.35
directory.
14 Transfer the file and save it on the ECS in a clip in CAS bucket, run the command:
fileToClip Test.txt
Using your mouse, highlight and copy the new clip ID returned by the
“fileToClip” command; this is the “<contentAddress>”.
This saves the clip to a file named “savedclip.txt” in your local
C:\JCASScript-win32-3.2.35 directory.
20 Type exit and press Enter to quit JCASScript. Close the command window.
For simplicity, our system is predicated on a single Hadoop node, running HortonWorks HDP 2.3. For
management of Hadoop, you will access the Ambari Management portal via HTTP. In order to access ECS
storage, your Hadoop node will need to access the ECS appliance nodes via IP and be configured with
access to an S3 bucket with file system access enabled.
Hadoop has two authentication modes: Simple and Kerberos. In our lab, we will implement Simple
authentication. With Simple authentication, Unix users connect to ECS, appear as
‘anonymous’ to ECS, and have full control of the data space in the ECS bucket configured for HDFS.
Kerberos uses kinit, and users must be specifically granted permission to areas within a data space.
Please refer to your pod’s IP configuration document for the IP address of your Hadoop node.
From your ECS management portal, access the Manage > Buckets page and create a
new bucket for both S3 and HDFS access with the following parameters:
2 From the Manage > Buckets main page, on your created bucket, click Edit ACL from
the Actions drop-down list.
3 From the Bucket ACLs Management page, select Group ACLs. Add group name public
and provide all permissions
Now that we’ve set these parameters on our bucket, let’s configure Hadoop to access
ECS.
5 From the Windows command prompt copy the zip file to your Hadoop instance using the
“pscp” command as shown below:
cd \Lab
pscp hdfsclient-2.2.1.0.77331.4f57cc6.zip [email protected]:/var/tmp
if asked to “Store key in cache?” press: Y
if asked for Password type: hadoop
User: root
Password: hadoop
7 Unzip the hdfsclient archive you copied with pscp to /var/tmp and locate the latest jar file:
cd /var/tmp
ls -lia
unzip hdfsclient-2.2.1.0.77331.4f57cc6.zip
cd viprfs-client-2.2.1.0.77331.4f57cc6
cd client
ls -li
8 Copy the jar file to the library directory in Hadoop’s classpath. First, determine what the
classpath is:
hadoop classpath
/usr/hdp/2.3.0.0-2557/hadoop/conf:/usr/hdp/2.3.0.0-2557/hadoop/lib/*:/usr/hdp/2.3.0.0-2557/hadoop/.//*:/usr/hdp/2.3.0.0-2557/hadoop-hdfs/./:/usr/hdp/2.3.0.0-2557/hadoop-hdfs/lib/*:/usr/hdp/2.3.0.0-2557/hadoop-hdfs/.//*:/usr/hdp/2.3.0.0-2557/hadoop-yarn/lib/*:/usr/hdp/2.3.0.0-2557/hadoop-yarn/.//*:/usr/hdp/2.3.0.0-2557/hadoop-mapreduce/lib/*:/usr/hdp/2.3.0.0-2557/hadoop-mapreduce/.//*:::/usr/share/java/mysql-connector-java-5.1.17.jar:/usr/share/java/mysql-connector-java-5.1.31-bin.jar:/usr/share/java/mysql-connector-java.jar:/usr/hdp/2.3.0.0-2557/tez/*:/usr/hdp/2.3.0.0-2557/tez/lib/*:/usr/hdp/2.3.0.0-2557/tez/conf
Note: In a multi-node Hadoop cluster, you will need to copy this jar file to every node,
and to the same lib directory.
cp -p viprfs-client-2.2.1.0-hadoop-2.7.jar /usr/hdp/2.3.0.0-2557/hadoop/lib/
10 Back up the core-site.xml file. The core-site.xml file contains properties specific to
components of Hadoop, i.e., MapReduce, HDFS, etc. In HortonWorks Hadoop, the
core-site.xml file is located in /etc/hadoop/conf:
cp -p /etc/hadoop/conf/core-site.xml /etc/hadoop/conf/core-site.xml.orig
Username: admin
Password: admin
When you first log in to Ambari, you’ll be presented with the dashboard. On the left-
hand side of the screen you will see a list of services. From this menu, select HDFS.
13 Next, open the Service Actions drop-down list and select Stop.
16 Once both services have been stopped, it’s time to edit the core-site.xml file via the
Ambari interface. From the left-hand menu, select HDFS, and then select the Configs
tab. Next, select the Advanced tab.
Scroll down to the bottom of the page and select/open the Custom core-site menu.
17 Add ViPR/ECS-specific values to Custom core-site. The core-site XML defines key/value
pairs; in core-site.xml itself, each pair is written as a <property> element containing a
<name> and a <value>. With Ambari, however, all you need to supply is the name of the
key and the corresponding value.
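For reference, a key/value pair written directly into core-site.xml (rather than through Ambari) uses Hadoop's standard property markup. A sketch using one of the values from this lab:

```xml
<configuration>
  <property>
    <name>fs.viprfs.impl</name>
    <value>com.emc.hadoop.fs.vipr.ViPRFileSystem</value>
  </property>
</configuration>
```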
18 Add the following properties to Custom core-site, via the Ambari interface. You can cut
and paste from here, modifying for your environment where needed.
Key Value
fs.AbstractFileSystem.viprfs.impl com.emc.hadoop.fs.vipr.ViPRAbstractFileSystem
fs.permissions.umask-mode 022
fs.vipr.installation.<Pod#>.hosts <comma-separated node IPs>
Example: fs.vipr.installation.Pod1.hosts 10.126.67.13,10.126.67.14,10.126.67.15,10.126.67.16
fs.vipr.installation.<Pod#>.resolution dynamic
Example: fs.vipr.installation.Pod1.resolution dynamic
fs.vipr.installation.<Pod#>.resolution.dynamic.time_to_live_ms 900000
Example: fs.vipr.installation.Pod1.resolution.dynamic.time_to_live_ms 900000
fs.vipr.installations <Pod#>
Example: Pod1 (this value is case-sensitive)
fs.viprfs.auth.anonymous_translation CURRENT_USER
fs.viprfs.auth.identity_translation NONE
fs.viprfs.impl com.emc.hadoop.fs.vipr.ViPRFileSystem
In the pop-up window, leave the Notes field blank and click Save again.
20 First, start the HDFS services which you stopped prior to adding your core-site
key/values.
Click the service name, then from Service Actions, select Start.
Open your S3 browser using the pod#ouser1 account. Select the pod#hdfsbucket and
click the Upload button to upload a file to the bucket.
Note: Use any of the files in the C:\Lab\Files folder for this test.
23 Return to the Putty session (SSH connection) in your Hadoop instance. Run the
following commands and verify the object created:
Example (namespace pod1ns1):
hdfs dfs -ls viprfs://pod1hdfsbucket.pod1ns1.Pod1/
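The viprfs URI in the command above is assembled from three parts: the bucket name, the namespace, and the case-sensitive installation name set in core-site (Pod1 in the example). A small illustrative sketch:

```python
def viprfs_uri(bucket, namespace, installation, path="/"):
    """Build a ViPRFS URI: viprfs://<bucket>.<namespace>.<installation><path>.

    The installation name must match, case-sensitively, the value used in
    the fs.vipr.installation.* properties configured in core-site.
    """
    return "viprfs://%s.%s.%s%s" % (bucket, namespace, installation, path)

print(viprfs_uri("pod1hdfsbucket", "pod1ns1", "Pod1"))
```

Pass a different path argument (e.g. "/tmp1") to address a directory inside the bucket, as in the later steps of this lab.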
24 Now test the integration from your Hadoop node with CLI.
From the Putty session in your Hadoop node, test the following commands:
hadoop fs -ls /
25 b) Verify connectivity to ECS storage. First, create a tmp directory and then verify that
the directory has been created.
Example:
hdfs dfs -mkdir viprfs://pod1hdfsbucket.pod1ns1.Pod1/tmp1
26 c) Write data to your ECS bucket. For simplicity, write the zipped jar file for ViPR.
Example:
hdfs dfs -copyFromLocal /var/tmp/hdfsclient-
2.2.1.0.77331.4f57cc6.zip
viprfs://pod1hdfsbucket.pod1ns1.Pod1/tmp1/
Example:
hdfs dfs -ls viprfs://pod1hdfsbucket.pod1ns1.Pod1/tmp1
28 Return to the S3 Browser and verify that the data was written.
• Create a second, new object user in the existing namespace you created in the previous lab.
• Modify the bucket ACL to provide access to the new object user.
• Using the S3 Browser, verify that the ACL defined is regulating read/write access as you expected.
Step Action
1 Login to the Primary vECS-1 Portal at 192.168.73.54 using the following credentials:
3 Next, create an account for this object user in S3 Browser. Because we are using the
free version of S3 Browser, we are only allowed to have two accounts.
Account Name pod#ouser2 (Your object user name. # is your pod number)
Storage Type S3 Compatible Storage
REST Endpoint 192.168.73.54:9021
The endpoint can be the IP address of any one of the nodes you have configured in the storage pool of your VDC. ECS has a specific port number designated for each client interface; the ECS S3 interface uses port 9020 for HTTP and port 9021 for HTTPS connections.
Access Key ID pod#ouser2 (# is your pod number)
Secret Access Key <S3 secret access key>
The S3 secret access key of the object user that you copied from the User Management screen in the ECS Portal.
Note: # is your pod number. See below for an example of how to fill-in each field:
4 After completing the previous step, you will now be using the new S3 account that was
just created for the pod#ouser2 user.
5 In the Bucket Explorer pane, S3 Browser will automatically list only the bucket(s) owned
by this pod#ouser2 user. To view other buckets that the same user has access to (via
ACLs), you must use the Add External Bucket under the Buckets menu of the S3
Browser.
6 Enter the name of the bucket you created in the previous lab (pod#bucket1) and click Add
External bucket.
8 Select the bucket to view the contents. You will see the following popup message.
Click Yes.
9 What do you see? You receive an error stating “Access Denied.” Click OK.
This error occurs because pod#ouser2 does not have read access on the bucket:
10 Now we will check what the bucket ACL looks like in the ECS Portal.
12 Choose the Edit ACL option from the Actions drop-down list of pod#bucket1.
13 You can see that there are two types of bucket ACLs shown:
User ACL - enables the admin user to grant read and write privileges on a bucket
to an object user.
Group ACL - allows you to set permissions for a set of pre-defined groups.
We will first test User ACLs and then move on to Group ACLs.
As in the example below, you can see that by default, the User ACL has an entry for the
bucket owner with Full Control permission.
14 We want the pod#ouser2 user to read bucket contents, so we will add a new rule for
this user.
You can see a list of available permissions. Unselect all the permissions except for Read.
We will assign only read privilege to the user.
Click Save.
15 Upon successful creation of the rule, you can see that the object user was added to the
User ACL list as seen below:
Now you can see the files that you had uploaded to pod#bucket1 as pod#ouser1 user in
the previous lab.
Now, try to upload a file to pod#bucket1 as the pod#ouser2 user. Did you succeed?
No, because pod#ouser2 does not have write permission on the bucket. You can see
the “Access Denied” error in the Tasks pane at the bottom of S3 Browser, as shown below:
Experiment with various ACL permissions and test how they affect the operations you
can perform from the S3 Browser.
18 You tested how you could use ACLs to give a user permission for bucket access.
Now you will see how Group ACLs can be used to grant permissions to large,
pre-defined groups of users.
19 Let’s first try the All users Group ACL. For this, you need to create a new object user in
the ECS Portal. From the ECS Portal, create a new object user:
20 Now, add a new Group ACL rule to allow all users to perform read operation. In the ECS
Portal, navigate to Manage > Buckets.
Then, select Edit ACL from the Actions drop-down list for the pod#bucket1 bucket.
You can see that the Group ACL does not have any rules. Click Add.
Unselect all permissions except the Read permission, and click Save.
This rule will provide read permission on the bucket to all authenticated users.
23 Now, your Group ACL will appear as it does in the example below:
24 Now that you have Read permission set on the bucket for all authenticated users in the
same namespace, try to read this bucket as pod#ouser3 using the S3 Browser.
Note: S3 Browser free edition will allow a maximum of two accounts. Therefore, you will
receive a warning when you try to add a new account for pod#ouser3. Click No when
the pop-up message appears.
27 Select Add external bucket to have the pod#bucket1 listed on the bucket explorer pane.
Now you can see that pod#ouser3 is able to read the bucket. Note that there is no
ACL that specifically grants access to this particular user; the All Users Group ACL
enabled the user to read the bucket.
28 Let’s also experiment with the public Group ACL. Adding permission to this group
enables even anonymous, or unauthenticated, users to access the bucket.
S3 Browser will not allow us to create an account without any credentials. So we'll use
the curl command-line utility to test public access.
29 Connect to your vECS-1 node at 192.168.73.54 using PuTTY. (The PuTTY executable is
located at C:\lab\putty on your management station.)
Login: root
Password: P@ssw0rd
30 Issue the curl command below, which is an anonymous request to read the
pod#bucket1 bucket.
curl https://10.126.67.23:9021/pod#bucket1/ -H "x-emc-namespace:pod#ns1" -k
Replace # with your pod number. To read a specific object, append its name to the
bucket path; use Test.txt (or some other small text file) that can be viewed with the
Linux "cat" command. Note that the file should have already been uploaded into the
bucket by pod#ouser1.
As you can see below, you will receive the Access Denied error. This is expected,
because the bucket ACL does not permit anonymous user access.
31 Next in the ECS Portal, create a Group ACL which gives read permission to the public
group. This will allow both authenticated and anonymous users to perform read access
on the bucket.
You can see that the Group ACL does not have any rules. Click Add.
Upon successful creation, the Group ACL of the bucket will appear as below:
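With the public Group ACL in place, you can re-run the anonymous read from the earlier step; it should now succeed instead of returning Access Denied. A sketch, assuming pod 1 values and the Test.txt object (the command is composed as a string here so you can review it before pasting it into your PuTTY session):

```shell
# Hypothetical retest of anonymous (unauthenticated) access after the public
# Group ACL grants Read. Pod 1 values are assumptions; adjust for your pod.
CMD='curl https://10.126.67.23:9021/pod1bucket1/Test.txt -H "x-emc-namespace:pod1ns1" -k'
# Review the command, then run it on your vECS-1 node.
echo "$CMD"
```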
• Use s3curl to create objects with retention policies and retention period.
• Experiment with bucket and object retention and determine which take precedence.
Step Action
Password: P@ssw0rd
On the Bucket Management page, select your namespace pod#ns1 from the drop-
down list.
For the pod#bucket1 bucket, open the corresponding Actions drop-down list and select Edit
Bucket.
3 On the Edit Bucket page, you can see the Bucket Retention section.
The bucket retention period is set at the bucket or object level. It prevents objects
from being modified or deleted until the retention period, measured from the original
object creation time, has elapsed.
The bucket retention period can be set in units ranging from seconds to years.
There is also an Infinite option which, when checked, prevents any modification of the
object indefinitely.
Click Save.
6 The delete operation will fail because the time since the object/file was created has
not yet exceeded the 1-month retention period that you set on the bucket. You can see
the error message by clicking the Failed task in the Tasks pane at the bottom of S3
Browser.
As you can see below, the status message states that the object cannot be deleted
because it is subject to retention:
7 Modify the retention period of the bucket to a smaller duration (less than the
current age of your test object, based on its creation time). Try again to delete the
object in the bucket. You can see that the Delete operation succeeds without any
problem.
8 Next, let us experiment with retention at the object level using a retention policy.
Retention policies can be configured for the Namespace. You can create multiple
retention policies in a Namespace and assign them to appropriate objects using s3curl
commands.
Navigate to Manage > Namespace and click Edit on your pod#ns1 Namespace.
Name: pod#rpolicy1
Value: 10 minutes
Click Add.
Name: pod#rpolicy2
Value: 20 minutes
Click Save.
You will use these two retention policies, pod#rpolicy1 and pod#rpolicy2, on two
different objects in pod#bucket1 and test how retention works.
11 The ECS Portal does not offer the ability to set a retention policy on objects. You
will need to use the s3curl utility to set this option.
s3curl is the Amazon S3 authentication tool for curl. Because ECS uses custom headers
prefixed with the x-emc string, the s3curl script must be modified to include x-emc in
its header attribute.
You can find the pre-modified s3curl.pl file at C:\lab\s3curl path in your management
station. You can find more details on modifications to be made on s3curl.pl file at
http://www.emc.com/techpubs/ecs/ecs_create_bucket-1.htm#GUID-2E37CDB4-12FB-
4BA7-9379-7D45044331E2
Copy these files to any one of your ECS nodes using the commands below.
On your Windows Management Host open a command prompt.
cd c:\lab\s3curl
Note: The dot_s3curl.txt file should be renamed to .s3curl on the ECS node and must
reside in the home directory of the root user.
12 Log in to the ECS node as root using PuTTY, located in C:\lab\putty path in your
management station.
Edit the .s3curl file that you have copied to the root directory.
vi .s3curl
%awsSecretAccessKeys = (
my_profile => {
id => 'pod#ouser1',
key => '<S3 Secret Access Key copied from ECS Portal>'
},
root_profile => {
id => 'root',
key => 'P@ssw0rd'
},
);
push @endpoints, ('192.168.73.54', 'logangreen.emc.edu',
);
You need to update the my_profile with your object user’s credentials. Update the
endpoints with the IP address of the ECS node that you are currently logged into and
its hostname. (Run the “hostname” command to get the FQDN of your ECS node).
14 In the PuTTY session, run the command below to test whether s3curl is functional.
./s3curl.pl
If everything is properly configured, it should display the s3curl help.
15 Now let us try to upload a file to the pod#bucket1 bucket as an object, and set
retention policy on that object.
You will need new files in your ECS node to test the retention policy feature.
Copy a few small files from the C:\lab\files location on your management station to the
ECS node using pscp. Back on your Windows host, in the Command Prompt:
cd c:\lab\files
16 Now, in the PuTTY session on the ECS node, run the s3curl command as shown below:
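The exact command is not reproduced in this guide, so here is a sketch of what it would look like, assuming the my_profile entry from your .s3curl file, a local file named policyfile.txt, and pod 1 names (the command is composed as a string so you can review it before running it on the ECS node):

```shell
# Hypothetical s3curl upload that stamps the object with a retention policy
# via the x-emc-retention-policy header. Pod 1 names, the endpoint, and the
# local file path are assumptions; adjust for your pod.
POLICY="pod1rpolicy1"
CMD="perl ./s3curl.pl --id=my_profile --put=/root/policyfile.txt -- -k \
-H x-emc-retention-policy:${POLICY} \
https://192.168.73.54:9021/pod1bucket1/policyfile.txt"
echo "$CMD"
```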
Then, select the Http Headers tab in the bottom pane as in the example below.
You can see that there is a new header x-emc-retention-policy set with the retention
policy as value. You will not find this header for other files that you uploaded directly
from S3 Browser.
18 Click on other files uploaded through S3 Browser and check their headers.
Using a retention policy with objects instead of hardcoding a retention period value
provides more manageability. Any change to the retention policy automatically applies
to every object configured with that particular retention policy.
19 Similar to the above, you can upload other objects and set a different retention policy
on them. Upload another sample file with the pod#rpolicy2 retention policy using
s3curl, and check its http header.
20 Now, try to delete the file before the retention policy expires.
Remember that the pod#bucket1 bucket already has a retention period on it. To avoid
conflicts, you may want to disable the bucket-level retention period (set it to 0
seconds) before you try the retention policy use case.
Similar to the retention period set on a bucket, the retention policy will not let you
delete the object until the object's age exceeds the time period specified by the
retention policy.
21 You can also set a specific retention time period on objects using s3curl commands.
Return to the ECS node session in putty.
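A sketch of such a command, assuming the same my_profile profile and pod 1 names, setting a 600-second (10-minute) retention period on retentionperiod.txt (composed as a string so you can review it before running it on the ECS node):

```shell
# Hypothetical s3curl upload with a per-object retention period (in seconds)
# set through the x-emc-retention-period header. 600 seconds = 10 minutes.
# Pod 1 names, the endpoint, and paths are assumptions; adjust for your pod.
CMD="perl ./s3curl.pl --id=my_profile --put=/root/retentionperiod.txt -- -k \
-H x-emc-retention-period:600 \
https://192.168.73.54:9021/pod1bucket1/retentionperiod.txt"
echo "$CMD"
```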
Note: The unit of retention period in the above command is in seconds. In the
command, you are setting object retention of 10 minutes on the retentionperiod.txt
file.
You can see above that the command has executed successfully.
Do you see the new file you uploaded in the previous step? Click the file to select it.
23 Repeat the delete file operation with its retention period set.
24 At this point, you understand what retention periods and policies are, and how they
work at the object and bucket levels.
Next, experiment with which takes precedence: the retention set at the bucket level or
at the object level. You can do this by using the following scenario: set the bucket
retention period to be greater than the object's retention period. Now, try to delete
the object after 5 minutes. What happens? Are you able to delete the object?
Next, you can try the reverse: set the retention period on the bucket to be less than
the retention period of the object. Then try deleting the object and observe the
behavior.
Step Action
2 Navigate to Manage > Buckets. On the Bucket Management page, select your
namespace pod#ns1 from the Namespace drop-down list.
3 For the pod#casbucket bucket created previously in Lab 3 Part 5, open the
corresponding Actions drop-down list and select Edit Bucket.
4 On the Edit Bucket page, scroll down to the Bucket Retention section and click Show
Options.
Upon clicking the button, the options for advanced retention settings are displayed.
Below is the detailed description of the options displayed.
Enforce Retention Information in Object: If this control is enabled, no CAS object can
be created without retention information (period or policy). An attempt to save such an
object will return an error.
Bucket Retention Period: The bucket retention period is set at the bucket or object
level. It prevents objects from being modified or deleted until the retention period,
measured from the original object creation time, has elapsed. If both a bucket-level and
an object-level retention period are set, the longer period will be enforced on the
bucket. In a Compliance-enabled environment, Bucket Retention Period is mandatory unless
retention information in the object is enforced.
5 The retention period can be set in units ranging from seconds to years. There is also
an Infinite option which, when checked, prevents any modification of the object indefinitely.
Click Save.
• Create two buckets in the namespace, with one of the buckets enabled with hard quota
Step Action
Select Local User option (a Namespace Administrator can be a local ECS user or a user in
Active Directory).
Click Save.
Note: As mentioned on the New Management User page, a management user without
System Administrator rights will be able to log in to the ECS Portal only if the user is
mapped as a Namespace Administrator for a namespace.
4 After successful creation of the management user, you can see the user listed on the
Management User page.
5 The next step is to create a new namespace, mapping the management user created in
the previous step as the Namespace Admin. You will also enable the hard quota setting
on this namespace.
Notification Only at - Known as a soft quota, this option triggers a notification
when the capacity used reaches the specified limit.
Block Access at - Known as a hard quota, this option blocks any further upload
operations when the quota limit is reached. It also sends a notification when a
specified percentage of the quota is reached.
Note: 1 GB is the minimum value that can be set for the quota.
Click Save.
6 Now that you have a namespace created, the next step is to log in to the ECS Portal
as the Namespace Administrator and create buckets in the namespace.
Log out of the portal and log in as the Namespace Administrator using the credentials below:
7 As a Namespace Administrator, you will now create an object user. This object user will
be used to perform read and write operations on the buckets created in the pod#ns2
namespace.
8 Now, create a bucket in the namespace with the pod#ns2ouser1 created in the previous
step as the owner. You will also enable quota on this bucket.
On the Bucket Management page, select your namespace pod#ns2 from the drop-down
list.
Click New Bucket and create a bucket with the following details:
Name: pod#bucket1
Replication Group: <Your replication group>
Namespace: pod#ns2
Bucket Owner: pod#ns2ouser1 (object user you created earlier in this lab)
Quota: Enabled with 'block access at' set to 1 GB
Note: Similar to the namespace quota, a hard quota is set on this bucket to prevent
upload operations when the bucket’s quota limit is reached.
Click Save.
9 Now we will create another bucket in the same namespace pod#ns2, but this bucket will
not be quota enabled.
Name: pod#bucket2
Replication Group: <Your replication group>
Namespace: pod#ns2
Bucket Owner: pod#ns2ouser1 (object user you created earlier in this lab)
Quota: Disabled
10 Upon successful creation of the buckets, the Bucket Management page will appear as in
the example below. You can see that pod2bucket1 has 1 GB of hard quota enabled and
pod2bucket2 does not have any quota set.
Start S3 Browser. Navigate to Accounts > Manage Accounts and delete pod#ouser3.
Create new account for pod#ns2ouser1. Fill in the fields with the following details:
Account Name pod#ns2ouser1 (Your object user name. # is your pod number)
Storage Type S3 Compatible Storage
REST Endpoint 192.168.73.54:9021
The endpoint can be the IP address of any one of the nodes you have configured in the storage pool of your VDC. ECS has a specific port number designated for each client interface; the ECS S3 interface uses port 9020 for HTTP and port 9021 for HTTPS connections.
Access Key ID pod#ns2ouser1 (# is your pod number)
Secret Access Key <S3 secret access key>
The S3 secret access key of the object user that you copied from the User Management screen in the ECS Portal.
Note: # is your Pod number. See below for an example of how to fill in each field.
Note that you can switch between user accounts at any time: select the Accounts tab
and then select the required account name.
12 In the S3 Browser Bucket Explorer pane on the left, you can see the buckets
pod#bucket1 and pod#bucket2 are listed by default. This is because the pod#ns2ouser1
is the owner of both the buckets.
Now upload three files into the pod#bucket1 from C:\lab\files path in your management
station.
Choose files of around 350 MB each for the upload operation.
13 You can see below that pod1bucket1 has approximately 1 GB of files in it.
14 Similarly, upload two files to pod#bucket2 with total size not more than 1 GB.
You can also check the number of files in a bucket and the total object size from the
Properties tab at the bottom of S3 Browser.
Select the bucket name and then select the Properties tab to view the corresponding
information.
15 To test the quota option, it is very important to check the ECS Metering and ensure that
the number of objects in the buckets (pod#bucket1 and pod#bucket2) listed on the
Metering page match the actual number of files/objects in the bucket.
Select the namespace from the list using the Add icon “+”, and select the bucket from
the list using the Add icon “+”.
Click Apply.
Wait approximately 30 to 40 minutes for the Object Count to reflect the actual
number of files in the bucket.
16 As you can see below, the Object Count should display the actual number of objects
uploaded in the buckets.
Upload a small file (a few KB in size) to pod#bucket1 from the C:\lab\files location on
your management station.
Note that pod#bucket1 already holds around 1 GB of files, so when you try to upload an
additional file, the upload operation will fail based on the Block Access at setting
that you defined.
You can see that the status shows “Failed – Forbidden: Check if quota has been
exceeded” error.
You did not enable quota on this bucket, so why did the upload operation fail?
Highlighted below are the quota exceeded notifications for the namespace as well as the
bucket.
Purpose: Using readily available data clients, test basic I/O access by
performing "CRUD" operations on ECS data repositories (commonly
referred to as "buckets")
Step Action
1 To demonstrate the multi-tenancy feature of ECS, the following structure is created in
Active Directory.
Two user groups named Finance and Sales are created in AD. These groups will be
considered as individual tenants and they will have their own namespace created in
ECS.
Note that this structure is used for simple proof-of-concept purposes only. We have a
single Active Directory server, which is a realistic representation of an enterprise
ECS customer: multiple business units within the enterprise represent ECS tenants, and
all business units share a single Active Directory setup.
2 In this experiment, each user group within Active Directory (i.e., each tenant) will
have two types of users: an Admin user and Object users. All users will have the same
AD privileges and will be members of two AD groups: the Domain Users group and the
group named for their tenant.
Shown below are the properties of fadmin and fuser for the Finance tenant. Similarly,
Sales group will have sadmin and suser users who are members of Domain users and
Sales group.
From the ECS perspective, the Admin users (fadmin and sadmin) will be considered as
management users - specifically, namespace admins. They will have access to ECS
Portal with limited capabilities - each can manage their own namespace, e.g. add or
remove users in their namespace.
fuser1, fuser2, suser1 and suser2 are ECS Object users who will have access only to the
ECS object store, to perform CRUD operations.
In Active Directory, all users have been configured with ChangeMe1 as their password.
The above Active Directory structure is pre-created, and made available for you in this
lab. You will use these Active Directory details to add your authentication provider from
the ECS Portal.
Step Action
The Group whitelist above lists the Active Directory groups that will be allowed to
access the ECS storage.
Click Save.
You can use the Edit option from the Actions drop-down list if you need to modify the
authentication provider.
6 You will use this authentication provider in the next lab to create namespaces with
domain configuration.
Step Action
2 Next, we need to create namespaces for the tenants (Finance and Sales) with the
domain details.
Name: pod#financens
User Admin: [email protected]
Domain Group Admin: [email protected]
Replication Group: pod#site2rg2
Domain: corp.emc.edu
Groups: Finance (This namespace will be assigned for Finance tenant users)
Attribute: objectCategory
Values: CN=Person,CN=Schema,CN=Configuration,DC=corp,DC=emc,DC=edu
Click Save.
6 Now, try to login to ECS Portal as the Namespace Administrator using these credentials:
How are these credentials being checked? Is it done by ECS, or by some other
component in your environment?
Notice that the Namespace Management page has only one namespace listed, which is
owned by [email protected].
When you log in as this Namespace Admin, you can only view the namespace that this
Admin owns.
8 Navigate to other ECS management views like Storage pools, VDC etc. Are you able to
view the details?
You cannot see those details because the Namespace Administrator’s access is limited
to bucket and object user management of a namespace. The user will not be authorized
to view other ECS system administrative attributes.
9 Now navigate to the User Management page and add a new domain object user using
the following details:
Name: [email protected]
Namespace: pod#financens (# is your pod number)
Now, log off of the portal and log in as [email protected] using AD password. You
can see that the authentication succeeds against LDAP but the user will not be able to
view or perform any operation in the ECS Portal because the user is not authorized.
10 Log in to the ECS Portal as root user with P@ssw0rd as the password.
Navigate to the Namespace Management page and create another namespace for the
Sales tenant using the below details.
Name: pod#salesns
User Admin: [email protected]
Domain Group Admin: [email protected]
Replication Group: pod#site2rg2
Domain: corp.emc.edu
Groups: Sales
Attribute: objectCategory
Values: CN=Person,CN=Schema,CN=Configuration,DC=corp,DC=emc,DC=edu
11 Now, log off from the portal and log in as the Sales namespace administrative user
using these credentials:
12 Navigate through different pages and observe what this user is able to view and the
actions the user is able to perform.
Were you able to see other namespaces and their object users?
You will also explore the self-service REST API feature available for domain users to create an object user
account for themselves, and claim their S3 secret access key.
Step Action
1 We will first explore the self-service ECS REST API to authenticate as a domain user
and then create an S3 secret access key.
Log in to one of your ECS nodes using putty using the following credentials:
Login: root
Password: P@ssw0rd
2 First, you need to authenticate as a domain user and get a cookie file for subsequent
REST calls.
Run the command below from the root path to authenticate as the domain user fuser1.
If you created an object user for fuser1 in the previous lab, delete it before you try
the command below.
curl -L --location-trusted -k https://192.168.73.54:4443/login?using-cookies=true -u "[email protected]:ChangeMe1" -c cookiefile -v
Example
3 Run the ls -li command to verify that the cookie file was generated. You should see a
file named cookiefile.
4 Then, issue the REST API call to retrieve the S3 secret access key for the user. Note that
the cookiefile is being passed as one of the arguments for authentication.
curl -k https://192.168.73.54:4443/object/secret-keys -b cookiefile -v -H "Content-Type: application/json" -X POST -d "{}"
5 Successful completion of the REST call generates the S3 secret key for the
[email protected] domain user.
The above REST call not only creates the S3 secret key but also creates an object user in
ECS.
Return to the ECS Portal and verify whether a new object user has been created on the
User Management page.
6 Now that you have secret access key and object user created for the domain user
[email protected], follow the steps below to perform read/write operations in the
S3 Browser.
The trial version of S3 Browser only allows up to two accounts, so you will need to
delete one: in S3 Browser, go to Accounts > Manage Accounts and delete pod#ns2ouser1.
Create a new account for [email protected] using the secret access key from the
ECS Portal.
9 (OPTIONAL STEP): this step is optional and may be performed or simply reviewed.
Use the self-service REST API call to create an object user, and generate S3 secret key
for suser1 who belongs to the Sales tenant group.
Then, create a bucket for this user in S3 Browser. You can then test the multi-tenancy
data isolation by trying to read the buckets created by Finance tenant users. Follow the
instructions in Lab 3: Part 1 “Test ACLs with local object users in ECS” to create ACLs and
add external bucket.
Purpose: Test the metering and monitoring capabilities that are provided in
the ECS web portal.
View the available monitoring data from the portal for single-
site and multi-site environments
References: EMC Elastic Cloud Storage (ECS) Version 2.0 ECS Documentation 302-
001-980 01
Step Action
On the ECS Portal, expand the Monitor menu and select Metering.
In the Date Time Range drop-down list, select Custom. In the From field, enter
yesterday’s date. Similarly, in the To field, enter today’s date.
From the Select Namespace list box, highlight pod#ns1. From the Select Buckets list
box, select pod#bucket1 by using the Add icon “+”. Click Apply. This will show object
metrics and traffic that have occurred within the past day in pod#bucket1.
2 From the Monitor menu, select Events and observe the recent events that have
occurred during the course of your lab exercises.
3 From the Monitor menu, select Capacity Utilization to view the storage pool capacity.
Click the History button to view the capacity history. You can hover your mouse over
points in the graph to view metrics at a specific time. Metrics are updated every hour.
4 Navigate to Monitor > Traffic Metrics to view the traffic metrics for the VDC. Click History
for a graphical representation. Clicking the VDC will show metrics on a per node basis.
5 Select your pod number. This will bring up further traffic metrics data for each ECS node
in your cluster. Click History for a graphical representation. This will display resource
usage history.
6 Click Hardware Health and then choose the storage pool pod#site1sp1 (where # is your
POD #). This will show node and disk health. You can click your storage pool to view
further details per node.
7 Click Node & Process Health. Click the VDC named pod#site1vdc1 (where # is your pod
#). Here you can monitor the current resource usage for that VDC on a per-node basis.
Clicking the History button displays a graphical representation of resource usage history.
8 Click Chunk Summary. Click the drop-down arrow to view further details.
10 Click Recovery Status. The progress of recovery of a storage pool can be tracked here.
11 Click Disk Bandwidth to view disk performance for the VDCs listed.
What is the peak read speed of your ECS for pod#site1vdc1 (where # is your pod #)?
_____________________________
12 Click Geo Replication. There are several buttons available to view further details on the
geo-configuration. Click through these buttons to view those attributes. If your ECS is
not configured for Geo Replication the fields will be blank.
13 (OPTIONAL STEP): this step is optional and may be performed or simply reviewed.
It is possible to retrieve monitoring data using the REST API. You will need to run some
curl commands. You can run curl by opening an SSH session (with credentials
root/P@ssw0rd) to any of your ECS nodes, using PuTTY in your virtual desktop.
The following commands use the REST API to pull monitoring data from ECS:
NOTE: python -m json.tool pretty-prints the JSON output, and xmllint --format -
pretty-prints the XML output.
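The commands themselves are not listed here, so below is a sketch of what they might look like. The capacity and dashboard URIs are assumptions based on the ECS Management REST API, and the commands are composed as strings so you can review them before pasting them into your SSH session:

```shell
# Hypothetical monitoring calls against the ECS Management API (port 4443).
# Log in first to obtain a cookie, then pull capacity data (JSON) and the
# local-zone dashboard (XML). Endpoints and URIs are lab assumptions.
NODE="192.168.73.54"
LOGIN="curl -L --location-trusted -k https://${NODE}:4443/login -u root:P@ssw0rd -c cookiefile"
CAPACITY="curl -s -k https://${NODE}:4443/object/capacity.json -b cookiefile | python -m json.tool"
DASHBOARD="curl -s -k https://${NODE}:4443/dashboard/zones/localzone -b cookiefile | xmllint --format -"
# Review, then run in order: login, capacity, dashboard.
echo "$LOGIN"; echo "$CAPACITY"; echo "$DASHBOARD"
```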