Advanced Concepts For Clustered Data ONTAP 8.3.1 V1.1 - Lab Guide
ONTAP 8.3.1
December 2015 | SL10238 Version 1.1
TABLE OF CONTENTS
1 Introduction...................................................................................................................................... 4
2 Lab Environment............................................................................................................................. 5
3 Lab Activities................................................................................................................................... 7
3.1 Lab Preparation......................................................................................................................... 7
3.1.1 Accessing the Command Line..............................................................................................................................7
3.1.2 Accessing System Manager................................................................................................................................. 9
3.6 SnapMirror................................................................................................................................36
3.6.1 Exercise.............................................................................................................................................................. 36
1 Introduction
This Lab Guide provides the steps to complete the Insight 2015 Hands-on Lab for Advanced Concepts for
clustered Data ONTAP 8.3.1.
Lab Objectives
This lab provides an introduction into a number of more advanced features found in clustered Data ONTAP,
including the Command Line Interface (CLI), load-sharing mirrors, IPspaces, Quality of Service (QoS), cluster
peering, Disaster Recovery for Storage Virtual Machines (SVM-DR), administrative users and roles, and Active
Directory Authentication Tunneling.
Prerequisites
This lab builds on the concepts covered in the Basic Concepts for Clustered Data ONTAP 8.3 lab, and requires
knowledge of the topics covered in that lab. You should already understand the concepts and know how to
use OnCommand System Manager, how to configure a Storage Virtual Machine (SVM), and how to create
aggregates, volumes, and LIFs. You should also have a basic knowledge of Windows administration. Knowledge
of UNIX is not required, but a Linux virtual machine (VM) is provided.
Your starting point for this lab is a cluster named cluster1, with two nodes named cluster1-01 and cluster1-02.
There are two SVMs, svm1 and svm2, each hosting a variety of volumes.
Before you start the lab, launch System Manager and get familiar with the cluster configuration, including location,
naming, and status of the aggregates, volumes, LIFs, and SVMs.
The terms Storage Virtual Machine (SVM) and Vserver are used interchangeably in this lab. SVM is used to
describe virtualized storage systems as a concept. Vserver is the term used to refer to SVMs in the clustered
Data ONTAP command line and in the System Manager user interface. SVMs configured in this lab follow the
naming convention svmN, where N is a number, and svm is shorthand for Storage Virtual Machine.
2 Lab Environment
The following figure illustrates the network configuration.
Figure 2-1:
Table 1 shows the host information used in this lab.
Table 1: Host Information
Host Name     Operating System    Role/Function           IP Address
cluster1                          cluster                 192.168.0.101
cluster1-01                       cluster 1, node 1       192.168.0.111
cluster1-02                       cluster 1, node 2       192.168.0.112
cluster2                          cluster                 192.168.0.102
cluster2-01                       cluster 2, node 1       192.168.0.121
JUMPHOST      Windows 2008 R2                             192.168.0.5
rhel1         Linux               server                  192.168.0.61
DC1           Windows 2008 R2     Active Directory/DNS    192.168.0.253
Table 2 lists the user IDs and passwords used in this lab.
Table 2: User IDs and Passwords
Host Name     User ID              Password    Comments
JUMPHOST      DEMO\Administrator   Netapp1!
cluster1      admin                Netapp1!
cluster2      admin                Netapp1!
rhel1         root                 Netapp1!
DC1           DEMO\Administrator   Netapp1!
3 Lab Activities
In this lab, you will perform the following tasks:
Explore the CLI in more detail, and set it to work in an SVM context.
Navigate the node-scoped CLI.
Check the cluster and SVM administrative roles, users, and groups.
Configure load-sharing mirrors to protect the namespace.
Learn about IPspaces, Broadcast Domains, and Subnets.
Use QoS to manage tenants and workloads.
Create intercluster LIFs and create a cluster peering relationship for SnapMirror.
Create a Disaster Recovery for Storage Virtual Machines (SVM DR) relationship from one cluster to
another, perform a cutover operation, and then revert back to the primary.
Configure authentication tunneling for cluster administrators (refer to the appendix).
Add a volume that has a different language setting from the SVM that contains the volume (refer to the
appendix).
Learn about new automated nondisruptive upgrade features in Data ONTAP 8.3.
This is a self-guided lab. You can complete or skip any exercise.
The expected time for you to complete the entire lab is approximately 1 hour and 30 minutes.
Note: Before you begin the lab activities, you should understand how to log into and out of the clustered
Data ONTAP system by using the CLI and System Manager.
Figure 3-1:
If you already have another PuTTY session open then this step will only bring that session into focus
on the display. If your intention is to open another PuTTY session, then right-click on the PuTTY toolbar
icon and select PuTTY from the context menu.
Once PuTTY launches, you can connect to one of the hosts in the lab by following the next steps. This
example shows a user connecting to the Data ONTAP cluster named cluster1.
2. By default PuTTY should launch into the Basic options for your PuTTY session display as shown in the
screenshot. If you accidentally navigate away from this view just click on the Session category item to
return to this view.
3. Use the scrollbar in the Saved Sessions box to navigate down to the desired host and double-click it
to open the connection. A terminal window will open and you will be prompted to log into the host. You
can find the correct username and password for the host in the Lab Host Credentials table in the Lab
Environment section at the beginning of this guide.
Figure 3-2:
If you are new to the clustered Data ONTAP CLI, the length of the commands can seem a little
intimidating. However, the commands are actually quite easy to use if you remember the following three
tips:
Make liberal use of the Tab key while entering commands, as the clustered Data ONTAP
command shell supports tab completion. If you hit the Tab key while entering a portion of a
command word, the command shell will examine the context and try to complete the rest of
the word for you. If there is insufficient context to make a single match, it will display a list of all
the potential matches. Tab completion also usually works with command argument values, but
there are some cases where there is simply not enough context for it to know what you want,
in which case you will just need to type in the argument value.
You can recall your previously entered commands by repeatedly pressing the up-arrow key,
and you can then navigate up and down the list using the up and down arrow keys. When you
find a command you want to modify, you can use the left arrow, right arrow, and Delete keys
to navigate around in a selected command to edit it.
Entering a question mark character ? causes the CLI to print contextual help information.
You can use this character by itself, or while entering a command.
The Cluster CLI section of this lab guide covers the operation of the clustered Data ONTAP CLI in much
greater detail.
Caution: The commands shown in this guide are often so long that they span multiple lines.
When you see this, in every case you should include a space character between the text from
adjoining lines.
If you intend to copy and paste commands from this guide into the lab, be aware that for multi-line
commands you can only copy one line at a time. If you try to copy multiple lines at once, the
commands will fail in the lab.
Figure 3-3:
The OnCommand System Manager Login window opens.
2. Note the tabs at the top of the browser window. This lab contains multiple clusters, and each tab opens
System Manager for a different cluster.
3. Enter the User Name admin, and the Password Netapp1!.
4. Click the Sign In button.
Figure 3-4:
System Manager is now logged in to cluster1, and displays a summary page for the cluster. If you are
unfamiliar with System Manager, here is a quick introduction to its layout. Please take a few moments to
expand and browse these tabs to familiarize yourself with their contents.
5. Use the tabs on the left side of the window to manage various aspects of the cluster. The Cluster tab
accesses configuration settings that apply to the cluster as a whole.
6. The Storage Virtual Machines tab allows you to manage individual Storage Virtual Machines (SVMs,
also known as Vservers).
7. The Nodes tab contains configuration settings that are specific to individual controller nodes.
Figure 3-5:
Tip: As you use System Manager in this lab, you may encounter situations where buttons
at the bottom of a System Manager pane are beyond the viewing size of the window, and no
scroll bar exists to allow you to scroll down to see them. If this happens, you have two options:
either increase the size of the browser window (you might need to increase the resolution of
your jumphost desktop to accommodate the larger browser window), or in the System Manager
window, use the tab key to cycle through all the various fields and buttons, which eventually
forces the window to scroll down to the non-visible items.
cluster1::> ?
  up                Go up one directory
  cluster>          Manage clusters
  dashboard>        (DEPRECATED)-Display dashboards
  event>            Manage system events
  exit              Quit the CLI session
  export-policy     Manage export policies and rules
  history           Show the history of commands for this CLI session
  job>              Manage jobs and job schedules
  lun>              Manage LUNs
  man               Display the on-line manual pages
  metrocluster>     Manage MetroCluster
  network>          Manage physical and virtual network connections
  qos>              QoS settings
  redo              Execute a previous command
  rows              Show/Set the rows for this CLI session
  run               Run interactive or non-interactive commands in the nodeshell
  security>         The security directory
  set               Display/Set CLI session settings
  snapmirror>       Manage SnapMirror
  statistics>       Display operational statistics
  storage>          Manage physical storage, including disks, aggregates, and failover
  system>           The system directory
  top               Go to the top-level directory
  volume>           Manage virtual storage, including volumes, snapshots, and mirrors
  vserver>          Manage Vservers
Type any base command to move into that branch of the command hierarchy. For example, the volume branch
contains all commands related to volumes. The prompt changes to show you the part of the command tree you
are working in.
Type ? again. This time the hierarchy shows you the specific subcommands available for that part of the
command tree.
cluster1::> volume
cluster1::volume> ?
  aggregate>
  autosize
  clone>
  create
  delete
  efficiency>
  file>
  modify
  mount
  move>
  offline
  online
  qtree>
  quota>
  rename
  restrict
  show
  show-footprint
  show-space
  size
  snapshot>
  unmount            Unmount a volume
cluster1::volume>
To show the syntax for a particular command, enter the command and follow it with ?.
cluster1::volume> size ?
   -vserver <vserver name>     Vserver Name
  [-volume] <volume name>      Volume Name
  [[-new-size] <text>]         [+|-]<New Size>
cluster1::volume>
Tab completion works by completing what you are typing, and prompting you for what is recommended next while
you are still typing part of a command directory or command. It can even provide options for the values required
to complete the command.
Try tab completion by backspacing to clear the size command, typing the modify command, and pressing the Tab
key. The next option is automatically filled in. Press Tab again to get a list of options, and then type 1 to complete
the text svm1. Press Tab again to get the -volume option, and type in the volume name svm1_vol02. Continue
using tab completion until you get to -security-style unix. Before you press Enter, backspace to delete the word
unix, and type ?.
The output should look like this example:
cluster1::volume> modify -vserver svm1 -volume svm1_vol02 -size 1GB -state online
-policy default -user 0 -group 0 -security-style ?
mixed
ntfs
unix
Backspace to delete the modify command, and type .. to move up one level in the command hierarchy, or type
top to return to the root of the command tree.
cluster1::volume> top
cluster1::>
Type history to show the commands that you executed in the current session, or use the up arrow to repeat
recently executed commands. Use the right and left arrows, and the backspace key to edit and rerun the
commands. Alternatively, you can use the ! (number) syntax to run a previous command in the list.
cluster1::> history
1 rows 0
2 volume
3 top
cluster1::>
You may notice the rows 0 command in the history list output shown in this guide (and not shown in your lab).
rows 0 disables output paging on the command console. After you run rows 0, the console stops prompting you to
Press <space> to page down, <return> for next line, or q to quit. We suggest you leave the existing pagination
setting in place while you proceed through this lab.
Certain commands require different privilege levels. By default, you are logged in with admin privilege. To enter
advanced or diag privilege, run the set -privilege <level> command, or use set <level> as the shorter
version of the command. An * is appended to the prompt to show that you are not in the default privilege level.
Note: There is no access to advanced or diag privilege commands in System Manager.
The best practice is to initiate non-admin privilege only as needed, then return to admin privilege with the
commands set -priv admin or set admin.
cluster1::> set advanced
Warning: These advanced commands are potentially dangerous; use them only when
directed to do so by NetApp personnel.
Do you want to continue? {y|n}: y
cluster1::*>
cluster1::*> set admin
cluster1::>
You can type abbreviations to run a command. For example, vol show is recognized as volume show. Be aware
that command abbreviations are limited. For instance, there are also volume show-footprint or volume show-space
commands, so the abbreviation vol sho is not unique to a single command, and therefore not recognized.
You can use pattern matching with wildcards when running commands. For example:
cluster1::> vol show svm2*
Vserver   Volume       Aggregate         State   Type  Size  Available Used%
--------- ------------ ----------------- ------- ----  ----- --------- -----
svm2      svm2_root    aggr1_cluster1_02 online  RW    20MB  18.88MB   5%
svm2      svm2_vol01   aggr1_cluster1_01 online  RW    1GB   1023MB    0%
2 entries were displayed.
cluster1::>
When running commands, you see only certain fields by default. To display all fields, add the -instance parameter to the command.
cluster1::> network interface show -lif cluster_mgmt -instance

                    Vserver Name: cluster1
          Logical Interface Name: cluster_mgmt
                            Role: cluster-mgmt
                   Data Protocol: none
                       Home Node: cluster1-01
                       Home Port: e0c
                    Current Node: cluster1-01
                    Current Port: e0c
              Operational Status: up
                 Extended Status: -
                         Is Home: true
                 Network Address: 192.168.0.101
                         Netmask: 255.255.255.0
             Bits in the Netmask: 24
                 IPv4 Link Local: -
                     Subnet Name: -
           Administrative Status: up
                 Failover Policy: broadcast-domain-wide
                 Firewall Policy: mgmt
                     Auto Revert: false
   Fully Qualified DNS Zone Name: none
         DNS Query Listen Enable: false
             Failover Group Name: Default
                        FCP WWPN: -
                  Address family: ipv4
                         Comment: -
                  IPspace of LIF: Default

cluster1::>
You will often see a very large number of fields for a particular object. To show a few specific fields, limit the
number of displayed fields by using the -fields qualifier.
Remember, you can use ? to show all possible values. Try using wildcards to show only items with svm1 in the
name.
cluster1::> network interface show ?
  [ -by-ipspace | -failover | -instance | -fields <fieldname>, ... ]
  [ -vserver <vserver> ]                                         Vserver Name
  [[-lif] <lif-name>]                                            Logical Interface Name
  [ -role {cluster|data|node-mgmt|intercluster|cluster-mgmt} ]   Role
  [ -data-protocol {nfs|cifs|iscsi|fcp|fcache|none}, ... ]       Data Protocol
  [ -home-node <nodename> ]                                      Home Node
  [ -home-port {<netport>|<ifgrp>} ]                             Home Port
  [ -curr-node <nodename> ]                                      Current Node
  [ -curr-port {<netport>|<ifgrp>} ]                             Current Port
  [ -status-oper {up|down} ]                                     Operational Status
  [ -status-extended <text> ]                                    Extended Status
  [ -is-home {true|false} ]                                      Is Home
  [ -address <IP Address> ]                                      Network Address
  [ -netmask <IP Address> ]                                      Netmask
  [ -netmask-length <integer> ]                                  Bits in the Netmask
  [ -auto {true|false} ]                                         IPv4 Link Local
  [ -subnet-name <subnet name> ]                                 Subnet Name
  [ -status-admin {up|down} ]                                    Administrative Status
  [ -failover-policy {system-defined|local-only|sfo-partner-only|ipspace-wide|disabled|broadcast-domain-wide} ]
                                                                 Failover Policy
  [ -firewall-policy <policy> ]                                  Firewall Policy
  [ -auto-revert {true|false} ]                                  Auto Revert
  [ -dns-zone {zone-name|none} ]                                 Fully Qualified DNS Zone Name
  [ -listen-for-dns-query {true|false} ]                         DNS Query Listen Enable
  [ -failover-group <failover-group> ]                           Failover Group Name
  [ -wwpn <text> ]                                               FCP WWPN
  [ -address-family {ipv4|ipv6|ipv6z} ]                          Address family
  [ -comment <text> ]                                            Comment
  [ -ipspace <IPspace> ]                                         IPspace of LIF

cluster1::> network interface show svm1* -field home-node
vserver lif                home-node
------- ------------------ -----------
svm1    svm1_admin_lif1    cluster1-01
svm1    svm1_cifs_nfs_lif1 cluster1-01
2 entries were displayed.
cluster1::>
You can set other options to customize the behavior of the CLI. A useful option is the default timeout value
for CLI sessions. Check the setting on your system and, if it is not already 0, modify the timeout to 0. This
setting disables the timeout for your CLI session.
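The timeout is controlled with the system timeout commands; a minimal sketch, assuming the lab's default setting (the exact output in your lab may differ):

cluster1::> system timeout show
CLI session timeout: 30 minute(s)
cluster1::> system timeout modify -timeout 0
cluster1::>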
The set command, which you already used to specify the privilege level, has other options shown in the next
example. See what happens when you set different options. Remember to set the options back before you
continue.
cluster1::> set ?
  [[-privilege] {admin|advanced|diagnostic}]   Privilege Level
  [ -confirmations {on|off} ]                  Confirmation Messages
  [ -showallfields {true|false} ]              Show All Fields
  [ -showseparator <text (size 1..3)> ]        Show Separator
  [ -active-help {true|false} ]                Active Help
  [ -units {auto|raw|B|KB|MB|GB|TB|PB} ]       Data Units
  [ -rows <integer> ]                          Pagination Rows ('0' disables)
  [ -vserver <text> ]                          Default Vserver
  [ -node <text> ]                             Default Node
  [ -stop-on-error {true|false} ]              Stop On Error
cluster1::>
Run the volume show command. You should see the root volume for each node (vol0), as well as the SVM volumes.
To display only the volumes in the SVM named svm1, you can issue volume show with the -vserver svm1 qualifier, or you can set
a temporary context for just svm1. Try this command:
cluster1::> vserver context -vserver svm1
The prompt changes to the SVM that you selected (svm1), and you see only the volumes that belong to svm1. As
long as you are in the SVM context, you will not have to use the -vserver <SVM name> qualifier.
List the available commands. You will see a different (restricted) command list. For example, there is no storage
command. This is because the SVM shell is running with sufficient privileges to execute only the specific
commands that are relevant to an SVM. Once you type exit and return to the cluster prompt, you have full
command access over all entities in the cluster.
svm1::> ?
  up              Go up one directory
  dashboard>      (DEPRECATED)-Display dashboards
  exit            Quit the CLI session
  export-policy   Manage export policies and rules
  history         Show the history of commands for this CLI session
  job>            Manage jobs and job schedules
  lun>            Manage LUNs
  man             Display the on-line manual pages
  network>        Manage physical and virtual network connections
  redo            Execute a previous command
  rows            Show/Set the rows for this CLI session
  security>       The security directory
  set             Display/Set CLI session settings
  snapmirror>     Manage SnapMirror
  statistics>     Display operational statistics
  system>         The system directory
  top             Go to the top-level directory
  volume>         Manage virtual storage, including volumes, snapshots, and mirrors
  vserver>        Manage Vservers
svm1::> exit
cluster1::>
                                           Current       Current Is
                                           Node          Port    Home
                                           ------------- ------- ----
                                           cluster1-01   e0c     true
                                           cluster1-02   e0c     true
You can establish an SSH session to any of these node management LIFs. Use your admin/Netapp1!
credentials. The prompt is the same as the prompt for the cluster management CLI.
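For example, using a node management address from Table 1, you could connect from the Linux host (a sketch; PuTTY from the jumphost works equally well):

[root@rhel1 ~]# ssh admin@192.168.0.111

The 192.168.0.111 address is cluster1-01's address from Table 1; after you log in with the admin password, you land at the same cluster1::> prompt described above.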
In addition to its own LIF, each node also has its own root volume. The node root volume is bound to the cluster
node. It contains configuration files, logs, and other files associated with a node's normal operation. A node root
volume is part of the physical cluster infrastructure. It is not associated with an SVM, does not hold user data, and
does not contain junctions to other volumes.
The CLI you have access to in this lab is exactly the same as if you created an SSH session to the cluster
management LIF. The difference is that the node management LIF always resides on its own node because it
is an IP address used specifically for managing a particular node. The node management LIF does not fail over
to another node if the home node is shut down. For this reason, you should use the cluster management LIF to
manage the cluster, because this LIF can, and will, fail over to another node. As long as the cluster is active, you
always have a reachable cluster management LIF.
However, suppose that a node is no longer in the cluster. If the node is still up, you can create an SSH session to
its node management LIF to run node-specific diagnostics, because these will not be accessible from the cluster
management CLI.
Model       Owner    Location
----------- -------- ---------------
SIMBOX
SIMBOX
Note: See the Model column? Have you noticed any other indication that you are running a Data ONTAP
simulator rather than physical hardware?
To run a single specific command for one node, specify that node by using the node run -node <nodename>
<command> command.
cluster1::> node run -node cluster1-02 aggr status
           Aggr State           Status                Options
aggr1_cluster1_02 online        raid_dp, aggr         nosnap=on
                                64-bit
aggr0_cluster1_02 online        raid_dp, aggr         root, nosnap=on
                                64-bit
aggr2_cluster1_02 online        raid_dp, aggr         nosnap=on
                                64-bit
cluster1::>
When the command is executed, it displays the aggregates defined on that node and returns you to the cluster
prompt.
In this case, node scope syntax is used instead of clustered Data ONTAP syntax, and the output is also formatted
differently. The node-scoped CLI does not support tab completion.
The output is roughly equivalent to that of the clustered Data ONTAP command storage aggregate show.
To open an interactive nodeshell session, run node run -node cluster1-02 without specifying a command.
The prompt changes to the node of the shell you are in. To return to the cluster management CLI, enter exit, or
press Ctrl-D. For now, stay in the node shell.
List the available commands.
cluster1-02> ?
?               acpadmin        aggr            backup          cdpd
cf              clone           cna_flash       coredump        date
dcb             df              disk            disk_fw_update  download
echo            ems             environment     fcadmin         fcp
fcstat          file            flexcache       fsecurity       halt
help            hostname        ic              ifconfig        ifgrp
ifstat          ipspace         key_manager     keymgr          license
logger          man             maxfiles        mt              ndmpcopy
ndp             netstat         options         partner         passwd
ping            ping6           pktt            priority        priv
qtree           quota           rdfile          reallocate      restore_backup
revert_to       route           rshstat         sasadmin        sasstat
savecore        shelfchk        sis             smnadmin        snap
snapmirror      software        source          stats           storage
sysconfig       sysstat         timezone        traceroute      traceroute6
ups             uptime          version         vfiler          vlan
vmservices      vol             wafltop         wcc             wrfile
ypcat           ypgroup         ypmatch         ypwhich
cluster1-02>
The following list identifies situations in which you should use the node shell:
When you modify the size of the node root volume. Using the node shell is necessary because the
node root volume is considered a 7-Mode volume and can be modified only in the node scope.
When running the snapshot delta command. The cluster management CLI does not currently include
this command. The command is available in System Manager, through a ZAPI, or it can be run from
the node shell.
Note: In general, do not perform network configuration or storage provisioning from the node shell. You
should only use it for those functions that you cannot perform from the cluster management CLI, or from
System Manager.
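For reference, a minimal node-shell sketch of those two cases, run from the cluster1-02> prompt you opened earlier (vol0 is the node root volume; the +1g growth value is purely illustrative, so do not run the resize in this lab unless you intend to keep it):

cluster1-02> vol size vol0
cluster1-02> vol size vol0 +1g
cluster1-02> snap delta vol0

The first command reports the current size of the node root volume, the second grows it, and the third reports the rate of change between the volume's Snapshot copies.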
3.3.3 Exercise
In this exercise, you will create a load-sharing mirror of svm1's root volume on each node in cluster1. The
purpose of this exercise is to illustrate the requirement that load-sharing mirrors must be updated after a new
volume is junctioned into svm1's root volume, and before the volume becomes visible to clients.
1. Create the load-sharing mirrors by using the volume create command. Like all volume create commands,
this command requires vserver, volume, and aggregate parameters. The size parameter is specified
to match the size of svm1's root volume. The type parameter is set to DP, which is short for data
protection.
From the cluster1 CLI:
cluster1::> volume create -vserver svm1 -volume svm1_root_lsm1 -aggregate aggr1_cluster1_01
-size 20MB -type DP
[Job 560] Job is queued: Create svm1_root_lsm1.
[Job 560] Job succeeded: Successful
cluster1::> volume create -vserver svm1 -volume svm1_root_lsm2 -aggregate aggr1_cluster1_02
-size 20MB -type DP
[Job 561] Job is queued: Create svm1_root_lsm2.
[Job 561] Job succeeded: Successful
cluster1::> volume show -vserver svm1 -volume svm1_root_lsm*
Vserver   Volume         Aggregate         State   Type  Size  Available Used%
--------- -------------- ----------------- ------- ----  ----- --------- -----
svm1      svm1_root_lsm1 aggr1_cluster1_01 online  DP    20MB  19.89MB   0%
svm1      svm1_root_lsm2 aggr1_cluster1_02 online  DP    20MB  19.89MB   0%
2 entries were displayed.
cluster1::>
2. Run the snapmirror create command to create SnapMirror relationships between the new load-sharing
mirror volumes and svm1's root volume. In this command, specify the source and destination volumes
by using the //svm_name/volume_name syntax. The source of the relationship is svm1's root volume;
the destination is the load-sharing mirror volumes. The relationship type is LS, which is short for load
sharing.
Set the update schedule to weekly; this interval is long enough to prevent the relationship from updating while
you are completing this exercise. In a production environment, the update schedule is typically set to a
shorter time frame.
From the cluster1 CLI:
cluster1::> snapmirror create -source-path //svm1/svm1_root -destination-path
//svm1/svm1_root_lsm1 -type LS -schedule weekly
[Job 562] Job is queued: snapmirror create for the relationship with destination "cluster1://
svm1/svm1_root_lsm1".
[Job 562] Job succeeded: SnapMirror: done
cluster1::> snapmirror create -source-path //svm1/svm1_root -destination-path
//svm1/svm1_root_lsm2 -type LS -schedule weekly
[Job 564] Job is queued: snapmirror create for the relationship with destination "cluster1://
svm1/svm1_root_lsm2".
[Job 564] Job succeeded: SnapMirror: done
cluster1::>
3. Initialize the SnapMirror relationships between svm1's root volume and the newly created load-sharing
mirrors. All the mirrors can be updated with a single command, snapmirror initialize-ls-set. This
command uses the same //svm_name/volume_name syntax used for the source volume. The destination
volumes do not need to be specified because the cluster already knows about the load-sharing mirror
relationships.
From the cluster1 CLI:
cluster1::> snapmirror initialize-ls-set -source-path //svm1/svm1_root
[Job 565] Job is queued: snapmirror initialize-ls-set for source "cluster1://svm1/svm1_root".
cluster1::>
4. Create a new volume in svm1. The junction path for this new volume will be /parent2. /parent2 can
be thought of as a new directory under the root of svm1's namespace, which lies at /. As with the other
volume create commands, specify the SVM (by using the vserver parameter), the volume name, the
aggregate in which the volume will initially reside, and its size. In addition, specify the export policy to
use for controlling client access to the volume.
From the cluster1 CLI:
cluster1::> volume create -vserver svm1 -volume svm1_vol05 -size 1G -junction-path /parent2
5. At this point, you have a new volume in svm1, located in the namespace location /parent2. However,
because you have not updated the load-sharing mirror of the SVM root volume, this namespace location
is not visible.
If you do not yet have a PuTTY session open to the RHEL Linux client named rhel1, open one now
(right-click the PuTTY icon on the task bar, and select PuTTY from the context menu, username root,
password Netapp1!) and run the following command.
[root@rhel1 ~]# ls /mnt/svm1
parent
[root@rhel1 ~]#
Notice that you can see the volume parent, but not parent2?
6. To be able to see the new namespace location, the load-sharing mirror set must be updated. You can do
this update by using the snapmirror update-ls-set command, which has a command syntax similar to
the snapmirror initialize-ls-set command used earlier.
From the cluster1 CLI:
cluster1::> snapmirror update-ls-set -source-path //svm1/svm1_root
[Job 567] Job is queued: snapmirror update-ls-set for source "cluster1://svm1/svm1_root".
cluster1::>
7. Run the snapmirror show command to verify that the mirror relationships have finished their update.
Repeat until the mirror state is Snapmirrored and the relationship status is Idle.
From the cluster1 CLI:
cluster1::> snapmirror show
                                                                          Progress
Source                 Destination                    Mirror       Relationship Total    Last
Path              Type Path                           State        Status       Progress Healthy Updated
----------------- ---- ------------------------------ ------------ ------------ -------- ------- -------
cluster1://svm1/svm1_root
                  LS   cluster1://svm1/svm1_root_lsm1
                                                      Snapmirrored Idle         -        true    -
                       cluster1://svm1/svm1_root_lsm2
                                                      Snapmirrored Idle         -        true    -
2 entries were displayed.
cluster1::>
8. At this point, the new volume should be visible to clients. Go back to the Linux client and run the ls
command to verify that the volume can now be accessed, using ls /mnt/svm1.
From the Linux client:
[root@rhel1 ~]# ls /mnt/svm1
parent parent2
[root@rhel1 ~]#
You should be able to see both the parent and parent2 volumes.
3.4.1.1 IPspaces
An IPspace is a logical construct that represents a space containing unique IP addresses. With clustered Data
ONTAP 8.3, multiple SVMs can have overlapping IP addresses provided that each of those SVMs resides in a
different IPspace.
When you create an IPspace, it only needs a name. The command network ipspace create -ipspace my_ipspace creates
an IPspace called my_ipspace.
3.4.1.2 Broadcast Domains
The Default broadcast domain contains ports that are in the Default IPspace. These ports are used
primarily to serve data. Cluster management and node management ports are also in this broadcast
domain.
The Cluster broadcast domain contains ports that are in the Cluster IPspace. These ports are used
for cluster communication, and include all cluster ports from all nodes in the cluster.
If you create unique IPspaces to separate client traffic, then you must create a broadcast domain in each of those
IPspaces. If your cluster does not require separate IPspaces, then all broadcast domains (and all ports) reside in
the system-created Default IPspace.
When you create a broadcast domain, you need to specify the name of the broadcast domain, an IPspace, an
MTU value, and a list of ports.
3.4.1.3 Subnets
Subnets in clustered Data ONTAP 8.3 provide a way to provision blocks of IP addresses at a time. They simplify
network configuration by allowing the administrator to specify a subnet during LIF creation, rather than an IP
address and netmask. A subnet object in clustered Data ONTAP does not need to encompass an entire IP
subnet, or even a maskable range within a subnet.
A subnet is created within a broadcast domain, and it contains a pool of IP addresses. You can allocate IP
addresses in a subnet to ports in the broadcast domain when LIFs are created. When you remove the LIFs, the IP
addresses are returned to the subnet pool, and are available for future LIFs.
If you specify a gateway when defining a subnet, a default route to that gateway is automatically added to the
SVM when you create a LIF using that subnet.
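For example, a subnet that includes a gateway could be defined like this (a sketch only; the subnet name, address block, and gateway shown here are illustrative and are not part of this lab's configuration):

cluster1::> network subnet create -subnet-name demo-gw-subnet -broadcast-domain Default
 -ipspace Default -subnet 192.168.0.0/24 -gateway 192.168.0.1 -ip-ranges 192.168.0.180-192.168.0.189

Any SVM that later creates a LIF from demo-gw-subnet would automatically receive a default route to 192.168.0.1.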
3.4.2 Exercise
In this exercise, you will use the new networking constructs introduced with Data ONTAP 8.3: IPspaces,
broadcast domains, and subnets. You will examine the new network route command, and view the automatically
created failover groups.
Note: These steps are performed in the CLI because you can only create an IPspace through the CLI,
and the creation of a subnet through System Manager requires a default gateway. In most production
environments, System Manager is sufficient.
Tip: The following steps are performed on cluster2, not cluster1. You will need to open a new PuTTY
session to cluster2 for this exercise.
To create a new IPspace you use the network ipspace create command. This command requires only one
argument, -ipspace, which contains the name of the IPspace you want to create.
1. Create a new IPspace on cluster2. You will use this IPspace in the next steps to create a broadcast
domain and a subnet.
cluster2::> network ipspace create -ipspace new-ipspace
cluster2::> network ipspace show
IPspace             Vserver List                          Broadcast Domains
------------------- ------------------------------------- -----------------
Cluster             Cluster                               Cluster
Default             cluster2, svm1-dr, Default            Default
new-ipspace         new-ipspace                           -
3 entries were displayed.
cluster2::>
2. Create a new broadcast domain. You will need to specify a name, an IPspace in which it can reside, a
set of physical network ports, and an MTU value.
From the cluster2 CLI:
cluster2::> network port broadcast-domain create -ipspace new-ipspace -broadcast-domain
new-broadcast-domain -mtu 1500 -ports cluster2-01:e0g,cluster2-01:e0h
cluster2::> network port broadcast-domain show
IPspace Broadcast                                           Update
Name    Domain Name          MTU   Port List                Status Details
------- -------------------- ----- ------------------------ --------------
Cluster Cluster              9000
Default Default              1500  cluster2-01:e0a          complete
                                   cluster2-01:e0b          complete
                                   cluster2-01:e0c          complete
                                   cluster2-01:e0d          complete
                                   cluster2-01:e0e          complete
                                   cluster2-01:e0f          complete
new-ipspace
        new-broadcast-domain 1500  cluster2-01:e0g          complete
                                   cluster2-01:e0h          complete
3 entries were displayed.
cluster2::>
3. Create a new subnet using your newly created IPspace and broadcast domain. The subnet object
requires a name, a broadcast domain, an IPspace, a subnet mask, and a range of IP addresses.
From the cluster2 CLI:
cluster2::> network subnet create -subnet-name new-subnet -broadcast-domain
new-broadcast-domain -ipspace new-ipspace -subnet 192.168.0.0/24
-ip-ranges 192.168.0.170-192.168.0.179
cluster2::> network subnet show
IPspace: Default
Subnet                      Broadcast                             Avail/
Name       Subnet           Domain               Gateway          Total  Ranges
---------- ---------------- -------------------- ---------------- ------ ---------------------------

IPspace: new-ipspace
Subnet                      Broadcast                             Avail/
Name       Subnet           Domain               Gateway          Total  Ranges
---------- ---------------- -------------------- ---------------- ------ ---------------------------
new-subnet 192.168.0.0/24   new-broadcast-domain -                10/10  192.168.0.170-192.168.0.179
cluster2::>
4. The network route command is new in clustered Data ONTAP 8.3. Use this command to view routing
information without viewing routing groups.
You will not see any changes to the routing table output that are caused by the creation of the IPspace,
broadcast domain, and subnet in the previous step because you have not created any SVMs that use the
IPspace, broadcast domain, or subnet.
From the cluster2 CLI:
cluster2::> network route show
Vserver             Destination     Gateway         Metric
------------------- --------------- --------------- ------
cluster2
                    0.0.0.0/0       192.168.0.1     20
cluster2::>
5. Because all ports in a layer 2 broadcast domain provide the same network connectivity, LIF failover
groups are created automatically in clustered Data ONTAP 8.3 when a broadcast domain is created. Use
the network interface failover-groups show command to view automatically created failover groups.
The automatically configured failover groups have the same name as the broadcast domain that you
created.
From the cluster2 CLI:
cluster2::> network interface failover-groups show
                  Failover
Vserver           Group                Targets
----------------- -------------------- -------------------------------------------
cluster2          Default              cluster2-01:e0a, cluster2-01:e0b,
                                       cluster2-01:e0c, cluster2-01:e0d,
                                       cluster2-01:e0e, cluster2-01:e0f
new-ipspace       new-broadcast-domain
                                       cluster2-01:e0g, cluster2-01:e0h
2 entries were displayed.
cluster2::>
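To see how these objects fit together, this is roughly what creating a data LIF from the new subnet would look like (a hypothetical sketch; no SVM exists in new-ipspace in this lab, so the SVM name here is illustrative):

cluster2::> network interface create -vserver demo-svm -lif demo_lif1 -role data
 -data-protocol nfs -home-node cluster2-01 -home-port e0g -subnet-name new-subnet

The LIF would take the next free address from the 192.168.0.170-192.168.0.179 range, and its failover group would default to the new-broadcast-domain group shown above.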
You can create IPspaces by using only the CLI, but you can create subnet objects and broadcast
domains by using either the CLI, or System Manager. In this subsection, you will learn about the System
Manager capabilities for modifying these objects.
First, examine the options available to modify an existing broadcast domain.
6. In Chrome, click the tab for cluster2, and sign in to System Manager (username admin, password
Netapp1!).
7. In the left pane, click the Cluster tab.
8. In the left pane, navigate to cluster2 > Configuration > Network.
9. In the Network pane, click the Broadcast Domains tab.
10. Click Refresh to make sure that you are seeing the latest information.
11. In the Broadcast Domain list, select the new-broadcast-domain entry.
12. Click Edit.
Figure 3-6:
The Edit Broadcast Domains dialog box opens. Examine the options available to modify the broadcast
domain.
13. Click Cancel to close the dialog box.
Figure 3-7:
14. In the Network pane, click the Subnets tab.
15. Click Refresh to make sure that you are seeing the latest information.
16. In the Subnets list, select the new-subnet entry.
17. Click Edit.
Figure 3-8:
The Edit Subnet dialog box opens. Examine the options available to modify the subnet.
18. In the Broadcast Domain area of the dialog box, expand Show ports on this domain. Review the
various settings.
19. When finished, click Cancel to discard any changes you might have made.
Figure 3-9:
Tip:
Export policies, which restrict which clients can access an exported volume or share, are not covered in
this lab, but export policy misconfiguration is a common problem that can easily be misinterpreted as a
networking problem. If you are able to reach a data LIF through the network by using a utility such as ping,
and you have verified that protocol access is enabled and configured properly, check your export policy
configuration to verify that it allows access from the client you are attempting to use.
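A quick way to review that configuration from the CLI is sketched below, using svm1 from this lab (the policies and rules you see will depend on how the SVM was set up):

cluster1::> vserver export-policy show -vserver svm1
cluster1::> vserver export-policy rule show -vserver svm1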
If you would like to learn more about export policies and how to troubleshoot them, please refer to the
"Securing Clustered Data ONTAP" lab.
3.5.1 Exercise
In this activity, you examine the QoS configuration using System Manager on cluster1. This exercise uses a
workload generator to drive I/O to an SVM on cluster1. After the workload generator starts, you will configure QoS
and see the reduction of I/O operations serviced to the workload generator.
The workload generator runs directly on the Windows jumphost, and targets I/O to the drive letter Z:. The
jumphost has the drive letter Z: mapped to a CIFS share on svm1 in cluster1. The CIFS share is defined on the
volume svm1_vol01 inside svm1.
Note: This exercise uses the PuTTY session for cluster1.
From the Windows host JUMPHOST:
1. Double-click the workload.bat file on the left side of the desktop to start the workload generator.
Figure 3-10:
2. A Windows command prompt window opens, and starts outputting metrics about the I/O load that it is
generating against the share mounted on the jump host Z: drive.
Figure 3-11:
In particular, note the values shown for the ios: field that quantifies that I/O load. In this exercise, you
will configure QoS to limit these I/O operations, thus reducing the amount of load serviced by the cluster.
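The following steps apply the limit with System Manager. For reference, the same thing can be done from the CLI; the sketch below assumes the policy group name used later in this exercise (100-KB-sec) and a 100 KB/s throughput ceiling, which is an assumption based on that name:

cluster1::> qos policy-group create -policy-group 100-KB-sec -vserver svm1 -max-throughput 100KB/s
cluster1::> volume modify -vserver svm1 -volume svm1_vol01 -qos-policy-group 100-KB-sec

If you use the CLI instead, skip the corresponding System Manager steps so that the policy group is not created twice.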
3. In System Manager for cluster1, click the browser tab for cluster1.
4. In the left pane, click the Storage Virtual Machines tab.
5. Navigate to cluster1 > svm1 > Policies > QoS Policy Groups.
6. In the QoS Policy Group pane, click Create.
Figure 3-12:
Figure 3-13:
The Create Policy Group dialog box closes, and you return to the System Manager window.
9. Your newly created policy should be listed in the QoS Policy Groups pane.
Figure 3-14:
10. In the left pane of System Manager, navigate to Storage Virtual Machines > cluster1 > svm1 >
Storage > Volumes.
11. In the Volumes pane, select the svm1_vol01 volume.
12. From the buttons at the top of the Volumes pane, click the Storage QoS button. If your browser window
is not wide enough to display all the buttons, you can click the small >> button at the right end of the
row to reveal the hidden buttons. If you do not even see the >> button, try widening your browser
window.
Figure 3-15:
The Quality of Service Details dialog box opens.
13. Select the Manage Storage Quality of Service checkbox.
14. Click the option to assign the volume to an Existing Policy Group.
15. Click Choose.
Figure 3-16:
The Select Policy Group dialog box opens.
16. Select the 100-KB-sec policy group you created earlier.
17. Click OK.
Figure 3-17:
The Select Policy Group dialog box closes, and you return to the Quality of Service Details dialog
box.
18. Click OK to apply the policy group to the svm1_vol01 volume. This policy group assignment takes effect
as soon as you click OK.
Figure 3-18:
19. Quickly go back to the command prompt window that is outputting the metrics from your load generator,
and observe that the reported ios: metric has dropped significantly from its previous level. In the
example in the screenshot, the ios: values dropped from the 1500 range down to 100 (note the
highlighting in the screenshot).
Figure 3-19:
20. With the workload generator window in focus, press Ctrl-C. When asked if you want to terminate the
batch job, answer y.
Figure 3-20:
The workload generator window closes, ending this exercise.
3.6 SnapMirror
SnapMirror is the asynchronous replication technology used in clustered Data ONTAP. Asynchronous replication
refers to data that is replicated (backed up to the same site, or an alternate site) on a periodic interval, rather than
as soon as the data is written.
MetroCluster, introduced with clustered Data ONTAP 8.3, provides synchronous replication. Synchronous
replication refers to data that is replicated (backed up to the same site, or an alternate site) as soon as the data is
written. MetroCluster configuration is outside the scope of this lab.
Clustered Data ONTAP 8.3 provides a number of SnapMirror enhancements, including version-flexible
SnapMirror functionality that allows the source of a SnapMirror relationship to be upgraded first (assuming that
the source and destination both run clustered Data ONTAP 8.3 or later).
In this lab activity, you create a version-flexible SnapMirror relationship between two volumes in cluster1 and
cluster2. To do this, you first set up cluster peering between cluster1 and cluster2 by adding LIFs dedicated to
intercluster peering, then establish an authenticated relationship between the clusters. After the cluster peering
relationship is created, you will create a SnapMirror relationship between a volume on cluster1 (that serves as the
source of the SnapMirror relationship), and another volume on cluster2 (that serves as the disaster recovery (DR)
copy).
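The exercise below walks through this with System Manager. For reference, the equivalent CLI flow looks roughly like the following sketch, run once the intercluster LIFs exist; the addresses are the example values used later in this exercise, and yours may differ:

cluster1::> cluster peer create -peer-addrs 192.168.0.163
cluster2::> cluster peer create -peer-addrs 192.168.0.158,192.168.0.159
cluster1::> cluster peer show

Both cluster peer create commands prompt for the same passphrase (this lab uses Netapp1!), and the relationship becomes available once both sides have been created.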
3.6.1 Exercise
Figure 3-21:
6. Set the name to intercluster_lif1.
7. In the Interface Role section, select the Intercluster Connectivity option.
8. In the Port section, expand the Port or Adapters list for cluster1-01, and select an available port.
9. Click Create.
Figure 3-22:
The dialog box closes and you return to the System Manager window.
10. Your newly created intercluster_lif1 LIF should be listed under the Network Interface tab in the
Networks pane.
11. Every node in cluster1 requires a cluster interconnect LIF, and since cluster1 is a two-node cluster, you
also need to create a cluster interconnect LIF for cluster1-02. Click the Create button again.
Figure 3-23:
12. Set the name to intercluster_lif2.
13. In the Interface Role section, select the Intercluster Connectivity option.
14. In the Port section, expand the Port or Adapters list for cluster1-02, and select an available port.
15. Click Create.
Figure 3-24:
The dialog box closes, and you return to the System Manager window. At this point, you have an
intercluster LIF on each node in cluster1. When you created both intercluster LIFs, you accepted the
default to have Data ONTAP automatically select an IP address from the subnet. Review those LIFs to
verify which IP addresses Data ONTAP assigned to the intercluster LIFs.
16. System Manager should still show the Network Interface list in the Network pane. Scroll down to the
bottom of the list to see the entries for the new intercluster LIFs that you created. The IP addresses of
those LIFs are included in the list entries.
17. If you click a specific LIF, you can see more detail displayed on the bottom of the pane.
In this example, the IP addresses for the intercluster LIFs are 192.168.0.158 and 192.168.0.159.
However, because Data ONTAP automatically assigns these addresses, it is possible that the values in
your lab are different from the values in the example.
Attention: Record the actual addresses assigned to the intercluster LIFs in your lab because
you will need them for a later step of the lab.
Figure 3-25:
After you create the intercluster LIFs for cluster1, create the intercluster LIFs for cluster2. cluster2
contains a single node, so you will create only one intercluster LIF for this cluster.
18. In your Chrome browser, click the browser tab for cluster2.
19. In the left pane, click the Cluster tab.
20. Navigate to cluster2 > Configuration > Network.
21. In the Network pane, click the Network Interfaces tab.
22. In the Network pane, click Create.
Figure 3-26:
The Create Network Interface dialog box opens.
23. Set the name to intercluster_lif1 (you can use the same name here that you used on cluster1
because LIF names are scoped to the containing cluster).
24. In the Interface Role section, select the Intercluster Connectivity option.
25. In the Port section, expand the Port or Adapters list for cluster2-01, and select port e0c.
26. Click Create.
Figure 3-27:
The dialog box closes, and you return to the System Manager window.
27. Record the IP address that Data ONTAP automatically assigned to your LIF. In this example, the
address is 192.168.0.163, but the value may be different in your lab.
Figure 3-28:
Cluster2 only contains a single node, so this one intercluster LIF is all you need.
Now that all your nodes have intercluster LIFs, it's time to establish the cluster peering relationship.
28. In your Chrome browser, click the browser tab for cluster1.
29. In the left pane, click the Cluster tab.
30. Navigate to cluster1 > Configuration > Peers.
31. In the Peers pane, click Create.
Figure 3-29:
The Create Cluster Peer dialog box opens.
32. In the Passphrase box enter Netapp1!.
33. In the Intercluster IP Addresses box, add the IP address that you noted earlier for the intercluster LIF
(intercluster_lif1) from the node cluster2-01.
Caution: In the example shown in this lab, the address was 192.168.0.163, but the address
that Data ONTAP assigned to the LIF in your lab may be different.
34. Click the Create button.
Figure 3-30:
The Confirm Create Cluster Peer dialog box opens.
35. Click OK.
Figure 3-31:
The dialog box closes, and you return to the System Manager window.
36. An entry for cluster2 now appears in the Peers list, but it is shown as unavailable because the
authentication status is still pending. You have initiated a cluster peering operation from cluster1, but to
complete it, cluster2 must also accept the peering request.
Figure 3-32:
Switch back to cluster2 so that you can accept the cluster peering operation.
37. In your Chrome browser, click the browser tab for cluster2.
38. In the left pane, click the Cluster tab.
39. Navigate to cluster2 > Configuration > Peers.
40. In the Peers pane, click Create.
Figure 3-33:
The Create Cluster Peer dialog box opens.
41. In the Passphrase box enter the same password you used earlier, Netapp1!.
42. In the Intercluster IP addresses box enter the IP addresses that you noted earlier for the intercluster
LIFs (intercluster_lif1 and intercluster_lif2) from the nodes cluster1-01 and cluster1-02.
Caution: In the example shown in this lab, those addresses were 192.168.0.158 and
192.168.0.159, but the addresses that Data ONTAP assigned to the LIFs in your lab may be
different.
43. When finished entering the values, click the Create button.
Figure 3-34:
The Confirm Create Cluster Peer dialog box opens.
44. Click the OK button.
Figure 3-35:
The dialog box closes, and you return to the System Manager window.
45. System Manager takes a few moments to create the peer relationship between cluster1 and cluster2.
The authentication status for that relationship should change to ok immediately, but the Availability
column will show peering.
46. Wait a few seconds, then click Refresh every 12 seconds until the Availability column changes from
peering to available.
Figure 3-36:
At this point, the two clusters have an established peering relationship. Next, you can create a
SnapMirror relationship.
Figure 3-37:
The Create Mirror Relationship dialog box opens.
7. In the Destination Volume section, verify that the Cluster list is set to cluster2, and set the Storage Virtual
Machine list to svm1-dr.
8. Note the warning under this list saying that the selected SVM is not peered. Click the Authenticate link
at the end of that sentence.
Figure 3-38:
The Authentication dialog box opens.
9. Set the user name to admin, and the password Netapp1!.
10. Click OK.
Figure 3-39:
The Authentication dialog box closes and the system processes the SVM peering operation. After a few
seconds, you return to the Create Mirror Relationship dialog box.
11. In the Destination Volume section, accept the default values that System Manager populated into the
Volume Name box (svm1_svm1_vol01_mirror1) and the Aggregate box (aggr1_cluster2_01).
12. In the Configuration Details section, select the Create version flexible mirror relationship
checkbox. This is a new feature introduced in 8.3 that removes the limitation requiring the destination
controller to have a clustered Data ONTAP operating system major version number equal to or
higher than the major version of the source controller. This allows customers to maintain uninterrupted
replication during Data ONTAP upgrade cycles.
13. In the Mirror Schedule list, select the daily value.
14. When finished, click Create.
Figure 3-40:
The Create Mirror Relationship wizard begins the process of establishing and initializing the SnapMirror
relationship between the volumes.
15. When the status of all the initialization operations indicate success, click OK.
Figure 3-41:
You have now successfully established a SnapMirror relationship. To verify the status of that
relationship you'll need to look at the destination cluster.
16. In Chrome, select the browser tab for cluster2.
17. Select the Storage Virtual Machines tab.
18. Navigate to cluster2 > svm1-dr > Protection.
19. In the Protection pane, select the relationship for source volume svm1_vol01. This should be the only
relationship listed.
20. In the lower pane, click the Details tab.
21. Examine the details of this relationship, which indicate that it is healthy and that the last transfer
completed a few moments ago.
Figure 3-42:
This completes the exercise.
Figure 3-43:
When -identity-preserve is set to false, only a subset of the source SVM's configuration data is replicated to the
destination SVM, as described in the following figure. This mode is intended for replication to different sites that
have different network resources, or to support the creation of additional read-only copies of the SVM within the
same environment as the source SVM.
Figure 3-44:
As with traditional volume SnapMirror, SVM DR relationships can be broken off, reversed, and resynchronized,
allowing you to cut over the SVM's services from one cluster to another. If -identity-preserve is set to true, then
when you stop the source SVM and start the destination SVM, the destination SVM has the same LIFs, IP
addresses, namespace structure, and so on. However, such a switchover is disruptive for both CIFS (which
requires an SMB reconnect) and NFS (which requires a re-mount).
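At a high level, the cut-over itself is a short CLI sequence; here is a sketch using the SVMs from the exercise that follows (treat it as an outline of the flow rather than the exact lab steps):

cluster1::> vserver stop -vserver svm3
cluster2::> snapmirror update -destination-path svm3-dr:
cluster2::> snapmirror break -destination-path svm3-dr:
cluster2::> vserver start -vserver svm3-dr

Stopping the source first, performing a final update, breaking the relationship, and then starting the destination is what makes svm3-dr the operational primary.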
SVM DR does not replicate iSCSI or FCP configuration in either -identity-preserve mode. The underlying volumes,
LUNs, and namespace are still replicated, as are the LIFs if -identity-preserve is set to true, but LUN igroups and
portsets will not be replicated, nor will the SVM's iSCSI/FCP protocol configuration. If you want to support iSCSI/
FCP through an SVM DR relationship, then you will have to manually configure the iSCSI/FCP protocols, igroups,
and portsets on the destination SVM.
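If you did need block access after a cut-over, the destination-side configuration would look something like the following sketch (the igroup name and initiator IQN here are hypothetical placeholders, not values from this lab):

cluster2::> vserver iscsi create -vserver svm3-dr
cluster2::> lun igroup create -vserver svm3-dr -igroup linux_hosts -protocol iscsi -ostype linux
 -initiator iqn.1994-05.com.redhat:example-host
cluster2::> lun igroup show -vserver svm3-dr

You would then map the replicated LUNs to the new igroup before clients could reconnect.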
3.7.1 Exercise
In this exercise you will be creating an identity-preserve "true" SVM DR relationship from the source SVM
svm3 on cluster1 to a new SVM named svm3-dr that you will be creating on cluster2. You will then perform a cutover operation, making svm3-dr the new operational primary, and then revert the primary back to svm3 again.
Note: This lab utilizes CLI sessions to the storage clusters cluster1 and cluster2, and to the Linux client
rhel1. You will be frequently switching between these sessions, so pay attention to the command prompts in
this exercise to help you issue the commands on the correct hosts.
1. Open a PuTTY session to each of cluster1 and cluster2, and log in with the username admin and the
password Netapp1!.
2. Open a PuTTY session to rhel1, and log in as root with the password Netapp1!.
3. In the PuTTY session for cluster2, display a list of the SVMs on the cluster.
cluster2::> vserver show
                                       Admin      Operational Root
Vserver     Type    Subtype            State      State       Volume     Aggregate
----------- ------- ------------------ ---------- ----------- ---------- ----------
cluster2    admin   -                  -          -           -          -
cluster2-01 node    -                  -          -           -          -
svm1-dr     data    default            running    running     svm1dr_    aggr1_
                                                              root       cluster2_
                                                                         01
svm3-dr     data    dp-destination     running    stopped     -          -
4 entries were displayed.
cluster2::>
Notice that the svm3-dr SVM is administratively running but is operationally stopped.
6. On cluster2, initiate an SVM peering relationship between the svm3-dr and svm3.
cluster2::> vserver peer create -vserver svm3-dr -peer-vserver svm3 -applications snapmirror
-peer-cluster cluster1
Info: [Job 315] 'vserver peer create' job queued
cluster2::>
cluster2::> vserver peer show
            Peer        Peer         Peering
Vserver     Vserver     State        Applications
----------- ----------- ------------ ------------------
svm1-dr     svm1        peered       snapmirror
svm3-dr     svm3        peered       snapmirror
2 entries were displayed.
cluster2::>

cluster1::> vserver peer show
            Peer        Peer         Peering
Vserver     Vserver     State        Applications
----------- ----------- ------------ ------------------
svm1        svm1-dr     peered       snapmirror
svm3        svm3-dr     peered       snapmirror
2 entries were displayed.
cluster1::>
11. On cluster2, create the SnapMirror relationship between the source SVM svm3 and the destination
SVM svm3-dr.
cluster2::> snapmirror create -source-path svm3: -destination-path svm3-dr: -type DP
-throttle unlimited -identity-preserve true -schedule hourly
cluster2::>
If you are familiar with creating volume SnapMirror relationships from the CLI then this command
should look familiar, as it is essentially the same command used for volume SnapMirror, but with a few
key differences. Most significant is the format of the values for the -source-path and -destination-path
arguments. Path values for volume SnapMirror take the form <svm>:<volume>, whereas for SVM
DR, paths take the form <svm>:. One other difference is the inclusion of the -identity-preserve true
option, which indicates that this is an identity preserve relationship, meaning that all of the SVM's
configuration information should be replicated to the destination SVM. If you were to instead specify
-identity-preserve false, then this would instead be an identity discard relationship.
12. Display the state of the cluster's SnapMirror relationships.
cluster2::> snapmirror show
                                                                  Progress
Source          Destination  Mirror        Relationship  Total    Last
Path       Type Path         State         Status        Progress Healthy Updated
---------- ---- ------------ ------------- ------------- -------- ------- --------
svm1:svm1_vol01
           XDP  svm1-dr:svm1_svm1_vol01_mirror1
                             Snapmirrored  Idle          -        true    -
svm3:      DP   svm3-dr:     Uninitialized Idle          -        true    -
2 entries were displayed.
cluster2::>
Data ONTAP has created the relationship, but not yet initialized it (i.e. it has not initiated the first data
transfer).
13. Initialize the SnapMirror relationship.
cluster2::> snapmirror initialize -destination-path svm3-dr:
cluster2::>
When you initialize an SVM DR relationship, clustered Data ONTAP starts replicating the configuration
data first, which includes details of the source SVM's volumes, and then afterward starts replicating
the source SVM's constituent volumes. If you issue a snapmirror show -expand command early in the
initialization process, then the constituent relationships may not yet exist.
16. Periodically repeat the snapmirror show -expand command until you start seeing output for the
constituent relationships.
cluster2::> snapmirror show -expand
                                                                  Progress
Source          Destination  Mirror        Relationship  Total    Last
Path       Type Path         State         Status        Progress Healthy Updated
---------- ---- ------------ ------------- ------------- -------- ------- --------
svm1:svm1_vol01
           XDP  svm1-dr:svm1_svm1_vol01_mirror1
                             Snapmirrored  Idle          -        true    -
svm3:      DP   svm3-dr:     Uninitialized Transferring  -        true    -
svm3:chn   DP   svm3-dr:chn  Uninitialized Idle          -        true    -
svm3:eng   DP   svm3-dr:eng  Uninitialized Idle          -        true    -
svm3:fin   DP   svm3-dr:fin  Uninitialized Idle          -        true    -
svm3:mfg   DP   svm3-dr:mfg  Uninitialized Idle          -        true    -
svm3:prodA DP   svm3-dr:prodA
                             Uninitialized Idle          -        true    -
svm3:proj1 DP   svm3-dr:proj1
                             Uninitialized Idle          -        true    -
8 entries were displayed.
cluster2::>
Once the parent relationship's Mirror State changes to Snapmirrored, the relationship has completed initialization, meaning that the destination SVM is now a mirrored copy of the source SVM.
18. Examine the status of the constituent relationships.
cluster2::> snapmirror show -expand
                                                                        Progress
Source           Destination       Mirror         Relationship  Total     Last
Path       Type  Path              State          Status        Progress  Healthy Updated
---------- ----  ----------------  -------------- ------------- --------- ------- -------
svm1:svm1_vol01
           XDP   svm1-dr:svm1_svm1_vol01_mirror1
                                   Snapmirrored   Idle          -         true    -
svm3:      DP    svm3-dr:          Snapmirrored   Idle          -         true    -
svm3:chn   DP    svm3-dr:chn       Snapmirrored   Idle          -         true    -
svm3:eng   DP    svm3-dr:eng       Snapmirrored   Idle          -         true    -
svm3:fin   DP    svm3-dr:fin       Snapmirrored   Idle          -         true    -
svm3:mfg   DP    svm3-dr:mfg       Snapmirrored   Idle          -         true    -
svm3:prodA DP    svm3-dr:prodA     Snapmirrored   Idle          -         true    -
svm3:proj1 DP    svm3-dr:proj1     Snapmirrored   Idle          -         true    -
svm3:us    DP    svm3-dr:us        Snapmirrored   Idle          -         true    -
9 entries were displayed.
cluster2::>
You see here that svm3-dr has 8 volumes, which correspond to the 8 volumes on svm3. Also notice the two MDV* volumes at the beginning of the output; these are special volumes that clustered Data ONTAP uses to replicate the SVM DR configuration data from the source SVM to the destination SVM.
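If you want to list the destination SVM's volumes yourself, a command along the following lines shows them (a sketch; your output will vary):

cluster2::> volume show -vserver svm3-dr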
20. Display a list of the volume snapshots for svm3-dr.
Note: Since this command output is lengthy, the following CLI examples focus on just the eng volume, but in your lab feel free to omit the -volume eng portion of the command so you can see the snapshots for all of svm3-dr's volumes.
cluster2::> snapshot show -vserver svm3-dr -volume eng
                                                                  ---Blocks---
Vserver  Volume   Snapshot                                    Size Total% Used%
-------- -------- ------------------------------------- ---------- ------ -----
svm3-dr  eng
                  daily.2015-10-03_0010                       168KB     0%   37%
                  daily.2015-10-04_0010                        84KB     0%   23%
                  weekly.2015-10-04_0015                      192KB     0%   40%
                  hourly.2015-10-04_1205                      144KB     0%   33%
                  hourly.2015-10-04_1305                      148KB     0%   34%
                  hourly.2015-10-04_1405                      144KB     0%   33%
                  hourly.2015-10-04_1505                      152KB     0%   35%
                  hourly.2015-10-04_1605                      156KB     0%   35%
                  hourly.2015-10-04_1705                      148KB     0%   34%
                  vserverdr.0.502f4455-6abf-11e5-b770-005056990685.2015-10-04_175125
                                                                 0B     0%    0%
10 entries were displayed.
cluster2::>
The list of snapshots is the same on both the source and destination volumes.
22. On cluster2, initiate a SnapMirror update to transfer to the destination SVM any changes made on the source SVM since the last transfer.
cluster2::> snapmirror update -destination-path svm3-dr:
cluster2::>
23. Periodically view the status of the SnapMirror relationships until the update completes and the relationship returns to Idle.
cluster2::> snapmirror show
                                                                        Progress
Source           Destination       Mirror         Relationship  Total     Last
Path       Type  Path              State          Status        Progress  Healthy Updated
---------- ----  ----------------  -------------- ------------- --------- ------- -------
svm1:svm1_vol01
           XDP   svm1-dr:svm1_svm1_vol01_mirror1
                                   Snapmirrored   Idle          -         true    -
svm3:      DP    svm3-dr:          Snapmirrored   Idle          -         true    -
2 entries were displayed.
cluster2::>
cluster2::> snapmirror show -expand
                                                                        Progress
Source           Destination       Mirror         Relationship  Total     Last
Path       Type  Path              State          Status        Progress  Healthy Updated
---------- ----  ----------------  -------------- ------------- --------- ------- -------
svm1:svm1_vol01
           XDP   svm1-dr:svm1_svm1_vol01_mirror1
                                   Snapmirrored   Idle          -         true    -
svm3:      DP    svm3-dr:          Snapmirrored   Transferring  896KB     true    -
svm3:chn   DP    svm3-dr:chn       Snapmirrored   Idle          -         true    -
svm3:eng   DP    svm3-dr:eng       Snapmirrored   Idle          -         true    -
svm3:fin   DP    svm3-dr:fin       Snapmirrored   Idle          -         true    -
svm3:mfg   DP    svm3-dr:mfg       Snapmirrored   Idle          -         true    -
svm3:prodA DP    svm3-dr:prodA     Snapmirrored   Idle          -         true    -
svm3:proj1 DP    svm3-dr:proj1     Snapmirrored   Idle          -         true    -
svm3:us    DP    svm3-dr:us        Snapmirrored   Idle          -         true    -
9 entries were displayed.
cluster2::>
If you list the destination volume's snapshots again, there are now two vserverdr* snapshots. After the first update, SnapMirror maintains two rolling snapshots on the destination volume going forward.
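To see only the rolling snapshots that SnapMirror maintains, you can filter the snapshot listing with a wildcard pattern, roughly as follows (a sketch):

cluster2::> snapshot show -vserver svm3-dr -volume eng -snapshot vserverdr*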
26. On cluster1, look at the snapshots on the source volumes.
cluster1::> snapshot show -vserver svm3 -volume eng
                                                                  ---Blocks---
Vserver  Volume   Snapshot                                    Size Total% Used%
-------- -------- ------------------------------------- ---------- ------ -----
svm3     eng
                  daily.2015-10-03_0010                       168KB     0%   37%
                  daily.2015-10-04_0010                        84KB     0%   23%
                  weekly.2015-10-04_0015                      192KB     0%   40%
                  hourly.2015-10-04_1205                      144KB     0%   34%
                  hourly.2015-10-04_1305                      148KB     0%   34%
                  hourly.2015-10-04_1405                      144KB     0%   34%
                  hourly.2015-10-04_1505                      152KB     0%   35%
                  hourly.2015-10-04_1605                      156KB     0%   35%
                  hourly.2015-10-04_1705                      152KB     0%   35%
                  vserverdr.1.502f4455-6abf-11e5-b770-005056990685.2015-10-04_175646
                                                               96KB     0%   25%
10 entries were displayed.
cluster1::>
Even after the first update, the source volumes continue to host a single rolling snapshot for SnapMirror.
27. The Linux host rhel1 has svm3's root namespace volume NFS-mounted at the start of the lab. Display the /etc/fstab entry for this mount. (The /etc/fstab file lists the local disks and NFS filesystems that should be automatically mounted at system boot time.)
[root@rhel1 ~]# grep svm3 /etc/fstab
svm3:/     /corp     nfs     defaults     0 0
[root@rhel1 ~]#
cluster2::> vserver show
                                       Admin      Operational Root
Vserver      Type    Subtype           State      State       Volume       Aggregate
------------ ------- ----------------- ---------- ----------- ------------ -----------------
cluster2     admin   -                 -          -           -            -
cluster2-01  node    -                 -          -           -            -
svm1-dr      data    default           running    running     svm1dr_root  aggr1_cluster2_01
svm3-dr      data    default           running    stopped     svm3_root    aggr1_cluster2_01
cluster2::>
It is administratively running but operationally stopped, as it should be since you have not cut over yet.
36. Examine the status of svm3-dr's LIFs.
cluster2::> net int show -vserver svm3-dr
  (network interface show)
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
svm3-dr
            svm3_cifs_nfs_lif1
                       up/down    192.168.0.143/24   cluster2-01   e0e     true
            svm3_cifs_nfs_lif2
                       up/down    192.168.0.144/24   cluster2-01   e0c     true
2 entries were displayed.
cluster2::>
cluster1::> vserver show
                                       Admin      Operational Root
Vserver      Type    Subtype           State      State       Volume       Aggregate
------------ ------- ----------------- ---------- ----------- ------------ -----------------
cluster1     admin   -                 -          -           -            -
cluster1-01  node    -                 -          -           -            -
cluster1-02  node    -                 -          -           -            -
svm1         data    default           running    running     svm1_root    aggr1_cluster1_01
svm2         data    default           running    running     svm2_root    aggr1_cluster1_02
svm3         data    default           running    running     svm3_root    aggr1_cluster1_01

cluster1::> net int show -vserver svm3
  (network interface show)
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
svm3
            svm3_cifs_nfs_lif1
                       up/up      192.168.0.143/24   cluster1-01   e0d     true
            svm3_cifs_nfs_lif2
                       up/up      192.168.0.144/24   cluster1-01   e0e     true
2 entries were displayed.
cluster1::>
The LIFs are both up. If you compare the IP addresses on these LIFs with the ones you saw a couple of steps back for svm3-dr, you will see that they are the same. This is because you specified the -identity-preserve true option when you established the SVM disaster recovery relationship at the beginning of this exercise.
39. Stop svm3.
cluster1::> vserver stop -vserver svm3
[Job 1033] Job is queued: Vserver Stop.
[Job 1033] Job succeeded: DONE
cluster1::>
cluster1::> vserver show
                                       Admin      Operational Root
Vserver      Type    Subtype           State      State       Volume       Aggregate
------------ ------- ----------------- ---------- ----------- ------------ -----------------
cluster1     admin   -                 -          -           -            -
cluster1-01  node    -                 -          -           -            -
cluster1-02  node    -                 -          -           -            -
svm1         data    default           running    running     svm1_root    aggr1_cluster1_01
svm2         data    default           running    running     svm2_root    aggr1_cluster1_02
svm3         data    default           stopped    stopped     svm3_root    aggr1_cluster1_01
cluster1::>
SnapMirror creates the relationship. Because a relationship between these two SVMs already existed in the opposite direction before it was broken off, the Mirror State shows as Broken-off here.
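For reference, a reverse relationship of this kind is created with the same pattern as the forward relationship, with the source and destination paths swapped, roughly as follows (a sketch based on the options used earlier in this exercise):

cluster1::> snapmirror create -source-path svm3-dr: -destination-path svm3: -type DP -identity-preserve true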
60. Re-sync the relationship.
cluster1::> snapmirror resync -destination-path svm3:
cluster1::>
62. Periodically display the status of the constituent relationships until they all show Idle.
cluster1::> snapmirror show -expand
                                                                          Progress
Source           Destination        Mirror        Relationship   Total     Last
Path       Type  Path               State         Status         Progress  Healthy Updated
---------- ----  -----------------  ------------- -------------- --------- ------- -------
svm3-dr:   DP    svm3:              Broken-off    Transferring   2.47MB    true    -
svm3-dr:chn
           DP    svm3:chn           Snapmirrored  Idle           -         true    -
svm3-dr:eng
           DP    svm3:eng           Snapmirrored  Idle           -         true    -
svm3-dr:fin
           DP    svm3:fin           Snapmirrored  Idle           -         true    -
svm3-dr:mfg
           DP    svm3:mfg           Snapmirrored  Idle           -         true    -
svm3-dr:prodA
           DP    svm3:prodA         Snapmirrored  Idle           -         true    -
svm3-dr:prodB
           DP    svm3:prodB         Snapmirrored  Idle           -         true    -
svm3-dr:proj1
           DP    svm3:proj1         Snapmirrored  Idle           -         true    -
svm3-dr:us DP    svm3:us            Snapmirrored  Idle           -         true    -
cluster1://svm1/svm1_root
           LS    cluster1://svm1/svm1_root_lsm1
                                    Snapmirrored  Idle           -         true    -
                 cluster1://svm1/svm1_root_lsm2
                                    Snapmirrored  Idle           -         true    -
10 entries were displayed.
cluster1::>
If you pay attention to the status of the relationship for the prodB volume while running these
commands (and if you are fast enough), you'll see it go from Uninitialized to Transferring to Idle while
the other relationships go from Broken-off to Re-synching to Idle.
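If you want to watch just the prodB constituent, a narrower query such as the following should work (a sketch):

cluster1::> snapmirror show -destination-path svm3:prodB -fields state,status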
63. View the status of the parent relationship.
cluster1::> snapmirror show
                                                                          Progress
Source           Destination        Mirror        Relationship   Total     Last
Path       Type  Path               State         Status         Progress  Healthy Updated
---------- ----  -----------------  ------------- -------------- --------- ------- -------
svm3-dr:   DP    svm3:              Snapmirrored  Idle           -         true    -
cluster1://svm1/svm1_root
           LS    cluster1://svm1/svm1_root_lsm1
                                    Snapmirrored  Idle           -         true    -
                 cluster1://svm1/svm1_root_lsm2
                                    Snapmirrored  Idle           -         true    -
3 entries were displayed.
cluster1::>
Back on rhel1, attempts to access the NFS mount of svm3's namespace now fail with a stale file handle error, because you did not unmount the NFS filesystem prior to the latest SVM DR cutover.
76. Change out of the /corp directory tree so you can unmount the NFS volume.
[root@rhel1 prodB]# cd
[root@rhel1 ~]#
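On a typical Linux host, the stale mount is then cleared by unmounting and remounting the filesystem, roughly as follows (a sketch, assuming the /corp mount point defined in /etc/fstab):

[root@rhel1 ~]# umount /corp
[root@rhel1 ~]# mount /corp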
80. List the contents of the /corp/mfg/chn/prodB directory to see if the file you created on svm3-dr before
the last re-sync and cut-over is present.
[root@rhel1 ~]# ls /corp/mfg/chn/prodB
file1.txt
[root@rhel1 ~]#
Yes, the file is there. It is noteworthy that no extra work was required to replicate back the configuration changes made on svm3-dr while it was active, namely the creation and mounting of a new volume.
81. On cluster2, re-sync the SnapMirror relationship.
cluster2::> snapmirror resync -destination-path svm3-dr:
cluster2::>
82. Periodically check the status of the SnapMirror relationship until it goes Idle.
cluster2::> snapmirror show
                                                                        Progress
Source           Destination       Mirror         Relationship  Total     Last
Path       Type  Path              State          Status        Progress  Healthy Updated
---------- ----  ----------------  -------------- ------------- --------- ------- -------
svm1:svm1_vol01
           XDP   svm1-dr:svm1_svm1_vol01_mirror1
                                   Snapmirrored   Idle          -         true    -
svm3:      DP    svm3-dr:          Snapmirrored   Idle          -         true    -
2 entries were displayed.
cluster2::>
At this point the SVM disaster recovery relationship is back to the state it was in before you initiated any
cutover operations.
This concludes this lab exercise.
ONTAP provides a number of predefined roles that can be used; you can also create your own customized roles,
if required.
In System Manager, roles and users are grouped separately under the cluster and the SVM. If you use the CLI,
you will see roles and users together with the same commands.
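For a quick look at the predefined roles from the CLI, a command along these lines lists them (a sketch; the exact set of roles in your output may differ):

cluster1::> security login role show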
Figure 3-45:
Next, take a look at the cluster-wide users.
5. In the left pane, select Users.
6. In the Users pane, click Add.
Figure 3-46:
The Add User dialog box opens. Use this dialog box to create a new limited-permission administrative
user for the cluster.
7. Set the user name to intern, and the password to netapp123.
8. Click Add next to the User Login Methods pane.
9. Set the Application drop-down list to ssh, and the Role drop-down list to readonly.
10. Click OK.
Figure 3-47:
The new user login method you just entered is displayed in the User Login Methods list.
11. Click Add at the bottom of the dialog box.
Figure 3-48:
The Add User dialog box closes and you return to the System Manager window.
12. If Chrome prompts you to save the password for this site, click Nope.
Figure 3-49:
13. The newly created intern account is now included in the list of accounts displayed in the Users pane.
Figure 3-50:
14. Start a new PuTTY session to cluster1, and log in as the user intern, using the password netapp123. Try listing the commands that are available. Observe that commands such as volume create and volume move, among others, are not available to you, because the readonly role you assigned to the intern account prevents access to commands that modify the cluster configuration.
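For reference, an equivalent account could also be created from the CLI with a command roughly like the following (a sketch that reuses the parameter names shown elsewhere in this guide):

cluster1::> security login create -username intern -application ssh -authmethod password -role readonly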
Figure 3-51:
The Edit Role dialog box opens.
6. Scroll down the Role Attributes list to see the commands that are available to a user with this role. Note
that this role has full access to some commands, read-only access to others, and no access to the rest.
7. Click Cancel to discard any changes you might have made in this dialog box.
Figure 3-52:
The Edit Roles dialog box closes and focus returns to the System Manager window. Take a look at the
other roles for this SVM and observe how their permissions differ.
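You can also inspect a role's command permissions from the CLI; a query such as the following shows the access granted by the vsadmin role on svm1 (a sketch):

cluster1::> security login role show -vserver svm1 -role vsadmin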
8. In the left pane, select Users.
9. In the Users pane, select the vsadmin user.
10. If you look at the User Login Methods area at the bottom of the Users pane, you can see that the
vsadmin user has the vsadmin role.
Figure 3-53:
11. Open a PuTTY session and connect to cluster1. Try to log in to cluster1 as vsadmin with the password
Netapp1!.
login as: vsadmin
Using keyboard-interactive authentication.
Password:
Access denied
[email protected] password:
Remember that the user vsadmin is specifically for administering the SVM svm1. To manage an SVM
with delegated SVM-scoped administration, you must log in to the management LIF for the SVM; in this
case, svm1.
Figure 3-54:
Tip: Alternatively, use the cluster management CLI and type network interface show when
logged in as the cluster administrator to obtain this IP address.
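For example, a filtered query such as the following would list svm1's LIF addresses (a sketch):

cluster1::> network interface show -vserver svm1 -fields address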
17. On this system, the management LIF for svm1 is named svm1-mgmt, with the IP address 192.168.0.147. There is also a connection entry in PuTTY named cluster1-svm1. Using the cluster1-svm1 connection entry in PuTTY, the vsadmin user, and the password Netapp1!, connect to svm1 over SSH.
login as: vsadmin
Using keyboard-interactive authentication.
Password:
svm1::>
18. As the vsadmin user, attempt to modify a network port or create a new aggregate by using the network
port modify command and the storage aggregate create command.
svm1::> network port modify
Error: "port" is not a recognized command
svm1::> storage aggregate create
Error: "storage" is not a recognized command
These commands are not available to you as the vsadmin user, because control of logical entities
inside svm1 is delegated to vsadmin, while network ports and storage aggregates are physical entities
controlled by the cluster administrator.
19. As the vsadmin user, run the volume create -aggregate ? command.
svm1::> volume create -aggregate ?
  <aggregate name>                        Aggregate Name
svm1::>
Attention: You can create new volumes as the vsadmin user, but only on specific aggregates. The reason is that when the svm1 SVM was set up, the cluster administrator configured svm1 to allow volume creation on these aggregates. To view this list, run the vserver show -vserver svm1 -fields aggr-list command.
cluster1::> vserver show -vserver svm1 -fields aggr-list
vserver aggr-list
------- -----------------------------------------------------------------------
svm1    aggr1_cluster1_01,aggr1_cluster1_02,aggr2_cluster1_01,aggr2_cluster1_02
cluster1::>
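For reference, the cluster administrator grants an SVM access to aggregates by setting this list, roughly as follows (a sketch run as the cluster administrator; note that -aggr-list replaces the entire list, so include every aggregate the SVM should be allowed to use):

cluster1::> vserver modify -vserver svm1 -aggr-list aggr1_cluster1_01,aggr1_cluster1_02,aggr2_cluster1_01,aggr2_cluster1_02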
As the vsadmin user, also try the network interface modify command.
Attention: You cannot modify network interfaces as the vsadmin user. The vsadmin user
has the vsadmin role, which provides read-only access to the network interface command
directory.
2. After you verify that a domain authentication tunnel does not exist, verify that the CIFS-enabled SVM
(svm1) is a member of the appropriate domain, DEMO.NETAPP.COM.
cluster1::> vserver cifs show -vserver svm1
                                          Vserver: svm1
                         CIFS Server NetBIOS Name: SVM1
                    NetBIOS Domain/Workgroup Name: DEMO
                      Fully Qualified Domain Name: DEMO.NETAPP.COM
Default Site Used by LIFs Without Site Membership:
                             Authentication Style: domain
                CIFS Server Administrative Status: up
                          CIFS Server Description:
                          List of NetBIOS Aliases: -
cluster1::>
3. After you verify that the CIFS-enabled SVM svm1 is a member of the appropriate domain, set up a
domain authentication tunnel.
cluster1::> security login domain-tunnel create -vserver svm1
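To confirm which SVM is acting as the authentication tunnel, a command along these lines displays it (a sketch):

cluster1::> security login domain-tunnel show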
4. With the authentication tunnel configured, a new authentication method is available to you, domain.
Use this new authentication method to create a new cluster administrator.
cluster1::> security login create -authmethod domain -username DEMO\Administrator
-application ssh
5. You can now log in to the cluster as a domain administrator, using the DOMAIN\username syntax. Open a new PuTTY session as described in the Before You Begin section. When prompted for a username and password, enter DEMO\Administrator as the user name, and Netapp1! as the password.
login as: DEMO\Administrator
Using keyboard-interactive authentication.
Password:
cluster1::>
Entering the cluster image command directory and typing ? lists commands to cancel an update, manage the cluster image package repository, pause an update, resume an update, display currently running image information, display the update history, display the update transaction log, display the update progress, manage an update, and validate the cluster's update eligibility.
The cluster image package command directory contains the commands used to manage the software packages
that contain future versions of clustered Data ONTAP. Examine the options that are available under this directory.
cluster1::cluster image> package
cluster1::cluster image package> ?
  delete             Remove a package from the cluster image package repository
  get                Fetch a package file from a URL into the cluster image package repository
  show               Display currently installed image information
  show-repository    Display information about packages available in the cluster image package repository
cluster1::cluster image package>
Use the cluster image update command to upgrade a cluster once a new package has been added to the cluster
package repository. Enter the cluster image command directory, and examine the parameters that are available
with the cluster image update command.
cluster1::cluster image package> ..
cluster1::cluster image> update ?
  [-version] <text>                            Update Version
  [[-nodes] <nodename>, ...]                   Node
  [ -estimate-only [true] ]                    Estimate Only
  [ -pause-after {none|all} ]                  Update Pause (default: none)
  [ -ignore-validation-warning {true|false} ]  Ignore Validation (default: false)
  [ -skip-confirmation {true|false} ]          Skip Confirmation (default: false)
  [ -force-rolling [true] ]                    Force Rolling Update
  [ -stabilize-minutes {1..60} ]               Minutes to stabilize (default: 8)
cluster1::cluster image>
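Putting these pieces together, a typical upgrade driven from the CLI follows a pattern roughly like the one below; the URL and version string are hypothetical placeholders, not values to use in this lab:

cluster1::> cluster image package get -url http://webserver.example.com/images/image.tgz
cluster1::> cluster image validate -version 8.3.1
cluster1::> cluster image update -version 8.3.1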
4 Version History
Version   Date             Notes
1.0       October 2014     Insight 2014
1.0.1     December 2014
1.1       October 2015     Insight 2015
Refer to the Interoperability Matrix Tool (IMT) on the NetApp Support site to validate that the exact
product and feature versions described in this document are supported for your specific environment.
The NetApp IMT defines product components and versions that can be used to construct configurations
that are supported by NetApp. Specific results depend on each customer's installation in accordance
with published specifications.
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.
Go further, faster
2015 NetApp, Inc. All rights reserved. No portions of this presentation may be reproduced without prior written
consent of NetApp, Inc. Specifications are subject to change without notice. NetApp and the NetApp logo are
registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are
trademarks or registered trademarks of their respective holders and should be treated as such.