
Advanced Concepts for Clustered Data ONTAP 8.3.1
December 2015 | SL10238 Version 1.1

TABLE OF CONTENTS

1 Introduction
2 Lab Environment
3 Lab Activities
   3.1 Lab Preparation
      3.1.1 Accessing the Command Line
      3.1.2 Accessing System Manager
   3.2 Clustered Data ONTAP CLI
      3.2.1 Explore the Command Hierarchy
      3.2.2 Setting the SVM Context
      3.2.3 Node Management CLI
      3.2.4 Node-Scoped CLI
   3.3 Load-Sharing Mirrors
      3.3.1 Namespace Overview
      3.3.2 Load-Sharing Mirror Overview
      3.3.3 Exercise
   3.4 IPspaces, Broadcast Domains, and Subnets
      3.4.1 Clustered Data ONTAP 8.3 Networking Overview
      3.4.2 Exercise
   3.5 Quality of Service (QoS)
      3.5.1 Exercise
   3.6 SnapMirror
      3.6.1 Exercise
   3.7 Disaster Recovery for Storage Virtual Machines
      3.7.1 Exercise
   3.8 Appendix: Additional Administrative Users and Roles
      3.8.1 Cluster-Scoped Users and Roles
      3.8.2 SVM Users and Roles
   3.9 Appendix: Active Directory Authentication Tunneling
   3.10 Automated Nondisruptive Upgrades
4 Version History

1 Introduction
This Lab Guide provides the steps to complete the Insight 2015 Hands-on Lab for Advanced Concepts for
clustered Data ONTAP 8.3.1.

1.1 Lab Objectives
This lab provides an introduction to a number of the more advanced features found in clustered Data ONTAP,
including the Command Line Interface (CLI), load-sharing mirrors, IPspaces, Quality of Service (QoS), cluster
peering, Disaster Recovery for Storage Virtual Machines (SVM-DR), administrative users and roles, and Active
Directory Authentication Tunneling.

1.2 Prerequisites
This lab builds on the concepts covered in the Basic Concepts for Clustered Data ONTAP 8.3 lab, and requires
knowledge of the topics covered in that lab. You should already understand the concepts and know how to
use OnCommand System Manager, how to configure a Storage Virtual Machine (SVM), and how to create
aggregates, volumes, and LIFs. You should also have a basic knowledge of Windows administration. Knowledge
of UNIX is not required, but a Linux virtual machine (VM) is provided.
Your starting point for this lab is a cluster named cluster1, with two nodes named cluster1-01 and cluster1-02.
There are two SVMs, svm1 and svm2, each hosting a variety of volumes.
Before you start the lab, launch System Manager and get familiar with the cluster configuration, including location,
naming, and status of the aggregates, volumes, LIFs, and SVMs.
The terms Storage Virtual Machine (SVM) and Vserver are used interchangeably in this lab. SVM is used to
describe virtualized storage systems as a concept. Vserver is the term used to refer to SVMs in the clustered
Data ONTAP command line and in the System Manager user interface. SVMs configured in this lab follow the
naming convention svmN, where N is a number, and svm is shorthand for Storage Virtual Machine.


2 Lab Environment
The following figure illustrates the network configuration.

Figure 2-1:
Table 1 shows the host information used in this lab.
Table 1: Host Information

Host Name    Operating System          Role/Function             IP Address
-----------  ------------------------  ------------------------  -------------
cluster1     clustered Data ONTAP 8.3  cluster                   192.168.0.101
cluster1-01  clustered Data ONTAP 8.3  cluster 1, node 1         192.168.0.111
cluster1-02  clustered Data ONTAP 8.3  cluster 1, node 2         192.168.0.112
cluster2     clustered Data ONTAP 8.3  cluster                   192.168.0.102
cluster2-01  clustered Data ONTAP 8.3  cluster 2, node 1         192.168.0.121
JUMPHOST     Windows 2008 R2           primary desktop for lab   192.168.0.5
rhel1        Red Hat Linux 6.5         Linux server              192.168.0.61
DC1          Windows 2008 R2           Active Directory/DNS      192.168.0.253

Table 2 lists the user IDs and passwords used in this lab.

Table 2: User IDs and Passwords

Host Name  User ID             Password  Comments
---------  ------------------  --------  ---------------------------------
JUMPHOST   DEMO\Administrator  Netapp1!
cluster1   admin               Netapp1!  Same for individual cluster nodes
cluster2   admin               Netapp1!  Same for individual cluster nodes
rhel1      root                Netapp1!
DC1        DEMO\Administrator  Netapp1!

3 Lab Activities
In this lab, you will perform the following tasks:

• Explore the CLI in more detail, and set it to work in an SVM context.
• Navigate the node-scoped CLI.
• Check the cluster and SVM administrative roles, users, and groups.
• Configure load-sharing mirrors to protect the namespace.
• Learn about IPspaces, broadcast domains, and subnets.
• Use QoS to manage tenants and workloads.
• Create intercluster LIFs and create a cluster peering relationship for SnapMirror.
• Create a Disaster Recovery for Storage Virtual Machines (SVM-DR) relationship from one cluster to
  another, perform a cutover operation, and then revert back to the primary.
• Configure authentication tunneling for cluster administrators (refer to the appendix).
• Add a volume that has a different language setting from the SVM that contains the volume (refer to the
  appendix).
• Learn about new automated nondisruptive upgrade features in Data ONTAP 8.3.

This is a self-guided lab. You can complete or skip any exercise.
The expected time for you to complete the entire lab is approximately 1 hour and 30 minutes.
Note: Before you begin the lab activities, you should understand how to log into and out of the clustered
Data ONTAP system by using the CLI and System Manager.

3.1 Lab Preparation


3.1.1 Accessing the Command Line
PuTTY is the terminal emulation program used in the lab to log into Linux hosts and storage controllers in order to
run command line commands.
1. The launch icon for the PuTTY application is pinned to the taskbar on the Windows host JUMPHOST as
shown in the following screenshot; just double-click on the icon to launch it.

Figure 3-1:
If you already have another PuTTY session open then this step will only bring that session into focus
on the display. If your intention is to open another PuTTY session, then right-click on the PuTTY toolbar
icon and select PuTTY from the context menu.
Once PuTTY launches, you can connect to one of the hosts in the lab by following the next steps. This
example shows a user connecting to the Data ONTAP cluster named cluster1.


2. By default PuTTY should launch into the Basic options for your PuTTY session display as shown in the
screenshot. If you accidentally navigate away from this view just click on the Session category item to
return to this view.
3. Use the scrollbar in the Saved Sessions box to navigate down to the desired host and double-click it
to open the connection. A terminal window will open and you will be prompted to log into the host. You
can find the correct username and password for the host in the Lab Host Credentials table in the Lab
Environment section at the beginning of this guide.

Figure 3-2:
If you are new to the clustered Data ONTAP CLI, the length of the commands can seem a little
intimidating. However, the commands are actually quite easy to use if you remember the following three
tips:

• Make liberal use of the Tab key while entering commands, as the clustered Data ONTAP
  command shell supports tab completion. If you press the Tab key while entering a portion of a
  command word, the command shell examines the context and tries to complete the rest of
  the word for you. If there is insufficient context to make a single match, it displays a list of all
  the potential matches. Tab completion also usually works with command argument values, but
  there are some cases where there is simply not enough context for it to know what you want,
  in which case you will just need to type in the argument value.
• You can recall your previously entered commands by repeatedly pressing the up-arrow key,
  and you can then navigate up and down the list using the up and down arrow keys. When you
  find a command you want to modify, you can use the left arrow, right arrow, and Delete keys
  to navigate around in a selected command to edit it.
• Entering a question mark character (?) causes the CLI to print contextual help information.
  You can use this character by itself, or while entering a command.


The Cluster CLI section of this lab guide covers the operation of the clustered Data ONTAP CLI in much
greater detail.
Caution: The commands shown in this guide are often so long that they span multiple lines.
When you see this, in every case you should include a space character between the text from
adjoining lines.
If you copy and paste commands from the guide into the lab, copy multi-line commands one line
at a time. If you try to copy multiple lines at once, the commands will fail in the lab.

3.1.2 Accessing System Manager


On the Jumphost (the Windows Server desktop you see when you first connect to the lab), open the web
browser of your choice. This lab guide uses Chrome, but you can use Firefox or Internet Explorer if you prefer one
of those. All three browsers already have System Manager set as the browser home page.
1. Launch Chrome to open System Manager.

Figure 3-3:
The OnCommand System Manager Login window opens.
2. Note the tabs at the top of the browser window. This lab contains multiple clusters, and each tab opens
System Manager for a different cluster.
3. Enter the User Name admin, and the Password Netapp1!.
4. Click the Sign In button.


Figure 3-4:
System Manager is now logged in to cluster1, and displays a summary page for the cluster. If you are
unfamiliar with System Manager, here is a quick introduction to its layout. Please take a few moments to
expand and browse these tabs to familiarize yourself with their contents.
5. Use the tabs on the left side of the window to manage various aspects of the cluster. The Cluster tab
accesses configuration settings that apply to the cluster as a whole.
6. The Storage Virtual Machines tab allows you to manage individual Storage Virtual Machines (SVMs,
also known as Vservers).
7. The Nodes tab contains configuration settings that are specific to individual controller nodes.


Figure 3-5:
Tip: As you use System Manager in this lab, you may encounter situations where buttons
at the bottom of a System Manager pane are beyond the viewing size of the window, and no
scroll bar exists to allow you to scroll down to see them. If this happens, you have two options:
either increase the size of the browser window (you might need to increase the resolution of
your jumphost desktop to accommodate the larger browser window), or, in the System Manager
window, use the Tab key to cycle through the various fields and buttons, which eventually
forces the window to scroll down to the non-visible items.

3.2 Clustered Data ONTAP CLI


This section provides an introduction to the clustered Data ONTAP Command Line Interface, or CLI. Here you
will learn about the command hierarchy and the various CLI interfaces (clustershell, nodeshell), and also about a
number of the shell's usability features.
When you open an SSH session to a cluster you are usually doing so to the cluster management LIF. The cluster
management LIF is set up when you first configure the cluster and automatically migrates across the cluster if the
home port or home node on which it is located goes down. When you log into the cluster management LIF you
are accessing the clustershell.
If you have not already opened a PuTTY session to cluster1, please do so now.


3.2.1 Explore the Command Hierarchy


After logging in, you are placed at the top of the command line hierarchy. The commands are built in a command
hierarchy, with associated commands grouped together in branches. These branches are made up of command
directories and commands; this organization is similar to the organization of directories and files within a file
system.
Type ? to list the base commands available at the top level of the hierarchy.
cluster1::> ?
  up                          Go up one directory
  cluster>                    Manage clusters
  dashboard>                  (DEPRECATED)-Display dashboards
  event>                      Manage system events
  exit                        Quit the CLI session
  export-policy               Manage export policies and rules
  history                     Show the history of commands for this CLI session
  job>                        Manage jobs and job schedules
  lun>                        Manage LUNs
  man                         Display the on-line manual pages
  metrocluster>               Manage MetroCluster
  network>                    Manage physical and virtual network connections
  qos>                        QoS settings
  redo                        Execute a previous command
  rows                        Show/Set the rows for this CLI session
  run                         Run interactive or non-interactive commands in
                              the nodeshell
  security>                   The security directory
  set                         Display/Set CLI session settings
  snapmirror>                 Manage SnapMirror
  statistics>                 Display operational statistics
  storage>                    Manage physical storage, including disks,
                              aggregates, and failover
  system>                     The system directory
  top                         Go to the top-level directory
  volume>                     Manage virtual storage, including volumes,
                              snapshots, and mirrors
  vserver>                    Manage Vservers
cluster1::>

Type any base command to move into that branch of the command hierarchy. For example, the volume branch
contains all commands related to volumes. The prompt changes to show you the part of the command tree you
are working in.
Type ? again. This time the hierarchy shows you the specific subcommands available for that part of the
command tree.
cluster1::> volume
cluster1::volume> ?
  aggregate>                  Manage Infinite Volume aggregate operations
  autosize                    Set/Display the autosize settings of the flexible
                              volume.
  clone>                      Manage FlexClones
  create                      Create a new volume
  delete                      Delete an existing volume
  efficiency>                 Manage volume efficiency
  file>                       File related commands
  modify                      Modify volume attributes
  mount                       Mount a volume on another volume with a
                              junction-path
  move>                       Manage volume move operations
  offline                     Take an existing volume offline
  online                      Bring an existing volume online
  qtree>                      Manage qtrees
  quota>                      Manage Quotas, Policies, Rules and Reports
  rename                      Rename an existing volume
  restrict                    Restrict an existing volume
  show                        Display a list of volumes
  show-footprint              Display a list of volumes and their data and
                              metadata footprints in their associated
                              aggregate.
  show-space                  Display space usage for volume(s)
  size                        Set/Display the size of the volume.
  snapshot>                   Manage snapshots
  unmount                     Unmount a volume
cluster1::volume>

To show the syntax for a particular command, enter the command and follow it with ?.
cluster1::volume> size ?
   -vserver <vserver name>    Vserver Name
  [-volume] <volume name>     Volume Name
  [[-new-size] <text>]        [+|-]<New Size>
cluster1::volume>

Tab completion works by completing what you are typing, and prompting you for what is recommended next while
you are still typing part of a command directory or command. It can even provide options for the values required
to complete the command.
Try tab completion by backspacing to clear the size command, typing the modify command, and pressing the Tab
key. The next option is automatically filled in. Press Tab again to get a list of options, and then type 1 to complete
the text svm1. Press Tab again to get the -volume option, and type in the volume name svm1_vol02. Continue
using tab completion until you get to -security-style unix. Before you press Enter, backspace to delete the word
unix, and type ?.
The output should look like this example:
cluster1::volume> modify -vserver svm1 -volume svm1_vol02 -size 1GB -state online
-policy default -user 0 -group 0 -security-style ?
mixed
ntfs
unix

Backspace to delete the modify command, and type .. to move up one level in the command hierarchy, or type
top to return to the root of the command tree.
cluster1::volume> top
cluster1::>

Type history to show the commands that you executed in the current session, or use the up arrow to repeat
recently executed commands. Use the right and left arrows, and the Backspace key, to edit and rerun the
commands. Alternatively, you can use the !<number> syntax to run a previous command in the list.
cluster1::> history
1 rows 0
2 volume
3 top
cluster1::>
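For example, based on the history list above, !2 would rerun the volume command and drop you into the
volume command directory. This is a sketch; your history numbering may differ:

cluster1::> !2
cluster1::volume>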

You may notice a rows 0 command in the history output shown in this guide (it does not appear in your lab).
rows 0 disables output paging on the command console. After you run rows 0, the console stops prompting you to
Press <space> to page down, <return> for next line, or q to quit. We suggest that you leave the existing pagination
setting in place while you proceed through this lab.
Certain commands require different privilege levels. By default, you are logged in with admin privilege. To enter
advanced or diag privilege, run the set -privilege <level> command, or use set <level> as the shorter
version of the command. An * is appended to the prompt to show that you are not at the default privilege level.
Note: There is no access to advanced or diag privilege commands in System Manager.
The best practice is to enter a non-admin privilege level only as needed, then return to admin privilege with the
command set -priv admin or set admin.
cluster1::> set advanced
Warning: These advanced commands are potentially dangerous; use them only when
directed to do so by NetApp personnel.
Do you want to continue? {y|n}: y
cluster1::*>
cluster1::*> set admin
cluster1::>


You can type abbreviations to run a command. For example, vol show is recognized as volume show. Be aware
that command abbreviations are limited. For instance, there are also volume show-footprint and volume show-space
commands, so the abbreviation vol sho is not unique to a single command, and is therefore not recognized.
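You can try both forms yourself; this is a quick sketch (vol show succeeds and prints the same output as
volume show, while vol sho is rejected with an ambiguity error whose exact wording may vary on your system):

cluster1::> vol show
cluster1::> vol sho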
You can use pattern matching with wildcards when running commands. For example:
cluster1::> vol show svm2*
Vserver   Volume       Aggregate         State   Type Size Available Used%
--------- ------------ ----------------- ------- ---- ---- --------- -----
svm2      svm2_root    aggr1_cluster1_02 online  RW   20MB 18.88MB   5%
svm2      svm2_vol01   aggr1_cluster1_01 online  RW   1GB  1023MB    0%
2 entries were displayed.
cluster1::>

When running commands, you see only certain fields by default. To display all fields, add the -instance parameter.
cluster1::> network interface show -lif cluster_mgmt -instance

                   Vserver Name: cluster1
         Logical Interface Name: cluster_mgmt
                           Role: cluster-mgmt
                  Data Protocol: none
                      Home Node: cluster1-01
                      Home Port: e0c
                   Current Node: cluster1-01
                   Current Port: e0c
             Operational Status: up
                Extended Status: -
                        Is Home: true
                Network Address: 192.168.0.101
                        Netmask: 255.255.255.0
            Bits in the Netmask: 24
                IPv4 Link Local: -
                    Subnet Name: -
          Administrative Status: up
                Failover Policy: broadcast-domain-wide
                Firewall Policy: mgmt
                    Auto Revert: false
  Fully Qualified DNS Zone Name: none
        DNS Query Listen Enable: false
            Failover Group Name: Default
                       FCP WWPN: -
                 Address family: ipv4
                        Comment: -
                 IPspace of LIF: Default
cluster1::>

You will often see a very large number of fields for a particular object. To show a few specific fields, limit the
number of displayed fields by using the -fields qualifier.
Remember, you can use ? to show all possible values. Try using wildcards to show only items with svm1 in the
name.
cluster1::> network interface show ?
  [ -by-ipspace | -failover | -instance | -fields <fieldname>, ... ]
  [ -vserver <vserver> ]                     Vserver Name
  [[-lif] <lif-name>]                        Logical Interface Name
  [ -role {cluster|data|node-mgmt|intercluster|cluster-mgmt} ]
                                             Role
  [ -data-protocol {nfs|cifs|iscsi|fcp|fcache|none}, ... ]
                                             Data Protocol
  [ -home-node <nodename> ]                  Home Node
  [ -home-port {<netport>|<ifgrp>} ]         Home Port
  [ -curr-node <nodename> ]                  Current Node
  [ -curr-port {<netport>|<ifgrp>} ]         Current Port
  [ -status-oper {up|down} ]                 Operational Status
  [ -status-extended <text> ]                Extended Status
  [ -is-home {true|false} ]                  Is Home
  [ -address <IP Address> ]                  Network Address
  [ -netmask <IP Address> ]                  Netmask
  [ -netmask-length <integer> ]              Bits in the Netmask
  [ -auto {true|false} ]                     IPv4 Link Local
  [ -subnet-name <subnet name> ]             Subnet Name
  [ -status-admin {up|down} ]                Administrative Status
  [ -failover-policy {system-defined|local-only|sfo-partner-only|ipspace-wide|disabled|broadcast-domain-wide} ]
                                             Failover Policy
  [ -firewall-policy <policy> ]              Firewall Policy
  [ -auto-revert {true|false} ]              Auto Revert
  [ -dns-zone {zone-name|none} ]             Fully Qualified DNS Zone Name
  [ -listen-for-dns-query {true|false} ]     DNS Query Listen Enable
  [ -failover-group <failover-group> ]       Failover Group Name
  [ -wwpn <text> ]                           FCP WWPN
  [ -address-family {ipv4|ipv6|ipv6z} ]      Address family
  [ -comment <text> ]                        Comment
  [ -ipspace <IPspace> ]                     IPspace of LIF

cluster1::> network interface show svm1* -fields home-node
vserver lif                home-node
------- ------------------ -----------
svm1    svm1_admin_lif1    cluster1-01
svm1    svm1_cifs_nfs_lif1 cluster1-01
2 entries were displayed.
cluster1::>

You can set other options to customize the behavior of the CLI. A useful option is to set the default timeout value
for CLI sessions. Check the settings on your system and, if they are not set, modify the timeout to be 0. This
setting disables the timeout for your CLI session.
cluster1::> system timeout modify 30
cluster1::> system timeout modify 0
cluster1::> system timeout show
CLI session timeout: 0 minutes
cluster1::>

The set command, which you already used to specify the privilege level, has other options shown in the next
example. See what happens when you set different options. Remember to set the options back before you
continue.
cluster1::> set ?
  [[-privilege] {admin|advanced|diagnostic}]  Privilege Level
  [ -confirmations {on|off} ]                 Confirmation Messages
  [ -showallfields {true|false} ]             Show All Fields
  [ -showseparator <text (size 1..3)> ]       Show Separator
  [ -active-help {true|false} ]               Active Help
  [ -units {auto|raw|B|KB|MB|GB|TB|PB} ]      Data Units
  [ -rows <integer> ]                         Pagination Rows ('0' disables)
  [ -vserver <text> ]                         Default Vserver
  [ -node <text> ]                            Default Node
  [ -stop-on-error {true|false} ]             Stop On Error
cluster1::>

3.2.2 Setting the SVM Context


For many commands, you must specify the SVM by using the -vserver <SVM name> qualifier. This is because
objects, such as volume names, only need to be unique within the SVM, but could be repeated across multiple
SVMs.
Suppose that you are running a number of commands within the same SVM. In this scenario, you can set a
context to a specific SVM so that you do not need to qualify the commands each time.
Without the SVM context, try the volume show command. You should see the root volume for each node (vol0),
as well as the volumes in all of the SVMs.

cluster1::> volume show
Vserver     Volume     Aggregate         State   Type Size   Available Used%
----------- ---------- ----------------- ------- ---- ------ --------- -----
cluster1-01 vol0       aggr0_cluster1_01 online  RW   2.85GB 1.16GB    59%
cluster1-02 vol0       aggr0_cluster1_02 online  RW   2.85GB 1.04GB    63%
svm1        svm1_root  aggr1_cluster1_01 online  RW   20MB   18.88MB   5%
svm1        svm1_vol01 aggr1_cluster1_01 online  RW   1GB    972.5MB   5%
svm1        svm1_vol02 aggr1_cluster1_02 online  RW   1GB    972.5MB   5%
svm1        svm1_vol03 aggr2_cluster1_01 online  RW   1GB    972.5MB   5%
svm1        svm1_vol04 aggr2_cluster1_02 online  RW   1GB    972.5MB   5%
svm2        svm2_root  aggr1_cluster1_02 online  RW   20MB   18.88MB   5%
svm2        svm2_vol01 aggr1_cluster1_01 online  RW   1GB    1023MB    0%
9 entries were displayed.
cluster1::>

To display only the volumes in the SVM named svm1, issue volume show -vserver svm1. Alternatively, you can
set a temporary context for just svm1. Try this command:

cluster1::> vserver context -vserver svm1

Info: Use 'exit' command to return.

svm1::> vol show
Vserver   Volume     Aggregate         State   Type Size Available Used%
--------- ---------- ----------------- ------- ---- ---- --------- -----
svm1      svm1_root  aggr1_cluster1_01 online  RW   20MB 18.88MB   5%
svm1      svm1_vol01 aggr1_cluster1_01 online  RW   1GB  972.5MB   5%
svm1      svm1_vol02 aggr1_cluster1_02 online  RW   1GB  972.5MB   5%
svm1      svm1_vol03 aggr2_cluster1_01 online  RW   1GB  972.5MB   5%
svm1      svm1_vol04 aggr2_cluster1_02 online  RW   1GB  972.5MB   5%
5 entries were displayed.
svm1::>

The prompt changes to the SVM that you selected (svm1), and you see only the volumes that belong to svm1. As
long as you are in the SVM context, you will not have to use the -vserver <SVM name> qualifier.
List the available commands. You will see a different (restricted) command list. For example, there is no storage
command. This is because the SVM shell is running with sufficient privileges to execute only the specific
commands that are relevant to an SVM. Once you type exit and return to the cluster prompt, you have full
command access over all entities in the cluster.
svm1::> ?
  up                          Go up one directory
  dashboard>                  (DEPRECATED)-Display dashboards
  exit                        Quit the CLI session
  export-policy               Manage export policies and rules
  history                     Show the history of commands for this CLI session
  job>                        Manage jobs and job schedules
  lun>                        Manage LUNs
  man                         Display the on-line manual pages
  network>                    Manage physical and virtual network connections
  redo                        Execute a previous command
  rows                        Show/Set the rows for this CLI session
  security>                   The security directory
  set                         Display/Set CLI session settings
  snapmirror>                 Manage SnapMirror
  statistics>                 Display operational statistics
  system>                     The system directory
  top                         Go to the top-level directory
  volume>                     Manage virtual storage, including volumes,
                              snapshots, and mirrors
  vserver>                    Manage Vservers
svm1::> exit
cluster1::>

3.2.3 Node Management CLI


Each node in the cluster has its own management LIF. Node management LIFs exist so that you can manage
individual nodes if they lose contact with the rest of the cluster.
Use the following command to display information about the node management LIFs. Each node has an SVM that
owns the management LIF for the node.
cluster1::> network interface show -role node-mgmt
            Logical           Status     Network          Current     Current Is
Vserver     Interface         Admin/Oper Address/Mask     Node        Port    Home
----------- ----------------- ---------- ---------------- ----------- ------- ----
cluster1
            cluster1-01_mgmt1 up/up      192.168.0.111/24 cluster1-01 e0c     true
            cluster1-02_mgmt1 up/up      192.168.0.112/24 cluster1-02 e0c     true
2 entries were displayed.
cluster1::>

You can establish an SSH session to any of these node management LIFs. Use your admin/Netapp1!
credentials. The prompt is the same as the prompt for the cluster management CLI.
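For example, from the rhel1 Linux host you could open a session to the node management LIF of cluster1-01
with a standard SSH client. This is a sketch using the address from Table 1; PuTTY works equally well:

[root@rhel1 ~]# ssh admin@192.168.0.111
Password:
cluster1::>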


In addition to its own LIF, each node also has its own root volume. The node root volume is bound to the cluster
node. It contains configuration files, logs, and other files associated with a node's normal operation. A node root
volume is part of the physical cluster infrastructure. It is not associated with an SVM, does not hold user data, and
does not contain junctions to other volumes.
The CLI that you access this way is exactly the same as the one you get when you create an SSH session to the
cluster management LIF. The difference is that the node management LIF always resides on its own node, because it
is an IP address used specifically for managing a particular node. The node management LIF does not fail over
to another node if the home node is shut down. For this reason, you should use the cluster management LIF to
manage the cluster, because this LIF can, and will, fail over to another node. As long as the cluster is active, you
always have a reachable cluster management LIF.
However, suppose that a node is no longer in the cluster. If the node is still up, you can create an SSH session to
its node management LIF to run node-specific diagnostics, because these will not be accessible from the cluster
management CLI.

3.2.4 Node-Scoped CLI


The node-scoped CLI is also known as the node shell. It provides access to node-specific commands that might
be required to perform administrative tasks not available in the cluster management CLI.
Administrative tasks that require the use of the node-scoped CLI are rare. The node-scoped CLI is not typically
used to administer a clustered Data ONTAP system. The node-scoped CLI should be used infrequently and with
care.
You enter the node CLI from the cluster management CLI. Go to the cluster1 PuTTY session for this section,
using the procedure described in the Accessing the Command Line section of this lab guide.
You can access the node shell through two methods. The method to use depends on whether you want to run
one specific command, or a series of commands.

3.2.4.1 Single Command


Check the names of your nodes by typing node show.
cluster1::> node show
Node        Health Eligibility Uptime       Model  Owner    Location
----------- ------ ----------- ------------ ------ -------- --------
cluster1-01 true   true        02:19:21     SIMBOX
cluster1-02 true   true        02:19:05     SIMBOX
2 entries were displayed.
cluster1::>

Note: See the Model column? Have you noticed any other indication that you are running a Data ONTAP
simulator rather than physical hardware?
To run a single specific command for one node, specify that node by using the
node run -node <node-name> <command> command.

cluster1::> node run -node cluster1-02 aggr status
           Aggr State      Status            Options
aggr1_cluster1_02 online   raid_dp, aggr     nosnap=on
                           64-bit
aggr0_cluster1_02 online   raid_dp, aggr     root, nosnap=on
                           64-bit
aggr2_cluster1_02 online   raid_dp, aggr     nosnap=on
                           64-bit
cluster1::>

When the command is executed, it displays the aggregates defined on that node and returns you to the cluster
prompt.
In this case, node scope syntax is used instead of clustered Data ONTAP syntax, and the output is also formatted
differently. The node-scoped CLI does not support tab completion.


The equivalent clustered Data ONTAP CLI command is storage aggregate show.

cluster1::> storage aggregate show -node cluster1-02
Aggregate         Size     Available Used% State  #Vols Nodes        RAID Status
----------------- -------- --------- ----- ------ ----- ------------ -----------
aggr0_cluster1_02 3.02GB   141.3MB   95%   online 1     cluster1-02  raid_dp,
                                                                     normal
aggr1_cluster1_02 102.3GB  101.3GB   1%    online 2     cluster1-02  raid_dp,
                                                                     normal
aggr2_cluster1_02 102.3GB  101.3GB   1%    online 1     cluster1-02  raid_dp,
                                                                     normal
3 entries were displayed.
cluster1::>

3.2.4.2 Node Shell


If you want to run a number of node-specific commands, start a shell by omitting the command parameter.
cluster1::> node run -node cluster1-02
Type 'exit' or 'Ctrl-D' to return to the CLI
cluster1-02>

The prompt changes to the node of the shell you are in. To return to the cluster management CLI, enter exit, or
press Ctrl-D. For now, stay in the node shell.
List the available commands.
cluster1-02> ?
?                   fsecurity           ping                source
acpadmin            halt                ping6               stats
aggr                help                pktt                storage
backup              hostname            priority            sysconfig
cdpd                ic                  priv                sysstat
cf                  ifconfig            qtree               timezone
clone               ifgrp               quota               traceroute
cna_flash           ifstat              rdfile              traceroute6
coredump            ipspace             reallocate          ups
date                key_manager         restore_backup      uptime
dcb                 keymgr              revert_to           version
df                  license             route               vfiler
disk                logger              rshstat             vlan
disk_fw_update      man                 sasadmin            vmservices
download            maxfiles            sasstat             vol
echo                mt                  savecore            wafltop
ems                 ndmpcopy            shelfchk            wcc
environment         ndp                 sis                 wrfile
fcadmin             netstat             smnadmin            ypcat
fcp                 options             snap                ypgroup
fcstat              partner             snapmirror          ypmatch
file                passwd              software            ypwhich
flexcache
cluster1-02>
The following list identifies situations in which you should use the node shell (a single-command example
follows this list):

• When you modify the size of the node root volume. Using the node shell is necessary because the
  node root volume is considered a 7-Mode volume and can be modified only in the node scope.
• When running the snapshot delta command. The cluster management CLI does not currently include
  this command. The command is available in System Manager, through a ZAPI, or it can be run from
  the node shell.

Note: In general, do not perform network configuration or storage provisioning from the node shell. You
should only use it for those functions that you cannot perform from the cluster management CLI, or from
System Manager.
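For instance, a snapshot delta report for a node's root volume can be pulled in a single invocation from the
clustershell. This is a hedged sketch: in the node shell the command is spelled snap delta, and vol0 is the root
volume name used in this lab:

cluster1::> node run -node cluster1-01 snap delta vol0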

Exit the node shell.


cluster1-02> exit
logout
cluster1::>


3.3 Load-Sharing Mirrors


3.3.1 Namespace Overview
Flexible volumes containing NAS data are junctioned into the owning SVM in a hierarchy. This hierarchy presents
NAS clients with a unified view of the storage, regardless of the physical location of flexible volumes inside the
cluster.
When a flexible volume is created within the SVM, the administrator specifies the junction path for the flexible
volume. The junction path is a directory location under the root of the SVM where the flexible volume can be
accessed. A flexible volume's name and its junction path do not need to be the same.
Junction paths allow each flexible volume to be browsable, like a directory or folder. NFS clients can access
multiple flexible volumes using a single mount point. CIFS clients can access multiple flexible volumes using a
single CIFS share.
A namespace consists of a group of volumes connected using junction paths. It is the hierarchy of flexible
volumes within a single SVM as presented to NAS clients.
A namespace that exists natively inside a storage system provides a single point of management for the
namespace, instead of maintaining separate namespaces for NFS (using automount maps), and CIFS (using
DFS). A namespace can reduce or eliminate the reliance on DFS, automount maps, and complex, ad-hoc storage
provisioning scripts. A namespace also facilitates nondisruptive operation by separating the physical location of
NAS storage from its logical location.
An SVM's top-level flexible volume is known as the SVM root volume. The SVM root volume forms the root of
the flexible volume hierarchy in an SVM. It is the parent, grandparent, or ancestor of every flexible volume in the
SVM's namespace.
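One way to see this hierarchy is to list each volume's junction path with the -fields qualifier covered earlier
(a sketch; run it against your own lab to see the actual paths):

cluster1::> volume show -vserver svm1 -fields junction-path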

3.3.2 Load-Sharing Mirror Overview


Load-sharing mirrors are used to protect the accessibility of an SVM's namespace in case the SVM's root volume
becomes inaccessible.
A load-sharing mirror of a source flexible volume is a full, read-only copy of that flexible volume. Load-sharing
mirrors provide read-only access to the contents of the source flexible volume even if the source becomes
unavailable. A load-sharing mirror can also be promoted to become the read-write volume.
A cluster might have many load-sharing mirrors of a single source flexible volume. When load-sharing mirrors are
used, every node in the cluster should have a load-sharing mirror of the source flexible volume. The node that
currently hosts the source flexible volume should also have a load-sharing mirror. Identical load-sharing mirrors
on the same node yield no performance benefit.
Load-sharing mirrors are updated on demand, or on a schedule that is defined by the cluster administrator. Writes
made to the mirrored flexible volume are not visible to readers of that flexible volume until the load-sharing mirrors
are updated. Similarly, junctions added in the source flexible volume are not visible to readers until the
load-sharing mirrors are updated.
Load-sharing mirrors can only support NAS protocols (CIFS or NFSv3). They do not support NFSv4 clients or
SAN client protocol connections (FC, FCoE, or iSCSI).
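Promotion is a single command; the following is a hedged sketch using the volume names created in the next
exercise (do not run it as part of this lab, because it permanently replaces the read-write source volume with the
chosen mirror):

cluster1::> snapmirror promote -destination-path //svm1/svm1_root_lsm1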

3.3.3 Exercise
In this exercise, you will create a load-sharing mirror of svm1's root volume on each node in cluster1. The
purpose of this exercise is to illustrate the requirement that load-sharing mirrors must be updated after a new
volume is junctioned into svm1's root volume, and before the volume becomes visible to clients.
1. Create the load-sharing mirrors by using the volume create command. Like all volume create commands,
this command requires Vserver, volume, and aggregate parameters. The size parameter is specified
to match the size of svm1's root volume. The type parameter is set to DP, which is short for data
protection.
From the cluster1 CLI:
cluster1::> volume create -vserver svm1 -volume svm1_root_lsm1 -aggregate aggr1_cluster1_01
-size 20MB -type DP
[Job 560] Job is queued: Create svm1_root_lsm1.
[Job 560] Job succeeded: Successful
cluster1::> volume create -vserver svm1 -volume svm1_root_lsm2 -aggregate aggr1_cluster1_02
-size 20MB -type DP
[Job 561] Job is queued: Create svm1_root_lsm2.
[Job 561] Job succeeded: Successful
cluster1::> volume show -vserver svm1 -volume svm1_root_lsm*
Vserver   Volume         Aggregate         State   Type Size Available Used%
--------- -------------- ----------------- ------- ---- ---- --------- -----
svm1      svm1_root_lsm1 aggr1_cluster1_01 online  DP   20MB 19.89MB   0%
svm1      svm1_root_lsm2 aggr1_cluster1_02 online  DP   20MB 19.89MB   0%
2 entries were displayed.
cluster1::>

2. Run the snapmirror create command to create SnapMirror relationships between the new load-sharing
mirror volumes and svm1's root volume. In this command, specify the source and destination volumes
by using the //svm_name/volume_name syntax. The source of the relationship is svm1's root volume;
the destinations are the load-sharing mirror volumes. The relationship type is LS, which is short for load
sharing.
Set the update schedule to weekly; this interval is long enough to prevent the relationships from updating while
you are completing this exercise. In a production environment, the update schedule is typically set to a
shorter time frame, as sketched after this step's output.
From the cluster1 CLI:
cluster1::> snapmirror create -source-path //svm1/svm1_root -destination-path
//svm1/svm1_root_lsm1 -type LS -schedule weekly
[Job 562] Job is queued: snapmirror create for the relationship with destination "cluster1://
svm1/svm1_root_lsm1".
[Job 562] Job succeeded: SnapMirror: done
cluster1::> snapmirror create -source-path //svm1/svm1_root -destination-path
//svm1/svm1_root_lsm2 -type LS -schedule weekly
[Job 564] Job is queued: snapmirror create for the relationship with destination "cluster1://
svm1/svm1_root_lsm2".
[Job 564] Job succeeded: SnapMirror: done
cluster1::>
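If you later wanted a shorter update interval, the schedule on an existing relationship can be changed in place.
This is a sketch assuming the built-in hourly schedule is available on your cluster:

cluster1::> snapmirror modify -source-path //svm1/svm1_root -destination-path
//svm1/svm1_root_lsm1 -schedule hourly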

3. Initialize the SnapMirror relationships between svm1's root volume and the newly created load-sharing
mirrors. All the mirrors can be updated with a single command, snapmirror initialize-ls-set. This
command uses the same //svm_name/volume_name syntax for the source volume. The destination
volumes do not need to be specified because the cluster already knows about the load-sharing mirror
relationships.
From the cluster1 CLI:
cluster1::> snapmirror initialize-ls-set -source-path //svm1/svm1_root
[Job 565] Job is queued: snapmirror initialize-ls-set for source "cluster1://svm1/svm1_root".
cluster1::>

4. Create a new volume in svm1. The junction path for this new volume will be /parent2. /parent2 can
be thought of as a new directory under the root of svm1's namespace, which lies at /. As with the other
volume create commands, specify the SVM (by using the vserver parameter), the volume name, the
aggregate in which the volume will initially reside, and its size. In addition, specify the export policy to
use for controlling client access to the volume.
From the cluster1 CLI:
cluster1::> volume create -vserver svm1 -volume svm1_vol05 -size 1G -junction-path
/parent2 -policy default -aggregate aggr1_cluster1_01
[Job 566] Job is queued: Create svm1_vol05.
[Job 566] Job succeeded: Successful
Notice: Volume svm1_vol05 now has a mount point from volume svm1_root. The
load sharing (LS) mirrors of volume svm1_root will be updated according
to the SnapMirror schedule in place for volume svm1_root. Volume
svm1_vol05 will not be visible in the global namespace until the LS
mirrors of volume svm1_root have been updated.
cluster1::>

5. At this point, you have a new volume in svm1, located in the namespace location /parent2. However,
because you have not updated the load-sharing mirrors of the SVM root volume, this namespace location
is not visible.
If you do not yet have a PuTTY session open to the RHEL Linux client named rhel1, open one now
(right-click the PuTTY icon on the task bar, select PuTTY from the context menu, username root,
password Netapp1!) and run the following command.
[root@rhel1 ~]# ls /mnt/svm1
parent
[root@rhel1 ~]#

Notice that you can see the volume parent, but not parent2.
6. To be able to see the new namespace location, the load-sharing mirror set must be updated. You can do
this update by using the snapmirror update-ls-set command, which has a command syntax similar to
the snapmirror initialize-ls-set command used earlier.
From the cluster1 CLI:
cluster1::> snapmirror update-ls-set -source-path //svm1/svm1_root
[Job 567] Job is queued: snapmirror update-ls-set for source "cluster1://svm1/svm1_root".
cluster1::>

7. Run the snapmirror show command to verify that the mirror relationships have finished their update.
Repeat until the mirror state is Snapmirrored and the relationship status is Idle.
From the cluster1 CLI:
cluster1::> snapmirror show
                                                           Progress
Source          Destination  Mirror       Relationship Total          Last
Path       Type Path         State        Status       Progress Healthy Updated
---------- ---- ------------ ------------ ------------ -------- ------- -------
cluster1://svm1/svm1_root
           LS   cluster1://svm1/svm1_root_lsm1
                             Snapmirrored Idle         -        true    -
                cluster1://svm1/svm1_root_lsm2
                             Snapmirrored Idle         -        true    -
2 entries were displayed.
cluster1::>
8. At this point, the new volume should be visible to clients. Go back to the Linux client and run
ls /mnt/svm1 to verify that the volume can now be accessed.
From the Linux client:
[root@rhel1 ~]# ls /mnt/svm1
parent  parent2
[root@rhel1 ~]#

You should be able to see both the parent and parent2 volumes.


3.4 IPspaces, Broadcast Domains, and Subnets


3.4.1 Clustered Data ONTAP 8.3 Networking Overview
Clustered Data ONTAP 8.3 introduces new networking constructs designed to simplify deployment and
configuration: IPspaces, broadcast domains, and subnets.

3.4.1.1 IPspaces
An IPspace is a logical construct that represents a space containing unique IP addresses. With clustered Data
ONTAP 8.3, multiple SVMs can have overlapping IP addresses provided that each of those SVMs resides in a
different IPspace.
When you create an IPspace, it only needs a name. The command ipspace create -ipspace my_ipspace
creates an IPspace called my_ipspace.

3.4.1.2 Broadcast Domains


Broadcast domains enable you to group network ports that belong to the same layer 2 network. The ports in the
group can then be used by an SVM for data or management traffic.
Broadcast domains simplify the configuration of clustered Data ONTAP by making it easier to ensure that all ports
in a failover group reside in the same layer 2 network, and all ports in the same layer 2 network have the same
maximum transmission unit (MTU) values.
A broadcast domain resides in an IPspace. During cluster initialization, the system creates two default broadcast
domains:

• The Default broadcast domain contains ports that are in the Default IPspace. These ports are used
  primarily to serve data. Cluster management and node management ports are also in this broadcast
  domain.
• The Cluster broadcast domain contains ports that are in the Cluster IPspace. These ports are used
  for cluster communication, and include all cluster ports from all nodes in the cluster.

If you create unique IPspaces to separate client traffic, then you must create a broadcast domain in each of those
IPspaces. If your cluster does not require separate IPspaces, then all broadcast domains (and all ports) reside in
the system-created Default IPspace.
When you create a broadcast domain, you need to specify the name of the broadcast domain, an IPspace, an
MTU value, and a list of ports.
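A minimal creation command would look like this sketch (the IPspace name, broadcast domain name, and port
here are illustrative placeholders, not objects that exist in this lab):

cluster1::> network port broadcast-domain create -ipspace my_ipspace -broadcast-domain
my_bd -mtu 1500 -ports cluster1-01:e0d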

3.4.1.3 Subnets
Subnets in clustered Data ONTAP 8.3 provide a way to provision blocks of IP addresses at a time. They simplify
network configuration by allowing the administrator to specify a subnet during LIF creation, rather than an IP
address and netmask. A subnet object in clustered Data ONTAP does not need to encompass an entire IP
subnet, or even a maskable range within a subnet.
A subnet is created within a broadcast domain, and it contains a pool of IP addresses. You can allocate IP
addresses in a subnet to ports in the broadcast domain when LIFs are created. When you remove the LIFs, the IP
addresses are returned to the subnet pool, and are available for future LIFs.
If you specify a gateway when defining a subnet, a default route to that gateway is automatically added to the
SVM when you create a LIF using that subnet.
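As a sketch of that behavior (all names and addresses are illustrative, not part of this lab's configuration), a
subnet created with a gateway looks like the following; any LIF later allocated from it gets a matching default
route added to its SVM:

cluster1::> network subnet create -subnet-name my_subnet -broadcast-domain my_bd
-ipspace my_ipspace -subnet 192.168.10.0/24 -gateway 192.168.10.1
-ip-ranges 192.168.10.50-192.168.10.59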


3.4.2 Exercise
In this exercise, you will use the new networking constructs introduced with Data ONTAP 8.3: IPspaces,
broadcast domains, and subnets. You will examine the new network route command, and view the automatically
created failover groups.
Note: These steps are performed in the CLI because you can only create an IPspace through the CLI,
and the creation of a subnet through System Manager requires a default gateway. In most production
environments, System Manager is sufficient.
Tip: The following steps are performed on cluster2, not cluster1. You will need to open a new PuTTY
session to cluster2 for this exercise.
To create a new IPspace, you use the network ipspace create command. This command requires only one
argument, -ipspace, which takes the name of the IPspace you want to create.
1. Create a new IPspace on cluster2. You will use this IPspace in the next steps to create a broadcast
domain and a subnet.
cluster2::> network ipspace create -ipspace new-ipspace
cluster2::> network ipspace show
IPspace             Vserver List                  Broadcast Domains
------------------- ----------------------------- ----------------------------
Cluster             Cluster                       Cluster
Default             cluster2, svm1-dr             Default
new-ipspace         new-ipspace                   -
3 entries were displayed.
cluster2::>

2. Create a new broadcast domain. You will need to specify a name, an IPspace in which it can reside, a
set of physical network ports, and an MTU value.
From the cluster2 CLI:
cluster2::> network port broadcast-domain create -ipspace new-ipspace -broadcast-domain
new-broadcast-domain -mtu 1500 -ports cluster2-01:e0g,cluster2-01:e0h
cluster2::> network port broadcast-domain show
IPspace Broadcast                                          Update
Name    Domain Name   MTU   Port List                      Status Details
------- ------------- ----- ------------------------------ --------------
Cluster Cluster       9000
Default Default       1500  cluster2-01:e0a                complete
                            cluster2-01:e0b                complete
                            cluster2-01:e0c                complete
                            cluster2-01:e0d                complete
                            cluster2-01:e0e                complete
                            cluster2-01:e0f                complete
new-ipspace
        new-broadcast-domain
                      1500  cluster2-01:e0g                complete
                            cluster2-01:e0h                complete
3 entries were displayed.
cluster2::>

3. Create a new subnet using your newly created IPspace and broadcast domain. The subnet object
requires a name, a broadcast domain, an IPspace, a subnet mask, and a range of IP addresses.
From the cluster2 CLI:
cluster2::> network subnet create -subnet-name new-subnet -broadcast-domain
new-broadcast-domain -ipspace new-ipspace -subnet 192.168.0.0/24
-ip-ranges 192.168.0.170-192.168.0.179
cluster2::> network subnet show
IPspace: Default
Subnet
Broadcast
Avail/
Name
Subnet
Domain
Gateway
Total
Ranges

23

Advanced Concepts for Clustered Data ONTAP 8.3.1

2015 NetApp, Inc. All rights reserved. NetApp Proprietary

--------- ---------------dr-subnet 192.168.0.0/24


IPspace: new-ipspace
Subnet
Name
Subnet
--------- ---------------new-subnet
192.168.0.0/24

--------- --------------- --------- --------------Default


7/10
192.168.0.160-192.168.0.169
Broadcast
Avail/
Domain
Gateway
Total
Ranges
--------- --------------- --------- --------------new-broadcast-domain
-

10/10

192.168.0.170-192.168.0.179

2 entries were displayed.


cluster2::>

4. The network route command is new in clustered Data ONTAP 8.3. Use this command to view routing
information without viewing routing groups.
You will not see any changes to the routing table output that are caused by the creation of the IPspace,
broadcast domain, and subnet in the previous step because you have not created any SVMs that use the
IPspace, broadcast domain, or subnet.
From the cluster2 CLI:
cluster2::> network route show
Vserver             Destination     Gateway         Metric
------------------- --------------- --------------- ------
cluster2            0.0.0.0/0       192.168.0.1     20
cluster2::>

5. Because all ports in a layer 2 broadcast domain provide the same network connectivity, LIF failover
groups are created automatically in clustered Data ONTAP 8.3 when a broadcast domain is created. Use
the network interface failover-groups show command to view automatically created failover groups.
The automatically configured failover groups have the same name as the broadcast domain that you
created.
From the cluster2 CLI:
cluster2::> network interface failover-groups show
                  Failover
Vserver           Group                Targets
----------------- -------------------- ----------------------------------------
cluster2
                  Default              cluster2-01:e0a, cluster2-01:e0b,
                                       cluster2-01:e0c, cluster2-01:e0d,
                                       cluster2-01:e0e, cluster2-01:e0f
new-ipspace
                  new-broadcast-domain cluster2-01:e0g, cluster2-01:e0h
2 entries were displayed.
cluster2::>

You can create IPspaces by using only the CLI, but you can create subnet objects and broadcast
domains by using either the CLI, or System Manager. In this subsection, you will learn about the System
Manager capabilities for modifying these objects.
First, examine the options available to modify an existing broadcast domain.
6. In Chrome, click the tab for cluster2, and sign in to System Manager (username admin, password
Netapp1!).
7. In the left pane, click the Cluster tab.
8. In the left pane, navigate to cluster2 > Configuration > Network.
9. In the Network pane, click the Broadcast Domains tab.
10. Click Refresh to make sure that you are seeing the latest information.
11. In the Broadcast Domain list, select the new-broadcast-domain entry.
12. Click Edit.


Figure 3-6:
The Edit Broadcast Domains dialog box opens. Examine the options available to modify the broadcast
domain.
13. Click Cancel to close the dialog box.



Figure 3-7:

Next, examine the options available to modify an existing subnet object.

14. Click the Subnets tab in the Network pane.
15. Click Refresh to make sure that you are seeing the latest information.
16. In the Subnets list, select the entry for new-subnet.
17. Click Edit.


Figure 3-8:
The Edit Subnet dialog box opens. Examine the options available to modify the subnet.
18. In the Broadcast Domain area of the dialog box, expand Show ports on this domain. Review the
various settings.
19. When finished, click Cancel to discard any changes you might have made.


18

19

Figure 3-9:
Tip:
Export policies, which restrict which clients can access an exported volume or share, are not covered in
this lab, but export policy misconfiguration is a common problem that can easily be misinterpreted as a
networking problem. If you are able to reach a data LIF through the network by using a utility such as ping,
and you have verified that protocol access is enabled and configured properly, check your export policy
configuration to verify that it allows access from the client you are attempting to use.
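A quick way to inspect those rules is from the clustershell; a sketch against svm1 and its default policy (illustrative names, not a step in this lab):

cluster1::> vserver export-policy rule show -vserver svm1 -policyname default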
If you would like to learn more about export policies and how to troubleshoot them, please refer to the
"Securing Clustered Data ONTAP" lab.

3.5 Quality of Service (QoS)


Quality of service (QoS) in clustered Data ONTAP allows the cluster administrator to limit the IOPS or raw
throughput available to an SVM, LUN, volume, or file (such as a VMDK file). QoS can be used to control
workloads that excessively consume resources, and to manage tenant service levels natively inside the storage
system.


3.5.1 Exercise
In this activity, you examine the QoS configuration using System Manager on cluster1. This exercise uses a
workload generator to drive I/O to an SVM on cluster1. After the workload generator starts, you will configure QoS
and see the reduction of I/O operations serviced to the workload generator.
The workload generator runs directly on the Windows jumphost, and targets I/O to the drive letter Z:. The
jumphost has the drive letter Z: mapped to a CIFS share on svm1 in cluster1. The CIFS share is defined on the
volume svm1_vol01 inside svm1.
Note: This exercise uses the PuTTY session for cluster1.
From the Windows host JUMPHOST:
1. Double-click the workload.bat file on the left side of the desktop to start the workload generator.

Figure 3-10:
2. A Windows command prompt window opens, and starts outputting metrics about the I/O load that it is
generating against the share mounted on the jump host Z: drive.

Figure 3-11:

In particular, note the values shown for the ios: field that quantifies that I/O load. In this exercise, you
will configure QoS to limit these I/O operations, thus reducing the amount of load serviced by the cluster.


3. In Chrome, click the browser tab for cluster1 to open System Manager.
4. In the left pane, click the Storage Virtual Machines tab.
5. Navigate to cluster1 > svm1 > Policies > QoS Policy Groups.
6. In the QoS Policy Group pane, click Create.

Figure 3-12:

The Create Policy Group dialog box opens.


7. Set the fields in the window as follows:
   Policy Group Name: 100-KB-sec
   Maximum Throughput: 100 KB/s
8. Click the Create button.

Figure 3-13:


The Create Policy Group dialog box closes, and you return to the System Manager window.
9. Your newly created policy should be listed in the QoS Policy Groups pane.

Figure 3-14:
10. In the left pane of System Manager, navigate to Storage Virtual Machines > cluster1 > svm1 >
Storage > Volumes.
11. In the Volumes pane, select the svm1_vol01 volume.
12. From the buttons at the top of the Volumes pane, click the Storage QoS button. If your browser window
is not wide enough to display all the buttons, you can click the small >> button at the right end of the
row to reveal the hidden buttons. If you do not even see the >> button, try widening your browser
window.


Figure 3-15:
The Quality of Service Details dialog box opens.
13. Select the Manage Storage Quality of Service checkbox.
14. Click the option to assign the volume to an Existing Policy Group.
15. Click Choose.


Figure 3-16:
The Select Policy Group dialog box opens.
16. Select the 100-KB-sec policy group you created earlier.
17. Click OK.

Figure 3-17:


The Select Policy Group dialog box closes, and you return to the Quality of Service Details dialog
box.
18. Click OK to apply the policy group to the svm1_vol01 volume. This policy group assignment takes effect
as soon as you click OK.


Figure 3-18:
19. Quickly go back to the command prompt window that is outputting the metrics from your load generator,
and observe that the reported ios: metric has dropped significantly from its previous level. In the
example in the screenshot, the ios: values dropped from the 1500 range down to 100 (note the
highlighting in the screenshot).


Figure 3-19:
20. With the workload generator window in focus, press Ctrl-C. When asked if you want to terminate the
batch job, answer y.

Figure 3-20:
The workload generator window closes, ending this exercise.
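If you prefer the CLI, the same policy can be created and applied from the clustershell. A minimal sketch using the names from this exercise; the final command simply lets you watch the throttle take effect:

cluster1::> qos policy-group create -policy-group 100-KB-sec -vserver svm1 -max-throughput 100KB/s
cluster1::> volume modify -vserver svm1 -volume svm1_vol01 -qos-policy-group 100-KB-sec
cluster1::> qos statistics performance show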


3.6 SnapMirror
SnapMirror is the asynchronous replication technology used in clustered Data ONTAP. Asynchronous replication
refers to data that is replicated (backed up to the same site, or an alternate site) on a periodic interval, rather than
as soon as the data is written.
MetroCluster, introduced with clustered Data ONTAP 8.3, provides synchronous replication. Synchronous
replication refers to data that is replicated (backed up to the same site, or an alternate site) as soon as the data is
written. MetroCluster configuration is outside the scope of this lab.
Clustered Data ONTAP 8.3 provides a number of SnapMirror enhancements, including version-flexible
SnapMirror, which allows the source of a SnapMirror relationship to be upgraded first (provided that
the source and destination both run clustered Data ONTAP 8.3 or later).
In this lab activity, you create a version-flexible SnapMirror relationship between two volumes in cluster1 and
cluster2. To do this, you first set up cluster peering between cluster1 and cluster2 by adding LIFs dedicated to
intercluster peering, then establish an authenticated relationship between the clusters. After the cluster peering
relationship is created, you will create a SnapMirror relationship between a volume on cluster1 (that serves as the
source of the SnapMirror relationship) and another volume on cluster2 (that serves as the disaster recovery (DR)
copy).

3.6.1 Exercise

3.6.1.1 Create Intercluster LIFs.


Before you set up the authenticated relationship between cluster1 and cluster2, the clusters must be able to
communicate with each other. Intercluster LIFs serve this purpose.
Perform the following tasks to create intercluster LIFs.
Attention: In this exercise, you use System Manager both on cluster1 and on cluster2, so pay special
attention to which cluster you are connected to during each step.
1. In your Chrome browser, click the browser tab for cluster1.
2. In the left pane, click the Cluster tab.
3. Navigate to cluster1 > Configuration > Network.
4. In the Network pane, click the Network Interfaces tab.
5. In the Network pane, click the Create button.

Figure 3-21:

The Create Network Interface dialog box opens.
6. Set the name to intercluster_lif1.
7. In the Interface Role section, select the Intercluster Connectivity option.
8. In the Port section, expand the Port or Adapters list for cluster1-01 and select port e0c.
9. Click Create.

Figure 3-22:
The dialog box closes and you return to the System Manager window.
10. Your newly created intercluster_lif1 LIF should be listed under the Network Interface tab in the
Networks pane.
11. Every node in cluster1 requires an intercluster LIF, and since cluster1 is a two-node cluster, you
also need to create an intercluster LIF for cluster1-02. Click the Create button again.


Figure 3-23:

The Create Network Interface dialog box opens.
12. Set the name to intercluster_lif2.
13. In the Interface Role section, select the Intercluster Connectivity option.
14. In the Port section, expand the Port or Adapters list for cluster1-02, and select port e0c.
15. Click Create.

Figure 3-24:
The dialog box closes, and you return to the System Manager window. At this point, you have an
intercluster LIF on each node in cluster1. When you created both intercluster LIFs, you accepted the
default to have Data ONTAP automatically select an IP address from the subnet. Review those LIFs to
verify which IP addresses Data ONTAP assigned to the intercluster LIFs.
16. System Manager should still show the Network Interface list in the Network pane. Scroll down to the
bottom of the list to see the entries for the new intercluster LIFs that you created. The IP addresses of
those LIFs are included in the list entries.
17. If you click a specific LIF, you can see more detail displayed on the bottom of the pane.


In this example, the IP addresses for the intercluster LIFs are 192.168.0.158 and 192.168.0.159.
However, because Data ONTAP automatically assigns these addresses, it is possible that the values in
your lab are different from the values in the example.
Attention: Record the actual addresses assigned to the intercluster LIFs in your lab because
you will need them for a later step of the lab.


Figure 3-25:

After you create the intercluster LIFs for cluster1, create the intercluster LIFs for cluster2. cluster2
contains a single node, so you will create only one intercluster LIF for this cluster.
18. In your Chrome browser, click the browser tab for cluster2.
19. In the left pane, click the Cluster tab.
20. Navigate to cluster2 > Configuration > Network.
21. In the Network pane, click the Network Interfaces tab.
22. In the Network pane, click Create.

Figure 3-26:
The Create Network Interface dialog box opens.
23. Set the name to intercluster_lif1 (you can use the same name here that you used on cluster1
because LIF names are scoped to the containing cluster).
24. In the Interface Role section, select the Intercluster Connectivity option.
25. In the Port section, expand the Port or Adapters list for cluster2-01, and select port e0c.
26. Click Create.


Figure 3-27:
The dialog box closes, and you return to the System Manager window.
27. Record the IP address that Data ONTAP automatically assigned to your LIF. In this example, the
address is 192.168.0.163, but the value may be different in your lab.


Figure 3-28:
Cluster2 only contains a single node, so this one intercluster LIF is all you need.
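For reference, each intercluster LIF can also be created with a single clustershell command; a minimal sketch (the subnet name is a placeholder for whichever subnet serves this network in your lab):

cluster2::> network interface create -vserver cluster2 -lif intercluster_lif1 -role intercluster -home-node cluster2-01 -home-port e0c -subnet-name <subnet>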
Now that all your nodes have intercluster LIFs, it's time to establish the cluster peering relationship.
28. In your Chrome browser, click the browser tab for cluster1.
29. In the left pane, click the Cluster tab.
30. Navigate to cluster1 > Configuration > Peers.
31. In the Peers pane, click Create.

Figure 3-29:
The Create Cluster Peer dialog box opens.
32. In the Passphrase box enter Netapp1!.
33. In the Intercluster IP Addresses box, add the IP address that you noted earlier for the intercluster LIF
(intercluster_lif1) from the node cluster2-01.
Caution: In the example shown in this lab, the address was 192.168.0.163, but the address
that Data ONTAP assigned to the LIF in your lab may be different.
34. Click the Create button.


Figure 3-30:
The Confirm Create Cluster Peer dialog box opens.
35. Click OK.

Figure 3-31:
The dialog box closes, and you return to the System Manager window.
36. An entry for cluster2 now appears in the Peers list, but it is shown as unavailable because the
authentication status is still pending. You have initiated a cluster peering operation from cluster1, but to
complete it, cluster2 must also accept the peering request.


Figure 3-32:

Switch back to cluster2 so that you can accept the cluster peering operation.
37. In your Chrome browser, click the browser tab for cluster2.
38. In the left pane, click the Cluster tab.
39. Navigate to cluster2 > Configuration > Peers.
40. In the Peers pane, click Create.

Figure 3-33:
The Create Cluster Peer dialog box opens.
41. In the Passphrase box, enter the same password you used earlier, Netapp1!.
42. In the Intercluster IP addresses box, enter the IP addresses that you noted earlier for the intercluster
LIFs (intercluster_lif1 and intercluster_lif2) from the nodes cluster1-01 and cluster1-02.
Caution: In the example shown in this lab, those addresses were 192.168.0.158 and
192.168.0.159, but the addresses that Data ONTAP assigned to the LIFs in your lab may be
different.
43. When finished entering the values, click the Create button.


Figure 3-34:
The Confirm Create Cluster Peer dialog box opens.
44. Click the OK button.

Figure 3-35:
The dialog box closes, and you return to the System Manager window.
45. System Manager takes a few moments to create the peer relationship between cluster1 and cluster2.
The authentication status for that relationship should change to ok immediately, but the Availability
column will show peering.
46. Wait a few seconds, then click Refresh every few seconds until the Availability column changes from
peering to available.


Figure 3-36:
At this point, the two clusters have an established peering relationship. Next, you can create a
SnapMirror relationship.
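You can confirm the same state from either cluster's clustershell; a quick sketch (the health command reports each node's connectivity to the peer):

cluster1::> cluster peer show
cluster1::> cluster peer health show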

3.6.1.2 Create a SnapMirror Relationship


Because you created a peering relationship between the two clusters, they are now capable of entering into a
SnapMirror relationship between each other. In this exercise, you will establish a SnapMirror relationship between
an SVM volume on each cluster.
1. In your Chrome browser, click the browser tab for cluster1.
2. In the left pane, click the Storage Virtual Machines tab.
3. In the left pane, navigate to cluster1 > svm1 > Storage > Volumes.
4. In the Volumes pane, select the entry for svm1_vol01.
5. In the Volumes pane, click the Protect by button on the right side of the button bar. If you don't see this
button, it may be hidden because your browser window is not wide enough. In this case, use the >>
button at the far right side of the button bar to display the hidden buttons.
6. In the drop-down menu for the Protect by button, select Mirror.

Figure 3-37:
The Create Mirror Relationship dialog box opens.
7. In the Destination Volume section, verify that the Cluster list is set to cluster2, and set the Storage Virtual
Machine list to svm1-dr.
8. Note the warning under this list saying that the selected SVM is not peered. Click the Authenticate link
at the end of that sentence.


Figure 3-38:
The Authentication dialog box opens.
9. Set the user name to admin and the password to Netapp1!.
10. Click OK.

Figure 3-39:
The Authentication dialog box closes and the system processes the SVM peering operation. After a few
seconds, you return to the Create Mirror Relationship dialog box.
11. In the Destination Volume section, accept the default values that System Manager populated into the
Volume Name box (svm1_svm1_vol01_mirror1) and the Aggregate box (aggr1_cluster2_01).
12. In the Configuration Details section, select the Create version flexible mirror relationship
checkbox. This is a new feature introduced in 8.3 that removes the limitation requiring the destination
controller to run a clustered Data ONTAP major version equal to or higher than that of the source
controller, which allows customers to maintain uninterrupted replication during Data ONTAP upgrade
cycles.
13. In the Mirror Schedule list, select the daily value.
14. When finished, click Create.


Figure 3-40:
The Create Mirror Relationship wizard begins the process of establishing and initializing the SnapMirror
relationship between the volumes.
15. When all the initialization operations indicate success, click OK.


Figure 3-41:

You have now successfully established a SnapMirror relationship. To verify the status of that
relationship, you need to look at the destination cluster.
16. In Chrome, select the browser tab for cluster2.
17. Select the Storage Virtual Machines tab.
18. Navigate to cluster2 > svm1-dr > Protection.
19. In the Protection pane, select the relationship for source volume svm1_vol01. This should be the only
relationship listed.
20. In the lower pane, click the Details tab.
21. Examine the details of this relationship, which indicate that it is healthy and that the last transfer
completed a few moments ago.
Figure 3-42:
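For reference, the relationship you just built through System Manager could also have been created from the destination cluster's clustershell. A minimal sketch using the names from this exercise; on the CLI, a version-flexible mirror is relationship type XDP:

cluster2::> snapmirror create -source-path svm1:svm1_vol01 -destination-path svm1-dr:svm1_svm1_vol01_mirror1 -type XDP -schedule daily
cluster2::> snapmirror initialize -destination-path svm1-dr:svm1_svm1_vol01_mirror1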
This completes the exercise.

3.7 Disaster Recovery for Storage Virtual Machines

Traditional volume SnapMirror requires you to set up a separate mirroring relationship for each volume you want
to mirror. In cases where you want to mirror many volumes for an SVM, you have to set up many SnapMirror
relationships, and even then you have to manually maintain all the configuration for the destination SVM,
including setting up LIFs, namespaces, protocols, and so on.
Disaster Recovery for Storage Virtual Machines, also referred to as SVM DR, is a solution that uses SnapMirror
to mirror a storage virtual machine's (SVM's) entire set of volumes and its configuration. It simplifies failover by
minimizing or completely avoiding manual configuration at the destination SVM through automated setup and
change management.
To set up an SVM DR relationship, you create one SnapMirror relationship that replicates the entire SVM's
contents; as you add, remove, or re-junction volumes, SVM DR automatically applies those changes to the
destination SVM according to your replication schedule, potentially along with other SVM configuration settings.
When you create an SVM DR relationship, you can choose to replicate all or a subset of the source SVM's
configuration to the destination SVM. This choice is controlled through the -identity-preserve command line
option.
When -identity-preserve is set to true, SVM DR replicates the source SVM configuration settings listed in the
following figure to the destination SVM. Since this mode replicates network identity information, the destination
SVM does require access to the same network resources (physical/virtual networks, Active Directory servers,
etc.) as the source SVM. This is the identity preserve mode that most customers will likely want to deploy for
disaster recovery needs.


Figure 3-43:
When -identity-preserve is set to false, only a subset of the source SVM's configuration data is replicated to the
destination SVM, as described in the following figure. This mode is intended for replication to different sites that
have different network resources, or to support the creation of additional read-only copies of the SVM within the
same environment as the source SVM.


Figure 3-44:
As with traditional volume SnapMirror, SVM DR relationships can be broken off, reversed, and re-synchronized,
allowing you to cut over the SVM's services from one cluster to another. If -identity-preserve is set to true, then
when you stop the source SVM and start the destination SVM, the destination SVM has the same LIFs, IP
addresses, namespace structure, and so on. However, such a switchover is disruptive for both CIFS (which
requires an SMB reconnect) and NFS (which requires a re-mount).
SVM DR does not replicate iSCSI or FCP configuration in either -identity-preserve mode. The underlying volumes,
LUNs, and namespace are still replicated, as are the LIFs if -identity-preserve is set to true, but LUN igroups/
portsets will not be replicated, nor will the SVM's iSCSI/FCP protocol configuration. If you want to support iSCSI/
FCP through an SVM DR relationship, you will have to manually configure the iSCSI/FCP protocols, igroups,
and portsets on the destination SVM.
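For example, an igroup would need to be re-created by hand on the destination SVM after a cut-over; a minimal sketch with illustrative names (the igroup, OS type, and initiator IQN are assumptions, not part of this lab):

cluster2::> lun igroup create -vserver svm3-dr -igroup esx_hosts -protocol iscsi -ostype vmware -initiator iqn.1998-01.com.vmware:esx1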

3.7.1 Exercise

In this exercise you will create an identity-preserve "true" SVM DR relationship from the source SVM
svm3 on cluster1 to a new SVM named svm3-dr that you will create on cluster2. You will then perform a
cut-over operation, making svm3-dr the new operational primary, and then revert the primary back to svm3 again.
Note: This lab utilizes CLI sessions to the storage clusters cluster1 and cluster2, and to the Linux client
rhel1. You will be frequently switching between these sessions, so pay attention to the command prompts in
this exercise to help you issue the commands on the correct hosts.
1. Open a PuTTY session to each of cluster1 and cluster2, and log in with the username admin and the
password Netapp1!.


2. Open a PuTTY session to rhel1, and log in as root with the password Netapp1!.
3. In the PuTTY session for cluster2, display a list of the SVMs on the cluster.

cluster2::> vserver show
                                   Admin      Operational Root
Vserver     Type    Subtype        State      State       Volume     Aggregate
----------- ------- -------------- ---------- ----------- ---------- ----------
cluster2    admin   -              -          -           -          -
cluster2-01 node    -              -          -           -          -
svm1-dr     data    default        running    running     svm1dr_    aggr1_
                                                          root       cluster2_
                                                                     01
3 entries were displayed.

cluster2::>

4. Create the destination SVM svm3-dr.


cluster2::> vserver create -vserver svm3-dr -subtype dp-destination
[Job 314] Job is queued: Create svm3-dr.
[Job 314]
[Job 314] Job succeeded:
Vserver creation completed
cluster2::>

5. List the SVMs on cluster2.

cluster2::> vserver show
                                   Admin      Operational Root
Vserver     Type    Subtype        State      State       Volume     Aggregate
----------- ------- -------------- ---------- ----------- ---------- ----------
cluster2    admin   -              -          -           -          -
cluster2-01 node    -              -          -           -          -
svm1-dr     data    default        running    running     svm1dr_    aggr1_
                                                          root       cluster2_
                                                                     01
svm3-dr     data    dp-destination running    stopped     -          -
4 entries were displayed.

cluster2::>

Notice that the svm3-dr SVM is administratively running but is operationally stopped.
6. On cluster2, initiate an SVM peering relationship between the svm3-dr and svm3.
cluster2::> vserver peer create -vserver svm3-dr -peer-vserver svm3 -applications snapmirror
-peer-cluster cluster1
Info: [Job 315] 'vserver peer create' job queued
cluster2::>

7. View the SVM peering status.

cluster2::> vserver peer show
            Peer        Peer        Peering
Vserver     Vserver     State       Applications
----------- ----------- ----------- ------------------
svm1-dr     svm1        peered      snapmirror
svm3-dr     svm3        initiated   snapmirror
2 entries were displayed.

cluster2::>

8. On cluster1, view the SVM peering status.

cluster1::> vserver peer show
            Peer        Peer        Peering
Vserver     Vserver     State       Applications
----------- ----------- ----------- ------------------
svm1        svm1-dr     peered      snapmirror
svm3        svm3-dr     pending     snapmirror
2 entries were displayed.

cluster1::>

9. Accept the pending peering request.


cluster1::> vserver peer accept -vserver svm3 -peer-vserver svm3-dr
Info: [Job 1030] 'vserver peer accept' job queued
cluster1::>

10. View the SVM peering status again.

cluster1::> vserver peer show
            Peer        Peer        Peering
Vserver     Vserver     State       Applications
----------- ----------- ----------- ------------------
svm1        svm1-dr     peered      snapmirror
svm3        svm3-dr     peered      snapmirror
2 entries were displayed.

cluster1::>

11. On cluster2, create the SnapMirror relationship between the source SVM svm3 and the destination
SVM svm3-dr.
cluster2::> snapmirror create -source-path svm3: -destination-path svm3-dr: -type DP
-throttle unlimited -identity-preserve true -schedule hourly
cluster2::>

If you are familiar with creating volume SnapMirror relationships from the CLI, this command should
look familiar, as it is essentially the same command used for volume SnapMirror, but with a few
key differences. Most significant is the format of the values for the -source-path and -destination-path
arguments. Path values for volume SnapMirror take the form <svm>:<volume>, whereas SVM
DR paths take the form <svm>:. The other difference is the inclusion of the -identity-preserve true
option, which indicates that this is an identity preserve relationship, meaning that all of the SVM's
configuration information should be replicated to the destination SVM. If you were to instead specify
-identity-preserve false, this would be an identity discard relationship.
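Side by side, the two path styles look like this; a sketch using names from this lab, with both commands run on the destination cluster:

cluster2::> snapmirror create -source-path svm1:svm1_vol01 -destination-path svm1-dr:svm1_svm1_vol01_mirror1 -type XDP
cluster2::> snapmirror create -source-path svm3: -destination-path svm3-dr: -type DP -identity-preserve true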
12. Display the state of the cluster's SnapMirror relationships.
cluster2::> snapmirror show
                                                                       Progress
Source            Destination  Mirror        Relationship  Total         Last
Path        Type  Path         State         Status        Progress Healthy Updated
----------- ----  ------------ ------------- ------------- -------- ------- --------
svm1:svm1_vol01
            XDP   svm1-dr:svm1_svm1_vol01_mirror1
                               Snapmirrored  Idle          -        true    -
svm3:       DP    svm3-dr:     Uninitialized Idle          -        true    -
2 entries were displayed.

cluster2::>

Data ONTAP has created the relationship, but not yet initialized it (i.e. it has not initiated the first data
transfer).
13. Initialize the SnapMirror relationship.
cluster2::> snapmirror initialize -destination-path svm3-dr:
cluster2::>

60

Advanced Concepts for Clustered Data ONTAP 8.3.1

2015 NetApp, Inc. All rights reserved. NetApp Proprietary

14. View the status of the SnapMirror relationships again.

cluster2::> snapmirror show
                                                                       Progress
Source            Destination  Mirror        Relationship  Total         Last
Path        Type  Path         State         Status        Progress Healthy Updated
----------- ----  ------------ ------------- ------------- -------- ------- --------
svm1:svm1_vol01
            XDP   svm1-dr:svm1_svm1_vol01_mirror1
                               Snapmirrored  Idle          -        true    -
svm3:       DP    svm3-dr:     Uninitialized Transferring  -        true    -
2 entries were displayed.

cluster2::>

Data has started transferring for the relationship.
Notice that there is only a single entry displayed for the SVM DR relationship, even though behind the
scenes there are multiple SnapMirror relationships in operation for this relationship.
15. Display the status of all the constituents for the SVM disaster recovery relationships.
cluster2::> snapmirror show -expand
                                                                       Progress
Source            Destination  Mirror        Relationship  Total         Last
Path        Type  Path         State         Status        Progress Healthy Updated
----------- ----  ------------ ------------- ------------- -------- ------- --------
svm1:svm1_vol01
            XDP   svm1-dr:svm1_svm1_vol01_mirror1
                               Snapmirrored  Idle          -        true    -
svm3:       DP    svm3-dr:     Uninitialized Transferring  -        true    -
2 entries were displayed.

cluster2::>

When you initialize an SVM DR relationship, clustered Data ONTAP starts replicating the configuration
data first, which includes details of the source SVM's volumes, and then afterward starts replicating
the source SVM's constituent volumes. If you issue a snapmirror show -expand command early in the
initialization process, the constituent relationships may not yet exist.
16. Periodically repeat the snapmirror show -expand command until you start seeing output for the
constituent relationships.
cluster2::> snapmirror show -expand
                                                                       Progress
Source            Destination  Mirror        Relationship  Total         Last
Path        Type  Path         State         Status        Progress Healthy Updated
----------- ----  ------------ ------------- ------------- -------- ------- --------
svm1:svm1_vol01
            XDP   svm1-dr:svm1_svm1_vol01_mirror1
                               Snapmirrored  Idle          -        true    -
svm3:       DP    svm3-dr:     Uninitialized Transferring  -        true    -
svm3:chn    DP    svm3-dr:chn  Uninitialized Idle          -        true    -
svm3:eng    DP    svm3-dr:eng  Uninitialized Idle          -        true    -
svm3:fin    DP    svm3-dr:fin  Uninitialized Idle          -        true    -
svm3:mfg    DP    svm3-dr:mfg  Uninitialized Idle          -        true    -
svm3:prodA  DP    svm3-dr:prodA
                               Uninitialized Idle          -        true    -
svm3:proj1  DP    svm3-dr:proj1
                               Uninitialized Idle          -        true    -
8 entries were displayed.

cluster2::>

17. Periodically issue the snapmirror show command until the relationship status changes to "Idle".

cluster2::> snapmirror show
                                                                       Progress
Source            Destination  Mirror        Relationship  Total         Last
Path        Type  Path         State         Status        Progress Healthy Updated
----------- ----  ------------ ------------- ------------- -------- ------- --------
svm1:svm1_vol01
            XDP   svm1-dr:svm1_svm1_vol01_mirror1
                               Snapmirrored  Idle          -        true    -
svm3:       DP    svm3-dr:     Snapmirrored  Idle          -        true    -
2 entries were displayed.

cluster2::>

The relationship has completed initialization, meaning that the destination SVM is now a mirrored copy
of the source SVM.
18. Examine the status of the constituent relationships.
cluster2::> snapmirror show -expand
                                                                       Progress
Source            Destination  Mirror        Relationship  Total         Last
Path        Type  Path         State         Status        Progress Healthy Updated
----------- ----  ------------ ------------- ------------- -------- ------- --------
svm1:svm1_vol01
            XDP   svm1-dr:svm1_svm1_vol01_mirror1
                               Snapmirrored  Idle          -        true    -
svm3:       DP    svm3-dr:     Snapmirrored  Idle          -        true    -
svm3:chn    DP    svm3-dr:chn  Snapmirrored  Idle          -        true    -
svm3:eng    DP    svm3-dr:eng  Snapmirrored  Idle          -        true    -
svm3:fin    DP    svm3-dr:fin  Snapmirrored  Idle          -        true    -
svm3:mfg    DP    svm3-dr:mfg  Snapmirrored  Idle          -        true    -
svm3:prodA  DP    svm3-dr:prodA
                               Snapmirrored  Idle          -        true    -
svm3:proj1  DP    svm3-dr:proj1
                               Snapmirrored  Idle          -        true    -
svm3:us     DP    svm3-dr:us   Snapmirrored  Idle          -        true    -
9 entries were displayed.

cluster2::>

These are likewise now all "Idle".


19. Display a list of the volumes on cluster2.
cluster2::> vol show
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
cluster2  MDV_CRS_5165c5f0174711e4b3b8005056990685_A
                       aggr1_cluster2_01
                                    online     RW         20MB    18.79MB    6%
cluster2  MDV_CRS_5165c5f0174711e4b3b8005056990685_B
                       aggr1_cluster2_01
                                    online     RW         20MB    18.89MB    5%
cluster2-01 vol0       aggr0        online     RW       7.17GB     4.23GB   40%
svm1-dr   svm1_svm1_vol01_mirror
                       aggr1_cluster2_01
                                    online     DP      128.0MB    121.3MB    5%
svm1-dr   svm1_svm1_vol01_mirror1
                       aggr1_cluster2_01
                                    online     DP      128.0MB    121.4MB    5%
svm1-dr   svm1dr_root  aggr1_cluster2_01
                                    online     RW         20MB    18.88MB    5%
svm3-dr   chn          aggr1_cluster2_01
                                    online     DP          1GB    972.5MB    5%
svm3-dr   eng          aggr1_cluster2_01
                                    online     DP          1GB    972.5MB    5%
svm3-dr   fin          aggr1_cluster2_01
                                    online     DP          1GB    972.5MB    5%
svm3-dr   mfg          aggr1_cluster2_01
                                    online     DP          1GB    972.5MB    5%
svm3-dr   prodA        aggr1_cluster2_01
                                    online     DP          1GB    972.5MB    5%
svm3-dr   proj1        aggr1_cluster2_01
                                    online     DP          1GB    972.5MB    5%
svm3-dr   svm3_root    aggr1_cluster2_01
                                    online     RW         20MB    18.85MB    5%
svm3-dr   us           aggr1_cluster2_01
                                    online     DP          1GB    972.5MB    5%
14 entries were displayed.

cluster2::>

You see here that svm3-dr has 8 volumes, which correspond to the 8 volumes on svm3. Also notice
the two MDV* volumes at the beginning of the output; these are special volumes that clustered Data
ONTAP uses to replicate the SVM DR configuration data from the source SVM to the destination SVM.
20. Display a list of the volume snapshots for svm3-dr.


Note: Since this command output is lengthy, the following CLI examples will focus on just the
eng volume, but in your lab feel free to exclude the -volume eng portion of the command so you
can see the snapshots for all of svm3-dr's volumes.
cluster2::> snapshot show -vserver svm3-dr -volume eng
                                                                   ---Blocks---
Vserver  Volume   Snapshot                                     Size Total% Used%
-------- -------- ---------------------------------------- -------- ------ -----
svm3-dr  eng
                  daily.2015-10-03_0010                       168KB     0%   37%
                  daily.2015-10-04_0010                        84KB     0%   23%
                  weekly.2015-10-04_0015                      192KB     0%   40%
                  hourly.2015-10-04_1205                      144KB     0%   33%
                  hourly.2015-10-04_1305                      148KB     0%   34%
                  hourly.2015-10-04_1405                      144KB     0%   33%
                  hourly.2015-10-04_1505                      152KB     0%   35%
                  hourly.2015-10-04_1605                      156KB     0%   35%
                  hourly.2015-10-04_1705                      148KB     0%   34%
                  vserverdr.0.502f4455-6abf-11e5-b770-005056990685.2015-10-04_175125
                                                                 0B     0%    0%
10 entries were displayed.

cluster2::>

Notice the "vserverdr" snapshot created by SnapMirror.


21. On cluster1, display the list of snapshots for svm3's volumes.
cluster1::> snapshot show -vserver svm3 -volume eng
                                                                   ---Blocks---
Vserver  Volume   Snapshot                                     Size Total% Used%
-------- -------- ---------------------------------------- -------- ------ -----
svm3     eng
                  daily.2015-10-03_0010                       168KB     0%   34%
                  daily.2015-10-04_0010                        84KB     0%   21%
                  weekly.2015-10-04_0015                      192KB     0%   38%
                  hourly.2015-10-04_1205                      144KB     0%   31%
                  hourly.2015-10-04_1305                      148KB     0%   32%
                  hourly.2015-10-04_1405                      144KB     0%   31%
                  hourly.2015-10-04_1505                      152KB     0%   32%
                  hourly.2015-10-04_1605                      156KB     0%   33%
                  hourly.2015-10-04_1705                      148KB     0%   32%
                  vserverdr.0.502f4455-6abf-11e5-b770-005056990685.2015-10-04_175125
                                                               92KB     0%   22%
10 entries were displayed.

cluster1::>

The list of snapshots is the same on both the source and destination volumes.
22. On cluster2, initiate a SnapMirror update to transfer any changes on the source SVM since the last
transfer took place to the destination SVM.
cluster2::> snapmirror update -destination-path svm3-dr:
cluster2::>

23. Periodically view the status of the SnapMirror relationships until it goes idle.
cluster2::> snapmirror show
                                                                       Progress
Source            Destination  Mirror        Relationship  Total         Last
Path        Type  Path         State         Status        Progress Healthy Updated
----------- ----  ------------ ------------- ------------- -------- ------- --------
svm1:svm1_vol01
            XDP   svm1-dr:svm1_svm1_vol01_mirror1
                               Snapmirrored  Idle          -        true    -
svm3:       DP    svm3-dr:     Snapmirrored  Idle          -        true    -
2 entries were displayed.

cluster2::>

24. View the status of the constituent relationships.


cluster2::> snapmirror show -expand
                                                                       Progress
Source            Destination  Mirror        Relationship  Total         Last
Path        Type  Path         State         Status        Progress Healthy Updated
----------- ----  ------------ ------------- ------------- -------- ------- --------
svm1:svm1_vol01
            XDP   svm1-dr:svm1_svm1_vol01_mirror1
                               Snapmirrored  Idle          -        true    -
svm3:       DP    svm3-dr:     Snapmirrored  Transferring  896KB    true    -
svm3:chn    DP    svm3-dr:chn  Snapmirrored  Idle          -        true    -
svm3:eng    DP    svm3-dr:eng  Snapmirrored  Idle          -        true    -
svm3:fin    DP    svm3-dr:fin  Snapmirrored  Idle          -        true    -
svm3:mfg    DP    svm3-dr:mfg  Snapmirrored  Idle          -        true    -
svm3:prodA  DP    svm3-dr:prodA
                               Snapmirrored  Idle          -        true    -
svm3:proj1  DP    svm3-dr:proj1
                               Snapmirrored  Idle          -        true    -
svm3:us     DP    svm3-dr:us   Snapmirrored  Idle          -        true    -
9 entries were displayed.

cluster2::>

25. Display again the list of svm3-dr's volume snapshots.


cluster2::> snapshot show -vserver svm3-dr -volume eng
                                                                   ---Blocks---
Vserver  Volume   Snapshot                                     Size Total% Used%
-------- -------- ---------------------------------------- -------- ------ -----
svm3-dr  eng
                  daily.2015-10-03_0010                       168KB     0%   37%
                  daily.2015-10-04_0010                        84KB     0%   23%
                  weekly.2015-10-04_0015                      192KB     0%   40%
                  hourly.2015-10-04_1205                      144KB     0%   33%
                  hourly.2015-10-04_1305                      148KB     0%   34%
                  hourly.2015-10-04_1405                      144KB     0%   33%
                  hourly.2015-10-04_1505                      152KB     0%   35%
                  hourly.2015-10-04_1605                      156KB     0%   35%
                  hourly.2015-10-04_1705                      148KB     0%   34%
                  vserverdr.0.502f4455-6abf-11e5-b770-005056990685.2015-10-04_175125
                                                              124KB     0%   30%
                  vserverdr.1.502f4455-6abf-11e5-b770-005056990685.2015-10-04_175646
                                                                 0B     0%    0%
11 entries were displayed.

cluster2::>

Now there are two vserverdr* snapshots listed. After your first update, SnapMirror maintains 2 rolling
snapshots on the destination volume going forward.
26. On cluster1, look at the snapshots on the source volumes.
cluster1::> snapshot show -vserver svm3 -volume eng
                                                                   ---Blocks---
Vserver  Volume   Snapshot                                     Size Total% Used%
-------- -------- ---------------------------------------- -------- ------ -----
svm3     eng
                  daily.2015-10-03_0010                       168KB     0%   37%
                  daily.2015-10-04_0010                        84KB     0%   23%
                  weekly.2015-10-04_0015                      192KB     0%   40%
                  hourly.2015-10-04_1205                      144KB     0%   34%
                  hourly.2015-10-04_1305                      148KB     0%   34%
                  hourly.2015-10-04_1405                      144KB     0%   34%
                  hourly.2015-10-04_1505                      152KB     0%   35%
                  hourly.2015-10-04_1605                      156KB     0%   35%
                  hourly.2015-10-04_1705                      152KB     0%   35%
                  vserverdr.1.502f4455-6abf-11e5-b770-005056990685.2015-10-04_175646
                                                               96KB     0%   25%
10 entries were displayed.

cluster1::>

Even after the first update, the source volume continues to host a single rolling snapshot for
SnapMirror.
27. The Linux host rhel1 has had svm3's root namespace volume NFS-mounted since the start of the lab.
Display the /etc/fstab entry for this mount. (The /etc/fstab file lists the local disks and NFS filesystems
that should be automatically mounted at system boot time.)
[root@rhel1 ~]# grep svm3 /etc/fstab
svm3:/    /corp    nfs    defaults    0 0
[root@rhel1 ~]#

28. Display the details of that existing mount.

[root@rhel1 ~]# df /corp
Filesystem     1K-blocks  Used Available Use% Mounted on
svm3:/             19456   128     19328   1% /corp
[root@rhel1 ~]#

Svm3's namespace root is mounted as /corp on rhel1.


29. List the contents of the /corp directory.
[root@rhel1 ~]# ls /corp
eng fin mfg
[root@rhel1 ~]#

You have no problem displaying the contents.


Next you initiate a cut-over. As mentioned in the introduction to this exercise, a cut-over is disruptive to
NFS clients in this initial release of SVM disaster recovery and so you should unmount the NFS volume
from the rhel1 client before the cut-over.
30. Unmount the /corp mount.
[root@rhel1 ~]# umount /corp
[root@rhel1 ~]#

31. On cluster2, quiesce any running snapmirror operations.


cluster2::> snapmirror quiesce -destination-path svm3-dr:
cluster2::>

32. Verify that the snapmirror relationship is quiesced.

cluster2::> snapmirror show
                                                                       Progress
Source            Destination  Mirror        Relationship  Total         Last
Path        Type  Path         State         Status        Progress Healthy Updated
----------- ----  ------------ ------------- ------------- -------- ------- --------
svm1:svm1_vol01
            XDP   svm1-dr:svm1_svm1_vol01_mirror1
                               Snapmirrored  Idle          -        true    -
svm3:       DP    svm3-dr:     Snapmirrored  Quiesced      -        true    -

cluster2::>

33. Break off the SnapMirror relationship.


cluster2::> snapmirror break -destination-path svm3-dr:
cluster2::>

34. Display the status of the SnapMirror relationships.

cluster2::> snapmirror show
                                                                       Progress
Source            Destination  Mirror        Relationship  Total         Last
Path        Type  Path         State         Status        Progress Healthy Updated
----------- ----  ------------ ------------- ------------- -------- ------- --------
svm1:svm1_vol01
            XDP   svm1-dr:svm1_svm1_vol01_mirror1
                               Snapmirrored  Idle          -        true    -
svm3:       DP    svm3-dr:     Broken-off    Idle          -        true    -
2 entries were displayed.

cluster2::>

The relationship for svm3-dr is broken-off.


35. Examine the status of the svm3-dr SVM.
cluster2::> vserver show
                                   Admin      Operational Root
Vserver     Type    Subtype        State      State       Volume     Aggregate
----------- ------- -------------- ---------- ----------- ---------- ----------
cluster2    admin   -              -          -           -          -
cluster2-01 node    -              -          -           -          -
svm1-dr     data    default        running    running     svm1dr_    aggr1_
                                                          root       cluster2_
                                                                     01
svm3-dr     data    default        running    stopped     svm3_root  aggr1_
                                                                     cluster2_
                                                                     01
4 entries were displayed.

cluster2::>
It is administratively running but operationally stopped, as it should be since you have not cut over yet.
36. Examine the status of svm3-dr's LIFs.

cluster2::> net int show -vserver svm3-dr
  (network interface show)
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
svm3-dr
            svm3_cifs_nfs_lif1
                       up/down    192.168.0.143/24   cluster2-01   e0e     true
            svm3_cifs_nfs_lif2
                       up/down    192.168.0.144/24   cluster2-01   e0c     true
2 entries were displayed.

cluster2::>

The LIFs are configured but down.


37. On cluster1, display the status of the SVMs.

cluster1::> vserver show
                                   Admin      Operational Root
Vserver     Type    Subtype        State      State       Volume     Aggregate
----------- ------- -------------- ---------- ----------- ---------- ----------
cluster1    admin   -              -          -           -          -
cluster1-01 node    -              -          -           -          -
cluster1-02 node    -              -          -           -          -
svm1        data    default        running    running     svm1_root  aggr1_
                                                                     cluster1_
                                                                     01
svm2        data    default        running    running     svm2_root  aggr1_
                                                                     cluster1_
                                                                     02
svm3        data    default        running    running     svm3_root  aggr1_
                                                                     cluster1_
                                                                     01
6 entries were displayed.

cluster1::>

Svm3 is both administratively and operationally running.


38. Check the status of svm3's LIFs.

cluster1::> net int show -vserver svm3
  (network interface show)
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
svm3
            svm3_cifs_nfs_lif1
                       up/up      192.168.0.143/24   cluster1-01   e0d     true
            svm3_cifs_nfs_lif2
                       up/up      192.168.0.144/24   cluster1-01   e0e     true
2 entries were displayed.

cluster1::>

The LIFs are both up. If you compare the IP addresses on these LIFs with the ones you saw a couple of
steps back for svm3-dr, you'll see that they are the same. This is because you specified the
-identity-preserve true option when you established the SVM disaster recovery relationship at the
beginning of this exercise.
39. Stop svm3.
cluster1::> vserver stop -vserver svm3
[Job 1033] Job is queued: Vserver Stop.
[Job 1033] Job succeeded: DONE
cluster1::>

40. View svm3's status.

cluster1::> vserver show
                                   Admin      Operational Root
Vserver     Type    Subtype        State      State       Volume     Aggregate
----------- ------- -------------- ---------- ----------- ---------- ----------
cluster1    admin   -              -          -           -          -
cluster1-01 node    -              -          -           -          -
cluster1-02 node    -              -          -           -          -
svm1        data    default        running    running     svm1_root  aggr1_
                                                                     cluster1_
                                                                     01
svm2        data    default        running    running     svm2_root  aggr1_
                                                                     cluster1_
                                                                     02
svm3        data    default        stopped    stopped     svm3_root  aggr1_
                                                                     cluster1_
                                                                     01
6 entries were displayed.

cluster1::>

The SVM is both administratively and operationally stopped.


41. Examine svm3's LIFs.

cluster1::> net int show -vserver svm3
  (network interface show)
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
svm3
            svm3_cifs_nfs_lif1
                       up/down    192.168.0.143/24   cluster1-01   e0d     true
            svm3_cifs_nfs_lif2
                       up/down    192.168.0.144/24   cluster1-01   e0e     true
2 entries were displayed.

cluster1::>

The LIFs are also down.


42. On cluster2, view the status of svm3-dr.

cluster2::> vserver show
                                   Admin      Operational Root
Vserver     Type    Subtype        State      State       Volume     Aggregate
----------- ------- -------------- ---------- ----------- ---------- ----------
cluster2    admin   -              -          -           -          -
cluster2-01 node    -              -          -           -          -
svm1-dr     data    default        running    running     svm1dr_    aggr1_
                                                          root       cluster2_
                                                                     01
svm3-dr     data    default        running    stopped     svm3_root  aggr1_
                                                                     cluster2_
                                                                     01
4 entries were displayed.

cluster2::>

It's still administratively running but is operationally down.


43. Start svm3-dr.
cluster2::> vserver start -vserver svm3-dr
[Job 326] Job is queued: Vserver Start.
[Job 326] Job succeeded: DONE
cluster2::>

44. View svm3-dr's status again.

cluster2::> vserver show
                                   Admin      Operational Root
Vserver     Type    Subtype        State      State       Volume     Aggregate
----------- ------- -------------- ---------- ----------- ---------- ----------
cluster2    admin   -              -          -           -          -
cluster2-01 node    -              -          -           -          -
svm1-dr     data    default        running    running     svm1dr_    aggr1_
                                                          root       cluster2_
                                                                     01
svm3-dr     data    default        running    running     svm3_root  aggr1_
                                                                     cluster2_
                                                                     01
4 entries were displayed.

cluster2::>

It is now administratively and operationally running.


45. Examine the status of svm3-dr's LIFs.

cluster2::> net int show -vserver svm3-dr
  (network interface show)
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
svm3-dr
            svm3_cifs_nfs_lif1
                       up/up      192.168.0.143/24   cluster2-01   e0e     true
            svm3_cifs_nfs_lif2
                       up/up      192.168.0.144/24   cluster2-01   e0c     true
2 entries were displayed.

cluster2::>

Both LIFs are up and operational.


46. Examine svm3-dr's volumes.

cluster2::> vol show -vserver svm3-dr
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
svm3-dr   chn          aggr1_cluster2_01
                                    online     RW          1GB    972.5MB    5%
svm3-dr   eng          aggr1_cluster2_01
                                    online     RW          1GB    972.5MB    5%
svm3-dr   fin          aggr1_cluster2_01
                                    online     RW          1GB    972.5MB    5%
svm3-dr   mfg          aggr1_cluster2_01
                                    online     RW          1GB    972.5MB    5%
svm3-dr   prodA        aggr1_cluster2_01
                                    online     RW          1GB    972.5MB    5%
svm3-dr   proj1        aggr1_cluster2_01
                                    online     RW          1GB    972.5MB    5%
svm3-dr   svm3_root    aggr1_cluster2_01
                                    online     RW         20MB    18.85MB    5%
svm3-dr   us           aggr1_cluster2_01
                                    online     RW          1GB    972.5MB    5%
8 entries were displayed.

cluster2::>

The volumes are all present and writable.


47. On rhel1, mount all /etc/fstab entries that are not currently mounted. Alternatively, you can use the
command mount svm3:/ /corp to manually mount /corp.
[root@rhel1 ~]# mount -a
[root@rhel1 ~]#

48. View the details of the mount.

[root@rhel1 ~]# df /corp
Filesystem     1K-blocks  Used Available Use% Mounted on
svm3:/             19456   128     19328   1% /corp
[root@rhel1 ~]#

49. List the contents of the /corp directory.


[root@rhel1 ~]# ls /corp
eng fin mfg
[root@rhel1 ~]#

50. Change directory to /corp/mfg/chn.


[root@rhel1 ~]# cd /corp/mfg/chn
[root@rhel1 ~]#

51. List the directory contents.


[root@rhel1 ~]# ls
[root@rhel1 ~]#

The directory is empty.


52. On cluster2, create a new volume named prodB and junction it into the namespace at /mfg/chn/prodB.
cluster2::> volume create -vserver svm3-dr -volume prodB -aggregate aggr1_cluster2_01
-space-guarantee volume -policy default -junction-path /mfg/chn/prodB
[Job 368] Job is queued: Create prodB.
[Job 368] Job succeeded: Successful
cluster2::>

53. On rhel1, list the directory's contents again.


[root@rhel1 ~]# ls
prodB
[root@rhel1 ~]#

54. cd to the prodB folder.


[root@rhel1 ~]# cd prodB
[root@rhel1 ~]#

55. Create a new file named file1.txt.


[root@rhel1 ~]# touch file1.txt
[root@rhel1 ~]#

56. List the directory contents.


[root@rhel1 ~]# ls
file1.txt
[root@rhel1 ~]#

57. On cluster1, display the status of the SnapMirror relationships.


cluster1::> snapmirror show
                                                                       Progress
Source            Destination  Mirror        Relationship  Total         Last
Path        Type  Path         State         Status        Progress Healthy Updated
----------- ----  ------------ ------------- ------------- -------- ------- --------
cluster1://svm1/svm1_root
            LS    cluster1://svm1/svm1_root_lsm1
                               Snapmirrored  Idle          -        true    -
                  cluster1://svm1/svm1_root_lsm2
                               Snapmirrored  Idle          -        true    -
2 entries were displayed.

cluster1::>

There is currently no relationship from svm3-dr to svm3.


58. Create a SnapMirror SVM disaster recovery relationship from the source SVM svm3-dr to the
destination SVM svm3.
cluster1::> snapmirror create -source-path svm3-dr: -destination-path svm3: -type DP
-throttle unlimited -identity-preserve true -schedule hourly
cluster1::>

59. Display the status of the SnapMirror relationships again.

cluster1::> snapmirror show
                                                                       Progress
Source            Destination  Mirror        Relationship  Total         Last
Path        Type  Path         State         Status        Progress Healthy Updated
----------- ----  ------------ ------------- ------------- -------- ------- --------
svm3-dr:    DP    svm3:        Broken-off    Idle          -        true    -
cluster1://svm1/svm1_root
            LS    cluster1://svm1/svm1_root_lsm1
                               Snapmirrored  Idle          -        true    -
                  cluster1://svm1/svm1_root_lsm2
                               Snapmirrored  Idle          -        true    -
3 entries were displayed.

cluster1::>

SnapMirror creates the relationship. Since there is an existing relationship for the two SVMs from when
it was going the other direction before it was broken off, the Mirror State shows as Broken-off here.
60. Re-sync the relationship.
cluster1::> snapmirror resync -destination-path svm3:
cluster1::>

61. Display the status of the SnapMirror relationships again.

cluster1::> snapmirror show
                                                                       Progress
Source            Destination  Mirror        Relationship  Total         Last
Path        Type  Path         State         Status        Progress Healthy Updated
----------- ----  ------------ ------------- ------------- -------- ------- --------
svm3-dr:    DP    svm3:        Broken-off    Transferring  -        true    -
cluster1://svm1/svm1_root
            LS    cluster1://svm1/svm1_root_lsm1
                               Snapmirrored  Idle          -        true    -
                  cluster1://svm1/svm1_root_lsm2
                               Snapmirrored  Idle          -        true    -
3 entries were displayed.

cluster1::>

62. Periodically display the status of the constituent relationships until they all show Idle.

cluster1::> snapmirror show -expand
                                                                       Progress
Source            Destination  Mirror        Relationship  Total         Last
Path        Type  Path         State         Status        Progress Healthy Updated
----------- ----  ------------ ------------- ------------- -------- ------- --------
svm3-dr:      DP  svm3:        Broken-off    Transferring  2.47MB   true    -
svm3-dr:chn   DP  svm3:chn     Snapmirrored  Idle          -        true    -
svm3-dr:eng   DP  svm3:eng     Snapmirrored  Idle          -        true    -
svm3-dr:fin   DP  svm3:fin     Snapmirrored  Idle          -        true    -
svm3-dr:mfg   DP  svm3:mfg     Snapmirrored  Idle          -        true    -
svm3-dr:prodA DP  svm3:prodA   Snapmirrored  Idle          -        true    -
svm3-dr:prodB DP  svm3:prodB   Snapmirrored  Idle          -        true    -
svm3-dr:proj1 DP  svm3:proj1   Snapmirrored  Idle          -        true    -
svm3-dr:us    DP  svm3:us      Snapmirrored  Idle          -        true    -
cluster1://svm1/svm1_root
              LS  cluster1://svm1/svm1_root_lsm1
                               Snapmirrored  Idle          -        true    -
                  cluster1://svm1/svm1_root_lsm2
                               Snapmirrored  Idle          -        true    -
10 entries were displayed.

cluster1::>

If you pay attention to the status of the relationship for the prodB volume while running these
commands (and if you are fast enough), you'll see it go from Uninitialized to Transferring to Idle,
while the other relationships go from Broken-off to Resyncing to Idle.
63. View the status of the parent relationship.

cluster1::> snapmirror show
                                                                       Progress
Source            Destination  Mirror        Relationship  Total         Last
Path        Type  Path         State         Status        Progress Healthy Updated
----------- ----  ------------ ------------- ------------- -------- ------- --------
svm3-dr:    DP    svm3:        Snapmirrored  Idle          -        true    -
cluster1://svm1/svm1_root
            LS    cluster1://svm1/svm1_root_lsm1
                               Snapmirrored  Idle          -        true    -
                  cluster1://svm1/svm1_root_lsm2
                               Snapmirrored  Idle          -        true    -
3 entries were displayed.

cluster1::>

Now start the procedure to cut over from svm3-dr back to svm3.


64. Quiesce the SnapMirror relationship.
cluster1::> snapmirror quiesce -destination-path svm3:
cluster1::>

65. Verify the relationship is quiesced.

cluster1::> snapmirror show
                                                                       Progress
Source            Destination  Mirror        Relationship  Total         Last
Path        Type  Path         State         Status        Progress Healthy Updated
----------- ----  ------------ ------------- ------------- -------- ------- --------
svm3-dr:    DP    svm3:        Snapmirrored  Quiesced      -        true    -
cluster1://svm1/svm1_root
            LS    cluster1://svm1/svm1_root_lsm1
                               Snapmirrored  Idle          -        true    -
                  cluster1://svm1/svm1_root_lsm2
                               Snapmirrored  Idle          -        true    -
3 entries were displayed.

cluster1::>

66. Break the SnapMirror relationship.

cluster1::> snapmirror break -destination-path svm3:
cluster1::>

67. On cluster2, display the status of the svm3-dr SVM.

cluster2::> vserver show
                                   Admin      Operational Root
Vserver     Type    Subtype        State      State       Volume     Aggregate
----------- ------- -------------- ---------- ----------- ---------- ----------
cluster2    admin   -              -          -           -          -
cluster2-01 node    -              -          -           -          -
svm1-dr     data    default        running    running     svm1dr_    aggr1_
                                                          root       cluster2_
                                                                     01
svm3-dr     data    default        running    running     svm3_root  aggr1_
                                                                     cluster2_
                                                                     01
4 entries were displayed.

cluster2::>

68. Stop the svm3-dr SVM.


cluster2::> vserver stop -vserver svm3-dr
[Job 328] Job is queued: Vserver Stop.
[Job 328] Job succeeded: DONE
cluster2::>

69. Display the SVM's status again.

cluster2::> vserver show
                                   Admin      Operational Root
Vserver     Type    Subtype        State      State       Volume     Aggregate
----------- ------- -------------- ---------- ----------- ---------- ----------
cluster2    admin   -              -          -           -          -
cluster2-01 node    -              -          -           -          -
svm1-dr     data    default        running    running     svm1dr_    aggr1_
                                                          root       cluster2_
                                                                     01
svm3-dr     data    default        stopped    stopped     svm3_root  aggr1_
                                                                     cluster2_
                                                                     01
4 entries were displayed.

cluster2::>

70. Display the status of svm3-dr's LIFs.

cluster2::> net int show -vserver svm3-dr
  (network interface show)
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
svm3-dr
            svm3_cifs_nfs_lif1
                       up/down    192.168.0.143/24   cluster2-01   e0e     true
            svm3_cifs_nfs_lif2
                       up/down    192.168.0.144/24   cluster2-01   e0c     true
2 entries were displayed.

cluster2::>

The LIFs are down, as you would expect.


71. On cluster1, start the svm3 SVM.
cluster1::> vserver start -vserver svm3
[Job 1037] Job is queued: Vserver Start.
[Job 1037] Job succeeded: DONE
cluster1::>


72. Display the svm3 SVM's status.

cluster1::> vserver show
                                   Admin      Operational Root
Vserver     Type    Subtype        State      State       Volume     Aggregate
----------- ------- -------------- ---------- ----------- ---------- ----------
cluster1    admin   -              -          -           -          -
cluster1-01 node    -              -          -           -          -
cluster1-02 node    -              -          -           -          -
svm1        data    default        running    running     svm1_root  aggr1_
                                                                     cluster1_
                                                                     01
svm2        data    default        running    running     svm2_root  aggr1_
                                                                     cluster1_
                                                                     02
svm3        data    default        running    running     svm3_root  aggr1_
                                                                     cluster1_
                                                                     01
6 entries were displayed.

cluster1::>

Svm3 is up and running.


73. Display the status of svm3's LIFs.

cluster1::> net int show -vserver svm3
  (network interface show)
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
svm3
            svm3_cifs_nfs_lif1
                       up/up      192.168.0.143/24   cluster1-01   e0d     true
            svm3_cifs_nfs_lif2
                       up/up      192.168.0.144/24   cluster1-01   e0e     true
2 entries were displayed.

cluster1::>

Svm3's LIFs are both running and operational.


74. Examine svm3's volumes.

cluster1::> vol show -vserver svm3
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
svm3      chn          aggr1_cluster1_01
                                    online     RW          1GB    972.5MB    5%
svm3      eng          aggr1_cluster1_01
                                    online     RW          1GB    972.5MB    5%
svm3      fin          aggr1_cluster1_01
                                    online     RW          1GB    972.5MB    5%
svm3      mfg          aggr1_cluster1_01
                                    online     RW          1GB    972.5MB    5%
svm3      prodA        aggr1_cluster1_01
                                    online     RW          1GB    972.5MB    5%
svm3      prodB        aggr1_cluster1_01
                                    online     RW         20MB    972.5MB    5%
svm3      proj1        aggr1_cluster1_01
                                    online     RW          1GB    972.5MB    5%
svm3      svm3_root    aggr1_cluster1_01
                                    online     RW         20MB    18.84MB    5%
svm3      us           aggr1_cluster1_01
                                    online     RW          1GB    972.5MB    5%
9 entries were displayed.

cluster1::>

75. On rhel1, list the status of the /corp mount.


[root@rhel1 prodB]# df /corp
df: `/corp': Stale file handle
df: no filesystems processed


[root@rhel1 prodB]#

The file handle is stale because you did not unmount the NFS filesystem prior to the latest SVM DR cutover.
76. Change out of the /corp directory tree so that you can unmount the NFS volume.


[root@rhel1 prodB]# cd
[root@rhel1 ~]#

77. Unmount /corp.


[root@rhel1 ~]# umount /corp
[root@rhel1 ~]#
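Tip: The unmount succeeded here because you changed out of the directory first. If an unmount ever fails because a process is still using the mount point, you can identify the process and, if necessary, fall back to a lazy unmount. These are standard Linux commands, not specific to this lab:

[root@rhel1 ~]# fuser -vm /corp     # list processes holding the mount
[root@rhel1 ~]# umount -l /corp     # detach now; clean up once no longer busy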

78. Mount /corp again.


[root@rhel1 ~]# mount -a
[root@rhel1 ~]#

79. List the contents of /corp again.


[root@rhel1 ~]# ls /corp
eng fin mfg
[root@rhel1 ~]#

80. List the contents of the /corp/mfg/chn/prodB directory to see whether the file you created on svm3-dr
before the last resync and cutover is present.
[root@rhel1 ~]# ls /corp/mfg/chn/prodB
file1.txt
[root@rhel1 ~]#

Yes, the file is there. Note that no extra work was required to replicate back the configuration changes
that were made on svm3-dr while it was active: the volume you created and mounted there came back
to svm3 automatically.
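If you want to verify this from the cluster shell as well, you can confirm that the prodB volume now exists on svm3 and is junctioned into the namespace (a sketch; output not shown here):

cluster1::> vol show -vserver svm3 -volume prodB -fields junction-path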
81. On cluster2, resync the SnapMirror relationship.
cluster2::> snapmirror resync -destination-path svm3-dr:
cluster2::>

82. Periodically check the status of the SnapMirror relationship until it goes Idle.
cluster2::> snapmirror show
                                                                      Progress
Source            Destination  Mirror        Relationship Total         Last
Path        Type  Path         State         Status       Progress Healthy Updated
----------- ----  ------------ ------------- ------------ -------- ------- -------
svm1:svm1_vol01
            XDP   svm1-dr:svm1_svm1_vol01_mirror1
                               Snapmirrored  Idle         -        true    -
svm3:       DP    svm3-dr:     Snapmirrored  Idle         -        true    -
2 entries were displayed.
cluster2::>

At this point the SVM disaster recovery relationship is back to the state it was in before you initiated any
cutover operations.
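To recap, the failback you just performed used the following command sequence, mirroring the original cutover with the roles of the two SVMs reversed:

cluster1::> snapmirror break -destination-path svm3:
cluster2::> vserver stop -vserver svm3-dr
cluster1::> vserver start -vserver svm3
cluster2::> snapmirror resync -destination-path svm3-dr: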
This concludes this lab exercise.

3.8 Appendix: Additional Administrative Users and Roles


Clustered Data ONTAP supports administrative users with roles. Each user is associated with a role that
defines the commands the user can run when administering the cluster. Clustered Data ONTAP provides a
number of predefined roles, and you can also create your own customized roles if required.
In System Manager, roles and users are grouped separately under the cluster and the SVM. In the CLI, you
manage roles and users for both scopes with the same commands.
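For reference, the CLI equivalents of the System Manager screens in this appendix are the security login commands. A short sketch follows; the helpdesk role and account names are illustrative only and are not part of this lab:

cluster1::> security login role show
cluster1::> security login show
cluster1::> security login role create -role helpdesk -cmddirname "volume snapshot" -access all
cluster1::> security login create -username helpdesk1 -application ssh -authmethod password -role helpdesk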

3.8.1 Cluster-Scoped Users and Roles


In this section, you will look at the users and roles that apply to the whole cluster.
1. In your Chrome browser, click the browser tab for cluster1.
2. In the left pane, click the Cluster tab.
3. In the left pane, navigate to cluster1 > Configuration > Security > Roles.
4. The Roles pane shows a list of the predefined cluster-wide roles that come with clustered Data ONTAP.


Figure 3-45:
Next, take a look at the cluster-wide users.
5. In the left pane, select Users.
6. In the Users pane, click Add.



Figure 3-46:
The Add User dialog box opens. Use this dialog box to create a new limited-permission administrative
user for the cluster.
7. Set the user name to intern, and the password to netapp123.
8. Click Add next to the User Login Methods pane.
9. Set the Application drop-down list to ssh, and the Role drop-down list to readonly.
10. Click OK.



Figure 3-47:
The new user login method you just entered is displayed in the User Login Methods list.
11. Click Add at the bottom of the dialog box.



Figure 3-48:
The Add User dialog box closes and you return to the System Manager window.
12. If Chrome prompts you to save the password for this site, click Nope.


Figure 3-49:
13. The newly created intern account is now included in the list of accounts displayed in the Users pane.



Figure 3-50:
14. Start a new PuTTY session to cluster1, and log in as the user intern with the password netapp123.
Try listing the available commands. Observe that commands such as volume create and volume move
are not available to you, because the readonly role you assigned to the intern account prevents access
to commands that modify the cluster configuration.
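A sketch of what such a session might look like (the exact error text can vary by release):

login as: intern
Using keyboard-interactive authentication.
Password:
cluster1::> volume show
  ...read-only commands such as show work normally...
cluster1::> volume create
Error: "create" is not a recognized command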

3.8.2 SVM Users and Roles


In this section, you will look at the users and roles that are local to a single SVM.
1. In your Chrome browser, click the browser tab for cluster1.
2. In the left pane, click the Storage Virtual Machines tab.
3. In the left pane, navigate to cluster1 > svm1 > Configuration > Security > Roles.
4. The Roles pane now shows a list of predefined SVM-specific roles. In the Roles pane, select the
vsadmin-backup entry.
5. Click Edit.



Figure 3-51:
The Edit Role dialog box opens.
6. Scroll down the Role Attributes list to see the commands that are available to a user with this role. Note
that this role has full access to some commands, read-only access to others, and no access to the rest.
7. Click Cancel to discard any changes you might have made in this dialog box.


Figure 3-52:
The Edit Role dialog box closes and focus returns to the System Manager window. Take a look at the
other roles for this SVM and observe how their permissions differ.
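If you prefer the CLI, the same SVM-scoped roles are visible there as well; for example, when logged in as the cluster administrator:

cluster1::> security login role show -vserver svm1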
8. In the left pane, select Users.
9. In the Users pane, select the vsadmin user.
10. If you look at the User Login Methods area at the bottom of the Users pane, you can see that the
vsadmin user has the vsadmin role.



Figure 3-53:
11. Open a PuTTY session and connect to cluster1. Try to log in to cluster1 as vsadmin with the password
Netapp1!.
login as: vsadmin
Using keyboard-interactive authentication.
Password:
Access denied
vsadmin@cluster1's password:

Remember that the user vsadmin is specifically for administering the SVM svm1. To manage an SVM
with delegated SVM-scoped administration, you must log in to the management LIF for the SVM; in this
case, svm1.
Identify the management LIF for the svm1 SVM.
12. In Chrome, select the browser tab for cluster1.
13. Select the Cluster tab.
14. Navigate to cluster1 > svm1 > Configuration.
15. In the Network pane, select the Network Interfaces tab.
16. In the network interface list, select the entry for svm1_admin_lif1 and observe its assigned IP address.


Figure 3-54:
Tip: Alternatively, use the cluster management CLI and type network interface show when
logged in as the cluster administrator to obtain this IP address.
17. On this system, the management LIF for svm1 is named svm1-mgmt, with the IP address
192.168.0.147. There is also a connection entry in PuTTY named cluster1-svm1. Using the cluster1-svm1
connection entry in PuTTY, the vsadmin user, and the password Netapp1!, connect to svm1 over SSH.
login as: vsadmin
Using keyboard-interactive authentication.
Password:
svm1::>

18. As the vsadmin user, attempt to modify a network port or create a new aggregate by using the network
port modify command and the storage aggregate create command.
svm1::> network port modify
Error: "port" is not a recognized command
svm1::> storage aggregate create
Error: "storage" is not a recognized command

These commands are not available to you as the vsadmin user, because control of logical entities
inside svm1 is delegated to vsadmin, while network ports and storage aggregates are physical entities
controlled by the cluster administrator.
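By contrast, commands that operate on logical entities inside the SVM work as expected for the vsadmin user. For example (a sketch; output omitted):

svm1::> volume show
svm1::> network interface show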
19. As the vsadmin user, run the volume create -aggregate ? command.
svm1::> volume create -aggregate ?
  <aggregate name>    Aggregate Name
svm1::>


Attention: You can create new volumes as the vsadmin user, but only on specific aggregates.
The reason is that when the svm1 SVM was set up, the cluster administrator configured svm1 to
allow volume creation on those aggregates. To view the list, run the vserver show -vserver svm1
-fields aggr-list command.
cluster1::> vserver show -vserver svm1 -fields aggr-list
vserver aggr-list
------- ----------------------------------------------------------------------
svm1    aggr1_cluster1_01,aggr1_cluster1_02,aggr2_cluster1_01,aggr2_cluster1_02
cluster1::>
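If the cluster administrator later wants to grant svm1 access to an additional aggregate, the vserver add-aggregates command does this. A sketch; the aggregate name below is illustrative only:

cluster1::> vserver add-aggregates -vserver svm1 -aggregates aggr3_cluster1_01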

20. As the vsadmin user, run the network interface modify command.
svm1::> network interface modify
Error: "modify" is not a recognized command

Attention: You cannot modify network interfaces as the vsadmin user. The vsadmin user
has the vsadmin role, which provides read-only access to the network interface command
directory.

3.9 Appendix: Active Directory Authentication Tunneling


To authorize cluster administrators by using Active Directory, you must set up an authentication tunnel through
a CIFS-enabled SVM. You must also create one or more cluster user accounts for the domain users. This
functionality requires that CIFS is licensed on the cluster.
This lab environment already has a CIFS-enabled SVM, which is svm1. Use svm1 to set up the authentication
tunnel.
Before you begin, verify your lab configuration.
1. Verify that no domain authentication tunnel currently exists.
cluster1::> security login domain-tunnel show
This table is currently empty.

2. After you verify that a domain authentication tunnel does not exist, verify that the CIFS-enabled SVM
(svm1) is a member of the appropriate domain, DEMO.NETAPP.COM.
cluster1::> vserver cifs show -vserver svm1

                                          Vserver: svm1
                         CIFS Server NetBIOS Name: SVM1
                    NetBIOS Domain/Workgroup Name: DEMO
                      Fully Qualified Domain Name: DEMO.NETAPP.COM
Default Site Used by LIFs Without Site Membership:
                             Authentication Style: domain
                CIFS Server Administrative Status: up
                          CIFS Server Description: -
                          List of NetBIOS Aliases: -
cluster1::>

3. After you verify that the CIFS-enabled SVM svm1 is a member of the appropriate domain, set up a
domain authentication tunnel.
cluster1::> security login domain-tunnel create -vserver svm1
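You can confirm that the tunnel was created by repeating the show command; the output should now resemble the following:

cluster1::> security login domain-tunnel show
Tunnel Vserver: svm1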

4. With the authentication tunnel configured, a new authentication method, domain, is available to you.
Use this method to create a new cluster administrator account for a domain user.
cluster1::> security login create -authmethod domain -username DEMO\Administrator
-application ssh


5. You can now log in to the cluster as a domain administrator, using the DOMAIN\username syntax. Open
a new PuTTY session as described in the Before You Begin section. When prompted for a user name
and password, enter DEMO\Administrator as the user name and Netapp1! as the password.
login as: DEMO\Administrator
Using keyboard-interactive authentication.
Password:
cluster1::>

3.10 Automated Nondisruptive Upgrades


Clustered Data ONTAP 8.3 adds support for automated nondisruptive software upgrades. The cluster image
commands bring the clustered Data ONTAP software package into the cluster, validate that the cluster is
prepared for the upgrade, and then perform the upgrade itself. Underneath, downloads, takeovers, and
givebacks are still performed, but the cluster infrastructure drives the process. The administrator can view
the progress; pause, resume, or cancel an upgrade; and view the cluster update history. Go to
http://support.netapp.com/ to obtain the clustered Data ONTAP package.
Automated nondisruptive upgrades are available to update clustered Data ONTAP 8.3 to clustered Data ONTAP
8.3.x. The code to run the automated upgrades is in clustered Data ONTAP 8.3, so a traditional approach is
required to get from version 8.2 to version 8.3.
This lab examines the commands used to upgrade the cluster but does not execute them. The examples
that follow are shown in the cluster1 CLI.
The cluster image command directory contains the commands and subdirectories used to perform automated
nondisruptive upgrades. Examine the options that are available under this command directory.
cluster1::> cluster image
cluster1::cluster image> ?
  cancel-update               Cancel an update
  package>                    Manage the cluster image package repository
  pause-update                Pause an update
  resume-update               Resume an update
  show                        Display currently running image information
  show-update-history         Display the update history
  show-update-log             Display the update transaction log
  show-update-progress        Display the update progress
  update                      Manage an update
  validate                    Validates the cluster's update eligibility
cluster1::cluster image>

The cluster image package command directory contains the commands used to manage the software packages
that contain future versions of clustered Data ONTAP. Examine the options that are available under this directory.
cluster1::cluster image> package
cluster1::cluster image package> ?
  delete                      Remove a package from the cluster image package
                              repository
  get                         Fetch a package file from a URL into the cluster
                              image package repository
  show                        Display currently installed image information
  show-repository             Display information about packages available in
                              the cluster image package repository
cluster1::cluster image package>

Use the cluster image update command to upgrade a cluster once a new package has been added to the cluster
package repository. Enter the cluster image command directory, and examine the parameters that are available
with the cluster image update command.
cluster1::cluster image package> ..
cluster1::cluster image> update ?
  [-version] <text>                            Update Version
  [[-nodes] <nodename>, ...]                   Node
  [ -estimate-only [true] ]                    Estimate Only
  [ -pause-after {none|all} ]                  Update Pause (default: none)
  [ -ignore-validation-warning {true|false} ]  Ignore Validation (default: false)
  [ -skip-confirmation {true|false} ]          Skip Confirmation (default: false)
  [ -force-rolling [true] ]                    Force Rolling Update
  [ -stabilize-minutes {1..60} ]               Minutes to stabilize (default: 8)

cluster1::cluster image>
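Putting the pieces together, a typical automated nondisruptive upgrade might look like the following sketch. The URL and target version here are illustrative only; this lab does not execute these commands:

cluster1::> cluster image package get -url http://webserver.example.com/image.tgz
cluster1::> cluster image validate -version 8.3.1P1
cluster1::> cluster image update -version 8.3.1P1
cluster1::> cluster image show-update-progress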


4 Version History

Version   Date            Document Version History
--------- --------------- --------------------------
1.0       October 2014    Insight 2014
1.0.1     December 2014   Updates for Lab on Demand
1.1       October 2015    Insight 2015

Refer to the Interoperability Matrix Tool (IMT) on the NetApp Support site to validate that the exact
product and feature versions described in this document are supported for your specific environment.
The NetApp IMT defines product components and versions that can be used to construct configurations
that are supported by NetApp. Specific results depend on each customer's installation in accordance
with published specifications.

NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any
information or recommendations provided in this publication, or with respect to any results that may be obtained
by the use of the information or observance of any recommendations provided herein. The information in this
document is distributed AS IS, and the use of this information or the implementation of any recommendations or
techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate
them into the customer's operational environment. This document and the information contained herein may be
used solely in connection with the NetApp products discussed in this document.

Go further, faster

2015 NetApp, Inc. All rights reserved. No portions of this presentation may be reproduced without prior written
consent of NetApp, Inc. Specifications are subject to change without notice. NetApp and the NetApp logo are
registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are
trademarks or registered trademarks of their respective holders and should be treated as such.
