Adding A Second Controller To Create An HA Pair
A controller module must already be installed, configured, and operating in clustered Data ONTAP 8.2 or later.
This controller module is referred to as the existing controller module; the examples in this procedure have the console
prompt existing_ctlr>.
The controller module that is being added is referred to as the new controller module; the examples in this procedure have
the console prompt new_ctlr>.
The new controller module must be received from NetApp as part of the upgrade kit.
This procedure does not apply to moving a controller module from a preexisting system or a system that was previously in
an HA pair. However, if you populate the new controller module with PCIe cards from existing inventory at the customer
site, check that they are compatible and supported by the new controller module.
Your system must have an empty slot available for the new controller module when upgrading to a single-chassis HA pair (an
HA pair in which both controller modules reside in the same chassis).
Note: This configuration is not supported on all systems.
You must have rack space and cables for the new controller module when upgrading to a dual-chassis HA pair (an HA pair
in which the controller modules reside in separate chassis).
Note: This configuration is not supported on all systems.
Each controller module must be connected to the management network through its e0a port or, if your system has one, the
e0M port (wrench port).
11. Netbooting and setting up Data ONTAP on the new controller module on page 14
12. Installing licenses for the new controller module on page 19
13. Verifying storage failover on both controller modules and setting cluster HA on page 19
14. Installing the firmware after adding a second controller module on page 20
15. Setting up Storage Encryption on the new controller module on page 20
16. Verifying the configuration with the Config Advisor on page 23
1. Ensure that your system has enough available disks or partitions for the new controller module.
You need to identify unassigned disks or spare disks with available partitions that you can assign to the new controller
module.
Two parity disks and one data disk, plus one spare, are required for root aggregate creation.
Note: You must set the auto_assign option to off on the existing controller module before adding any new disks.
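In clustered Data ONTAP, disk auto-assignment is typically controlled with the storage disk option command; the following sketch, which assumes the cluster and node names used in the example below, is illustrative:
clust01::> storage disk option modify -node clust01-01 -autoassign off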
If your system uses root-data partitioning, determine the system spare disks and available partition space:
a. Identify the unassigned disks:
storage disk show
Example
clust01::> storage disk show
                     Usable            Disk    Container   Container
Disk                   Size Shelf Bay  Type    Type        Name       Owner
---------- ---------- ----- --- ------ ------- ----------- ---------- ----------
1.0.0               -     0   0  SAS   unassigned  -          -
1.0.1         408.2GB     0   1  SAS   shared      -          -
1.0.2         408.2GB     0   2  SAS   shared      -          clust01-01
1.0.3         408.2GB     0   3  SAS   shared      -          clust01-01
1.0.4         408.2GB     0   4  SAS   shared      -          clust01-01
1.0.5         408.2GB     0   5  SAS   shared      -          clust01-01
1.0.6         408.2GB     0   6  SAS   shared      -          clust01-01
1.0.7         408.2GB     0   7  SAS   shared      -          clust01-01
...
b. Determine the spare partitions:
storage aggregate show-spare-disks
Example
clust01::> storage aggregate show-spare-disks
                                          Local   Local
                                           Data    Root Physical
Disk   Type  RPM  Checksum               Usable  Usable     Size Status
------ ----- ---- --------------     ---------- ------- -------- -------
2.3.9  BSAS  7200 advanced_zoned             0B 73.89GB  931.5GB zeroed
3.0.6  BSAS  7200 block                      0B      0B  828.0GB offline
3.5.1  BSAS  7200 block                  354.3GB 413.8GB 828.0GB zeroed
...
If the system has 12 internal disks, five partitions are required for root aggregate creation and a sixth is required as a
spare.
If the system has 24 internal disks, 10 partitions are required for root aggregate creation and two disks are required as
spares.
If the system is an All Flash FAS (AFF) system, 22 partitions are required for the root aggregate creation and two
disks are required for spares.
2. If there are not enough spares for the new controller module on a system with root-data partitioning, complete the
following substeps:
a. Determine where the aggregates for the existing node are located by entering the following command:
storage aggregate show
b. Identify disk partitions that are not part of existing aggregates, so that you can use them when assigning disks to the
new node.
c. Remove ownership on disks that do not have aggregates on them by entering the following command (see the example
after this step):
storage disk removeowner disk_name
d. Repeat the previous step for as many disks as you need for the new node.
Note: If you do not have enough free disks for your system, you need to add more storage. Contact technical support for
more information.
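The following sketch illustrates removing ownership from a disk; the disk name is hypothetical:
clust01::> storage disk removeowner 1.0.23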
3. Ensure that you have cables ready for the following connections:
Cluster connections
If you are creating a two-node switchless cluster, you need two cables to connect the controller modules. Otherwise, you
need a minimum of four cables, two for each controller module connection to the cluster network switch. Other systems
(like the FAS80xx series) have defaults of either four or six cluster connections.
4. Ensure that you have a serial port console available for the controller modules.
5. Review the Site Requirements Guide on the NetApp Support Site or the Hardware Universe and gather all IP addresses and
other network parameters for the new controller module.
You must enter the commands in the steps below in the nodeshell. For more information about the nodeshell, see the Clustered
Data ONTAP System Administration Guide for Cluster Administrators.
Steps
1. Enter the following command and note the key IDs on all disk drives that are using Storage Encryption:
disk encrypt show
Example
The output lists each drive that is using Storage Encryption together with its key ID and locked status; in this example, all
six drives report Locked? Yes.
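A hypothetical sketch of the output format; the disk names and the truncated key ID are illustrative, not taken from the
original example:
storage-system> disk encrypt show
Disk      Key ID          Locked?
0c.00.0   080CF0C80...    Yes
0c.00.1   080CF0C80...    Yes
0c.00.2   080CF0C80...    Yes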
2. Enter the following command and note all the necessary certificate files (client.pem, client_private.pem, and
ip_address_key_server_CA.pem) that have been installed:
keymgr list cert
Later in the procedure you need to install these same certificate files on the new partner controller module.
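A hypothetical sketch of the listing; the installation path shown is an assumption, and the exact output format may differ
by release:
storage-system> keymgr list cert
/etc/keymgr/cert/client.pem
/etc/keymgr/cert/client_private.pem
/etc/keymgr/cert/172.18.99.175_CA.pem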
3. Enter the following command to identify the IP address of the key servers:
key_manager show
All external key management servers associated with the storage system are listed. Later in the procedure you need to add
these same key servers on the new partner controller module.
Example
The following command displays all external key management servers associated with the storage system:
storage-system> key_manager show
172.18.99.175
4. Enter the following command and check that the key IDs listed match those shown by the disk encrypt show command
in step 1:
key_manager query
Example
The following command checks the status of all key management servers linked to the storage system and displays
additional information:
storage-system> key_manager query
Key server 172.18.99.175 reports 4 keys.
Key tag                Key ID
-------                ------
storage-system         080CF0C80...
storage-system         080CF0C80...
storage-system         080CF0C80...
storage-system         080CF0C80...
5. Back up all data on all aggregates using standard methods for your site.
6. Enter the following command to reset the authentication key on the drives using Storage Encryption to their Manufacturing
System ID (MSID):
disk encrypt rekey 0x0 *
7. Examine the CLI command output to ensure that there are no disk encrypt rekey failures.
For detailed information about port, LIF, and network configuration in clustered Data ONTAP, see the Clustered Data ONTAP
Network Management Guide on the NetApp Support Site.
Steps
3. How you proceed depends on which version of Data ONTAP your system is running:

If your system is running Data ONTAP 8.2.x, then for each port that you identified as a cluster port, use the network
port modify command to modify the role to cluster and set the MTU to the default value of 9000:
network port modify -node local -port port_name -role cluster -mtu 9000

If your system is running Data ONTAP 8.3 or later and the port roles are not set to Cluster, complete the following
substeps:
a. Remove the ports from the Default broadcast domain by entering the following command for each port:
network port broadcast-domain remove-ports -ipspace Default
-broadcast-domain Default -ports node_name:port_name
Note: Your broadcast domain name may be different than that shown in the example command above.
b. Add the port to the Cluster broadcast domain by entering the following command:
network port broadcast-domain add-ports -ipspace Cluster
-broadcast-domain Cluster -ports node_name:port_name
c. Modify the MTU of the Cluster broadcast domain by entering the following command:
network port broadcast-domain modify -broadcast-domain
Cluster -ipspace Cluster -mtu 9000
4. For each cluster port, use the network interface modify command to change the home port of any data LIFs on that
port to a data port.
Example
The following is an example command for changing a home port of a data LIF to a data port:
cluster1::> network interface modify -lif datalif1 -vserver vs1 -home-port e1b
5. For each LIF modified in the previous step, use the network interface revert command to revert it to its new home
port.
Example
The following is an example command for reverting the LIF datalif1 to its new home port of e1b:
cluster1::> network interface revert -lif datalif1 -vserver vs1
6. If your system is a switched cluster, use the network interface create command to create cluster LIFs on the cluster
ports.
Important: In a two-node switchless cluster system, cluster interface creation for the existing node is completed after
cluster setup is completed, via netboot, on the new controller module.
Example
The following is an example command for creating a cluster LIF on one of the node's cluster ports. The -auto parameter
configures the LIF to use a link-local IP address:
cluster1::> network interface create -vserver Cluster -lif clus1 -role cluster -home-node
node0 -home-port e1a -auto true
7. If you are creating a two-node switchless cluster, enable the switchless-cluster networking mode:
a. Issue the following command at either node's prompt to change to the advanced privilege level:
set -privilege advanced
You can respond y when prompted to continue into advanced mode. The advanced mode prompt appears (*>).
b. Enable switchless-cluster mode:
network options switchless-cluster modify -enabled true
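A sketch of verifying the setting while still at the advanced privilege level; the output wording is illustrative:
cluster1::*> network options switchless-cluster show
Enable Switchless Cluster: true
cluster1::*> set -privilege admin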
You must have an HTTP server that you can access with the system before and after adding the new controller module.
You must download the necessary system files for your platform and version of Data ONTAP from the NetApp Support
Site at mysupport.netapp.com.
Both controller modules in the HA pair must run the same version of Data ONTAP.
Steps
1. Download and extract the file used for performing the netboot of your system:
a. Download the appropriate netboot.tgz file for your platform from the NetApp Support Site to a web-accessible
directory.
b. Change to the web-accessible directory.
c. Extract the contents of the netboot.tgz file to the target directory by entering the following command:
tar -zxvf netboot.tgz
2. Download the image.tgz file from the NetApp Support Site to the web-accessible directory.
Your directory listing should contain the following file and directory:
image.tgz netboot/
1. Enter the following command for the existing node, specifying either ha or mcc as the mode:
storage failover modify -mode {ha|mcc} -node existing_node_name
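For example, the following sketch sets the ha mode on a hypothetical existing node named clust01-01:
clust01::> storage failover modify -mode ha -node clust01-01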
If you are prompted to continue the halt procedure, enter y when prompted and wait until the system stops at the LOADER
prompt.
Attention: You must perform a clean system shutdown before replacing system components to avoid losing unwritten data
in the NVMEM.
In a FAS22xx or a FAS25xx system, the NVMEM LED is marked with a battery symbol and is located on the
controller module to the left of the label showing the MAC address.
This LED blinks if there is unwritten data in the NVRAM. If this LED is flashing red after you enter the halt
command, reboot your system and try halting it again.
In a 32xx system, the NVMEM LED is located on the controller module to the right of the network ports, marked with
a battery symbol.
This LED blinks if there is unwritten data in the NVRAM. If this LED is flashing red after you enter the halt
command, reboot your system and try halting it again.
In a 62xx system, the NVRAM LED is located on the controller module to the right of the network ports, marked with a
battery symbol.
This LED blinks if there is unwritten data in the NVRAM. If this LED is flashing red after you enter the halt
command, reboot your system and try halting it again.
In a FAS80xx system, the NVRAM LED is located on the controller module to the right of the network ports, marked with a
battery symbol.
This LED blinks if there is unwritten data in the NVRAM. If this LED is flashing green after you enter the
halt command, reboot your system and try halting it again.
If your system uses AC power supplies, unplug the power cords from the power source, and then remove the power cords.
If your system uses DC power supplies, remove the power at the DC source, and then remove the DC wires, if necessary.
1. If you have an I/O expansion module (IOXM) in your system and are creating a single-chassis HA pair, you must
uncable and remove the IOXM.
You can then use the empty bay for the new controller module. However, the new configuration will not have the extra I/O
provided by the IOXM.
2. Physically install the new controller module and, if necessary, install additional fans:

If you are adding a controller module to an empty bay to create a single-chassis HA pair and the system belongs to the
31xx, 6210, or 6220 platforms, complete the following substeps:
a. Install three additional fans in the chassis to cool the new controller module:
i. Remove the bezel by using both hands to hold it by the openings on each side, and then pull the bezel away from the
chassis until it releases from the four ball studs on the chassis frame.
ii. Remove the blank plate that covers the bay that will contain the new fans.
iii. Install the fans as described in the Replacing a fan module document for your system on the NetApp Support Site
at mysupport.netapp.com.
b. Remove the blank plate in the rear of the chassis that covers the empty bay that will contain the new controller
module.

If you are adding a controller module to an empty bay to create a single-chassis HA pair and the system belongs to the
FAS22xx, FAS25xx, 32xx, FAS8020, FAS8040, or FAS8060 platforms, remove the blank plate in the rear of the chassis
that covers the empty bay that will contain the new controller module.

If you are adding a controller module to a separate chassis to create a dual-chassis HA pair (FAS32xx, FAS62xx, or
FAS8080 platforms), install the new controller module in the rack or system cabinet.
If you are creating a two-node switchless cluster, directly connect the cluster ports on the existing controller module to the
corresponding cluster ports on the new controller module.
If you are creating a switched cluster, connect the cluster ports on each controller to the ports on the cluster network
switches identified in substep b.
Bind the cables to the cable management device with the hook and loop strap.
Configuring and cabling CNA ports (FAS80xx, FAS2552, and FAS2554 systems only)
You must have the SFP+ modules for the CNA ports, or, if the ports are set to a 10-GbE personality, you can use Twinax cables.
Steps
1. If the new node is not already in Maintenance mode, boot to Maintenance mode by entering halt to go to the LOADER
prompt:
If you are running Data ONTAP 8.2 and earlier, enter boot_ontap, press Ctrl-C when prompted to go to the boot menu,
and then select Maintenance mode from the menu.
If you are running Data ONTAP 8.2.1 and later, enter boot_ontap maint at the loader prompt. If you are prompted to
continue with the boot, enter y to continue.
2. On the existing controller module console, check how the ports are currently configured:
system node hardware unified-connect show
Example
cluster1::> system node hardware unified-connect show
In this example, adapters 0e and 0f are in fc mode, adapter 0g is in cna mode, and all three adapters report status online
with no pending changes.
4. If the current SFP+ module does not match the desired use, replace it with the correct SFP+ module.
5. If the current configuration does not match the existing node's configuration, change the configuration as needed,
depending on whether the desired use is for an FC initiator, an FC target, or Ethernet; see the sketch after the note below.
Note: If you changed the port personality, it will take effect when the new node is booted. Make sure that you verify the
settings after the boot.
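A Maintenance mode sketch of typical personality changes; the adapter name 0e is illustrative, and you should confirm the
exact syntax for your Data ONTAP version:
*> ucadmin modify -m fc -t initiator 0e    (desired use is FC initiator)
*> ucadmin modify -m fc -t target 0e       (desired use is FC target)
*> ucadmin modify -m cna 0e                (desired use is Ethernet)
*> ucadmin show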
Verifying and setting the HA state of the controller module and chassis
You must verify the HA state of the chassis and controller modules, and, if necessary, update the state to indicate that the system
is in an HA pair or MetroCluster configuration. If your system is in a MetroCluster configuration, you must have Data ONTAP
8.3 installed. If you have a FAS20xx, 31xx or 60xx system, you can skip this task.
Steps
1. If you are not already in Maintenance mode, boot to Maintenance mode by entering halt to go to the LOADER prompt:
If you are running Data ONTAP 8.2 and earlier, enter boot_ontap, and press Ctrl-C when prompted to go to the boot
menu, and then select Maintenance mode from the menu.
If you are running Data ONTAP 8.2.1 and later, enter boot_ontap maint at the loader prompt. If you are prompted to
continue with the boot, enter y to continue.
2. In Maintenance mode, enter the following command from the existing controller module to display the HA state of the new
controller module and chassis:
ha-config show
If your system is...                    The HA state should be...
In an HA pair                           ha
In a MetroCluster configuration         mcc
Stand-alone                             non-ha
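A Maintenance mode sketch of the check, assuming an HA pair; the exact output wording can vary by release:
*> ha-config show
Chassis HA configuration: ha
Controller HA configuration: ha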
3. If the displayed system state of the controller does not match your system configuration, set the HA state for the controller
module by entering the following command:
ha-config modify controller [ha | non-ha]
4. If the displayed system state of the chassis does not match your system configuration, set the HA state for the chassis by
entering the following command:
ha-config modify chassis [ha | non-ha]
5. Note the system ID; you need it when you assign disks to the new node.
6. Repeat these steps on the new controller module.
7. Exit Maintenance mode by entering the following command:
halt
8. If you are using root-data partitioning, set the partner system ID for each controller module:
a. On the existing controller module, set the partner system ID to that of the new controller module:
setenv partner-sysid sysID_of_new_controller
b. On the new controller module, set the partner system ID to that of the existing controller module:
setenv partner-sysid sysID_of_existing_controller
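For example, on the existing controller module and using the new controller's system ID shown later in this guide, the
exchange at the LOADER prompt might look like the following; printenv is shown only to verify the setting:
LOADER-A> setenv partner-sysid 1873758094
LOADER-A> printenv partner-sysid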
You must have enough spares, unassigned disks, or assigned disks that are not part of an existing aggregate, as identified in
Preparing for the upgrade on page 2.
About this task
The system ID of the new controller module was identified in Verifying and setting the HA state of the controller module
and chassis on page 12.
Steps
1. Set the privilege level to advanced:
set -privilege advanced
2. Assign the root partition of a spare disk to the new controller module:
storage disk assign -disk disk_name -root true -sysid new_controller_sysID -force true
Example
For example, the following command assigns the root partition of disk 2.3.9 to the new controller module:
storage disk assign -disk 2.3.9 -root true -sysid 1873758094 -force true
In this example, the system ID of the new controller module is 1873758094.
3. Assign the same container disk from Step 2 to the new controller module:
storage disk assign -disk container_disk_name -sysid new_controller_sysID -force true
Example
For example, the following command assigns container disk 2.3.9 to the new controller module:
storage disk assign -disk 2.3.9 -sysid 1873758094 -force true
The system ID of the new controller module was identified in Verifying and setting the HA state of the controller module
and chassis on page 12.
Note: Available spare disks and partitions were identified in Preparing for the upgrade on page 2.
4. Assign the data partition of a spare disk to the new controller module:
storage disk assign -disk disk_name -data true -sysid new_controller_sysID -force true
Example
For example, the following command assigns the data partition of disk 3.5.1 to the new controller module:
storage disk assign -disk 3.5.1 -data true -sysid 1873758094 -force true
In this example, the system ID of the new controller module is 1873758094.
5. Repeat the preceding steps until all required partitions have been assigned to the new controller module.
6. Verify that the disk assignments are correct by examining the output from the storage disk show -partition-ownership command and correcting as needed.
7. Exit advanced mode:
set -privilege admin
This procedure includes initializing disks. The amount of time you need to initialize the disks depends on the size of the disks.
If your system does not use disk partitioning, the system automatically assigns two disks to the new controller module.
Steps
1. If you are running Data ONTAP 8.2.x or earlier, set the following boot environment variable at the LOADER prompt
(LOADER>, LOADER-A>, or LOADER-B>) on the target node console:
setenv bootarg.init.boot_clustered true
Note: This step applies only to systems running clustered Data ONTAP.
2. Enter the following commands at the LOADER prompt to configure the IP address of the new controller module,
depending on whether DHCP is available or not available on your management network; a sketch of both variants follows
this step.
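A sketch of both variants, assuming the management interface is e0M and using placeholder addresses; confirm the exact
arguments for your platform firmware:
If DHCP is available:
LOADER> ifconfig e0M -auto
If DHCP is not available:
LOADER> ifconfig e0M -addr=filer_addr -mask=netmask -gw=gateway -dns=dns_addr -domain=dns_domain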
3. Boot the new controller module from the downloaded netboot image by entering the following command at the LOADER
prompt:
netboot http://path_to_web-accessible_directory/netboot/kernel
4. Select the Install new software first option from the displayed menu.
This menu option downloads and installs the new Data ONTAP image to the boot device. If you are prompted to continue
the procedure, enter y when prompted.
5. Enter y when prompted regarding non-disruptive upgrade or replacement of the software.
6. Enter the path to the image.tgz file when you see the following prompt:
What is the URL for the package?
http://path_to_web-accessible_directory/image.tgz
7. Enter n to skip the backup recovery when you see the following prompt:
Example

****************************************************************
*            Restore Backup Configuration                      *
* This procedure only applies to storage controllers that      *
* are configured as an HA pair.                                *
*                                                              *
* Choose Yes to restore the "varfs" backup configuration       *
* from the SSH server. Refer to the Boot Device Replacement    *
* guide for more details.                                      *
* Choose No to skip the backup recovery and return to the      *
* boot menu.                                                   *
****************************************************************

Do you want to restore the backup configuration
now? {y|n} n
9. If necessary, select the option to Clean configuration and initialize all disks after the node has booted.
Because you are configuring a new controller and the new controller's disks are empty, you can respond y when the system
warns you that this will erase all disks.
Note: The amount of time needed to initialize disks will depend on the size of your disks and configuration.
10. Your next step depends on the version of Data ONTAP you are running:

If you are running 8.2.x or earlier, the Cluster Setup wizard starts after the disks are initialized; follow the wizard
prompts, and then continue with the next step.

If you are running 8.3 or later, the Node Setup wizard starts after the disks are initialized. Complete the following
substeps:
a. Enter the node management LIF information on the console as shown in the following example:
Welcome to node setup.
You can enter the following commands at any time:
"help" or "?" - if you want to have a question clarified,
"back" - if you want to change previously answered questions, and
"exit" or "quit" - if you want to quit the cluster setup wizard.
Any changes you made before quitting will be saved.
To accept a default or omit a question, do not enter a value.

This system will send event messages and weekly reports to NetApp Technical
Support. To disable this feature, enter "autosupport modify -support disable"
within 24 hours. Enabling AutoSupport can significantly speed problem
determination and resolution should a problem occur on your system.
For further information on AutoSupport, please see:
http://support.netapp.com/autosupport/

Type yes to confirm and continue{yes}: yes
Enter the node management interface port [e0M]: e0c
Enter the node management interface IP address: 10.98.230.86
Enter the node management interface netmask: 255.255.240.0
Enter the node management interface default gateway: 10.98.224.1
A node management interface on port e0c with IP address 10.98.230.86 has been created.
c.
Manually start the Cluster Setup wizard by entering the following command at the prompt:
cluster setup
11. With the Cluster Setup wizard running, join the node to the cluster:
join
Example
Welcome to the cluster setup wizard.
You can enter the following commands at any time:
"help" or "?" - if you want to have a question clarified,
"back" - if you want to change previously answered questions, and
"exit" or "quit" - if you want to quit the cluster setup wizard.
Any changes you made before quitting will be saved.
You can return to cluster setup at any time by typing "cluster setup".
To accept a default or omit a question, do not enter a value.
Do you want to create a new cluster or join an existing cluster? {create, join}: join
After the node reboots, the Cluster Setup wizard displays Welcome to node setup and prompts you to complete the node
setup.
13. Log in to the node, enter cluster setup, and then enter join when prompted to join the cluster.
Example
Do you want to create a new cluster or join an existing cluster? {create, join}: join
The following is an example command for creating a cluster LIF on one of the node's cluster ports. The -auto parameter
configures the LIF to use a link-local IP address:
cluster1::> network interface create -vserver Cluster -lif clus1 -role cluster -home-node
node0 -home-port e1a -auto true
16. After setup is complete, verify that the node is healthy and eligible to participate in the cluster:
cluster show
Example
The following example shows a cluster after the second node (cluster1-02) has been joined to it:
cluster1::> cluster show
Node                  Health  Eligibility
--------------------- ------- ------------
cluster1-01           true    true
cluster1-02           true    true
You can access the Cluster Setup wizard to change any of the values you entered for the admin Storage Virtual Machine
(SVM) or node SVM by using the cluster setup command.
For detailed information about licensing, see the knowledgebase article 3013749: Data ONTAP 8.2 Licensing Overview and
References on the NetApp Support Site and the Clustered Data ONTAP System Administration Guide for Cluster
Administrators.
Steps
1. If necessary, obtain license keys for the new node on the NetApp Support Site in the My Support section under Software
licenses.
If the site does not have the license keys you need, contact your sales or support representative.
2. Issue the following command to install each license key:
system license add -license-code license_key
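For example, the following sketch installs a placeholder license key (not a valid key) and then lists the installed licenses:
cluster1::> system license add -license-code AAAAAAAAAAAAAAAAAAAAAAAAAAAA
cluster1::> system license show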
Example
The following example shows output from the storage failover show command after the new controller module has
been added:

Node           Partner
-------------- --------------
old-ctlr       new-ctlr
new-ctlr       old-ctlr
2 entries were displayed.
3. If the cluster has only two nodes, enter the following command to enable cluster HA:
cluster ha modify -configured true
Cluster high availability (HA) must be configured in a cluster that contains only two nodes; it differs from the HA
provided by storage failover. You can skip this step if the cluster has more than two nodes.
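A sketch of enabling and then verifying cluster HA on a two-node cluster; the output line is illustrative:
cluster1::> cluster ha modify -configured true
cluster1::> cluster ha show
High Availability Configured: true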
1. Log in to the NetApp Support Site, select the most current version of firmware for your system from those listed at
support.netapp.com/NOW/cgi-bin/fw, and then follow the instructions for downloading and installing the new firmware.
2. If your system uses an RLM, select the most current version of firmware for your RLM from those listed at
support.netapp.com/NOW/download/tools/rlm_fw, and then follow the instructions for downloading and installing the new
firmware.
This procedure includes steps that are performed on both the existing controller module and the new controller module. Be sure
to enter the command on the correct system.
You must enter the commands in the steps below in the nodeshell. For more information about the nodeshell, see the Clustered
Data ONTAP System Administration Guide for Cluster Administrators.
Steps
1. On the existing controller module, enter the following commands to verify that the key server is still available:
key_manager status
key_manager query
Example
The following command checks the status of all key management servers linked to the storage system:
storage-system> key_manager status
Key server           Status
172.18.99.175        Server is responding

The following command checks the status of all key management servers linked to the storage system and displays
additional information:
storage-system> key_manager query
Key server 172.18.99.175 is responding.
Key server 172.18.99.175 reports 4 keys.
Key tag                Key ID
-------                ------
storage-system         080CF0C80...
storage-system         080CF0C80...
storage-system         080CF0C80...
storage-system         080CF0C80...
2. On the new controller module, complete the following steps to install the same SSL certificates that are on the existing
controller module:
a. Copy the certificate files to a temporary location on the storage system.
b. Install the public certificate of the storage system by entering the following command at the storage system prompt:
keymgr install cert /path/client.pem
c. Install the private certificate of the storage system by entering the following command at the storage system prompt:
keymgr install cert /path/client_private.pem
d. Install the public certificate of the key management server by entering the following command at the storage system
prompt:
keymgr install cert /path/key_management_server_ipaddress_CA.pem
e. If you are linking multiple key management servers to the storage system, repeat the preceding steps for each public
certificate of each key management server.
3. On the new controller module, run the Storage Encryption setup wizard to set up and install the key servers.
You must install the same key servers that are installed on the existing controller module.
a. Enter the following command at the storage system prompt:
key_manager setup
When the wizard prompts you for key server information, enter the IP address of each key server (for example,
172.22.192.192), enter q when you have finished adding key servers, and then enter yes to confirm your entries.
4. On the existing controller module, enter the applicable command to restore authentication keys either from all linked key
management servers or from a specific one; a sketch of both variants follows.
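A sketch of both variants; these nodeshell commands follow the Storage Encryption documentation pattern, so confirm
them for your release:
storage-system> key_manager restore -all
storage-system> key_manager restore -key_server 172.18.99.175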
5. On the existing controller module, rekey all of the disks by entering the following command at the prompt:
key_manager rekey -keytag key_tag
key_tag is the key tag name specified in the setup wizard in step 3.
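For example, using the key tag shown in the earlier key_manager query output:
storage-system> key_manager rekey -keytag storage-system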
Copyright information
Copyright © 1994–2015 NetApp, Inc. All rights reserved. Printed in the U.S.
No part of this document covered by copyright may be reproduced in any form or by any means (graphic, electronic, or
mechanical, including photocopying, recording, taping, or storage in an electronic retrieval system) without prior written
permission of the copyright owner.
Software derived from copyrighted NetApp material is subject to the following license and disclaimer:
THIS SOFTWARE IS PROVIDED BY NETAPP "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE, WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL NETAPP BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
NetApp reserves the right to change any products described herein at any time, and without notice. NetApp assumes no
responsibility or liability arising from the use of products described herein, except as expressly agreed to in writing by NetApp.
The use or purchase of this product does not convey a license under any patent rights, trademark rights, or any other intellectual
property rights of NetApp.
The product described in this manual may be protected by one or more U.S. patents, foreign patents, or pending applications.
RESTRICTED RIGHTS LEGEND: Use, duplication, or disclosure by the government is subject to restrictions as set forth in
subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software clause at DFARS 252.277-7103 (October 1988)
and FAR 52-227-19 (June 1987).
Trademark information
NetApp, the NetApp logo, Go Further, Faster, ASUP, AutoSupport, Campaign Express, Cloud ONTAP, clustered Data ONTAP,
Customer Fitness, Data ONTAP, DataMotion, Fitness, Flash Accel, Flash Cache, Flash Pool, FlashRay, FlexArray, FlexCache,
FlexClone, FlexPod, FlexScale, FlexShare, FlexVol, FPolicy, GetSuccessful, LockVault, Manage ONTAP, Mars, MetroCluster,
MultiStore, NetApp Insight, OnCommand, ONTAP, ONTAPI, RAID DP, SANtricity, SecureShare, Simplicity, Simulate
ONTAP, Snap Creator, SnapCopy, SnapDrive, SnapIntegrator, SnapLock, SnapManager, SnapMirror, SnapMover, SnapProtect,
SnapRestore, Snapshot, SnapValidator, SnapVault, StorageGRID, Tech OnTap, Unbound Cloud, and WAFL are trademarks or
registered trademarks of NetApp, Inc., in the United States, and/or other countries. A current list of NetApp trademarks is
available on the web at http://www.netapp.com/us/legal/netapptmlist.aspx.
Cisco and the Cisco logo are trademarks of Cisco in the U.S. and other countries. All other brands or products are trademarks or
registered trademarks of their respective holders and should be treated as such.