
For clustered environments only

Adding a second controller to create an HA pair


Upgrading a stand-alone controller module to a two-node cluster in an HA pair is a multistep process involving both hardware
and software changes that must be performed in the proper order.
Before you begin

A controller module must already be installed, configured, and operating in clustered Data ONTAP 8.2 or later.
This controller module is referred to as the existing controller module; the examples in this procedure have the console
prompt existing_ctlr>.
The controller module that is being added is referred to as the new controller module; the examples in this procedure have
the console prompt new_ctlr>.

The new controller module must be received from NetApp as part of the upgrade kit.
This procedure does not apply to moving a controller module from a preexisting system or a system that was previously in
an HA pair. However, if you populate the new controller module with PCIe cards from existing inventory at the customer
site, check that they are compatible and supported by the new controller module.

NetApp Hardware Universe

Your system must have an empty slot available for the new controller module when upgrading to a single-chassis HA pair (an HA pair in which both controller modules reside in the same chassis).
Note: This configuration is not supported on all systems.

You must have rack space and cables for the new controller module when upgrading to a dual-chassis HA pair (an HA pair in which the controller modules reside in separate chassis).
Note: This configuration is not supported on all systems.

Each controller module must be connected to the management network through its e0a port or, if your system has one, the
e0M port (wrench port).

About this task

This procedure does not apply to systems operating in 7-Mode.


This procedure can take over an hour, with additional time needed to initialize the disks. The time to initialize the disks depends
on the size of the disks.
Steps

1. Preparing for the upgrade on page 2


2. Preparing to add a controller module when using Storage Encryption on page 4
3. Preparing cluster ports on the existing controller module on page 5
4. Preparing the netboot server on page 7
5. Setting the mode on the existing controller module on page 8
6. Shutting down the existing controller on page 8
7. Installing and cabling the new controller module on page 9
8. Configuring and cabling CNA ports (FAS80xx, FAS2552, and FAS2554 systems only) on page 11
9. Verifying and setting the HA state of the controller module and chassis on page 12
10. Assigning disks to the new controller using root-data partitioning on page 13

215-07711_D0

February 2015

Copyright 2015 NetApp, Inc. All rights reserved.


Web: www.netapp.com Feedback: [email protected]

11. Netbooting and setting up Data ONTAP on the new controller module on page 14
12. Installing licenses for the new controller module on page 19
13. Verifying storage failover on both controller modules and setting cluster HA on page 19
14. Installing the firmware after adding a second controller module on page 20
15. Setting up Storage Encryption on the new controller module on page 20
16. Verifying the configuration with the Config Advisor on page 23

Preparing for the upgrade


Before upgrading to an HA pair, you must make sure that your system meets all requirements and that you have all the
necessary information.
Steps

1. Ensure that your system has enough available disks or partitions for the new controller module.
You need to identify unassigned disks or spare disks with available partitions that you can assign to the new controller
module.

If your system does not use root-data partitioning, check disk ownership:

storage disk show -ownership

Two parity disks and one data disk, plus one spare, are required for root aggregate creation.
Note: You must set the auto_assign option to off on the existing controller module before adding any new disks.

Find a Storage Management Guide for your version of Data ONTAP 8

If your system uses root-data partitioning, determine the system spare disks and available partition space:
a. Identify the unassigned disks:
storage disk show
clust01::> storage disk show
                Usable           Disk    Container  Container
Disk              Size Shelf Bay Type    Type       Name       Owner
---------- ---------- ----- --- ------- ----------- ---------- ----------
1.0.0               -     0   0 SAS     unassigned  -          -
1.0.1         408.2GB     0   1 SAS     shared      -          -
1.0.2         408.2GB     0   2 SAS     shared      -          clust01-01
1.0.3         408.2GB     0   3 SAS     shared      -          clust01-01
1.0.4         408.2GB     0   4 SAS     shared      -          clust01-01
1.0.5         408.2GB     0   5 SAS     shared      -          clust01-01
1.0.6         408.2GB     0   6 SAS     shared      -          clust01-01
1.0.7         408.2GB     0   7 SAS     shared      -          clust01-01
...
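If you capture output like the sample above from the console, the unassigned disks can be picked out programmatically. The following Python sketch is illustrative only; the abbreviated rows and column layout are assumptions based on the sample, not captured from a live system:

```python
# Pick out unassigned disks from captured `storage disk show` output.
# The rows below are abbreviated stand-ins for the sample output above.
output = """\
1.0.0        -  0  0  SAS  unassigned  -  -
1.0.1  408.2GB  0  1  SAS  shared      -  -
1.0.2  408.2GB  0  2  SAS  shared      -  clust01-01
"""

# A disk is a candidate for the new node if its container type is "unassigned".
unassigned = [line.split()[0] for line in output.splitlines()
              if "unassigned" in line.split()]

print(unassigned)  # ['1.0.0']
```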

b. Determine the disks with usable root and data space:


storage aggregate show-spare-disks
clust01::> storage aggregate show-spare-disks
.
.
.
Original Owner: existing_ctlr
 Pool0
  Partitioned Spares
                                      Local    Local
                                       Data     Root Physical
 Disk   Type  RPM Checksum           Usable   Usable     Size Status
 ----- ----- ---- -------------- ---------- -------- -------- -------
 2.3.9  BSAS 7200 advanced_zoned         0B  73.89GB  931.5GB zeroed
 3.0.6  BSAS 7200 block                  0B       0B  828.0GB offline
 3.5.1  BSAS 7200 block             354.3GB  413.8GB  828.0GB zeroed
...

Note: Look for usable space in both the Local Data Usable and Local Root Usable columns for available partition space.

If the system has 12 internal disks, five partitions are required for root aggregate creation and a sixth is required as a
spare.

If the system has 24 internal disks, 10 partitions are required for root aggregate creation and two disks are required as
spares.

If the system is an All Flash FAS (AFF) system, 22 partitions are required for root aggregate creation and two disks are required as spares.
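The partition and spare counts listed above reduce to a small lookup. This Python sketch summarizes them; the system-class labels are our own descriptive names, not ONTAP terms:

```python
# Root-aggregate partition requirements from the guidelines above,
# keyed by a descriptive label of our own choosing.
REQUIREMENTS = {
    "12-internal-disks": {"root_partitions": 5, "spares": 1},
    "24-internal-disks": {"root_partitions": 10, "spares": 2},
    "all-flash-fas":     {"root_partitions": 22, "spares": 2},
}

def total_needed(system_class):
    """Total partitions/disks that must be free before adding the new node."""
    req = REQUIREMENTS[system_class]
    return req["root_partitions"] + req["spares"]

print(total_needed("12-internal-disks"))  # 6
print(total_needed("all-flash-fas"))      # 24
```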

2. How you proceed depends on the results of the previous step.

If the output showed enough spare disks available for the new controller module:
Go to the next step.

If the output showed not enough spares for the new controller module on a system with no root-data partitioning, complete the following substeps:

a. Determine where the aggregates for the existing node are located by entering the following command:
   storage aggregate show
   Note: If you do not have enough free disks for your system, you need to add more storage. Contact technical support for more information.
b. If disk ownership automatic assignment is on, turn it off:
   storage disk option modify -node node_name -autoassign off
c. Remove ownership on disks that do not have aggregates on them by entering the following command:
   storage disk removeowner disk_name
d. Repeat the previous substep for as many disks as you need for the new node.

If the output showed not enough spares for the new controller module on a system with root-data partitioning, either:

Add more storage to the system.

Identify disk partitions that aren't part of existing aggregates, so you can use them when assigning disks.

3. Ensure that you have cables ready for the following connections:

Cluster connections

If you are creating a two-node switchless cluster, you need two cables to connect the controller modules. Otherwise, you need a minimum of four cables, two for each controller module connection to the cluster network switch. Other systems (like the FAS80xx series) have defaults of either four or six cluster connections.

HA interconnect connections, if the system is in a dual-chassis HA pair

Storage connections to the disk shelves
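The cluster-cable counts described above can be sketched as a quick calculation. This is a simplification assuming two cluster ports per controller, which matches the stated minimum; FAS80xx-class systems with four or six default cluster connections would need a higher per-node figure:

```python
def cluster_cables(switchless, ports_per_node=2, nodes=2):
    """Rough cluster-network cable count for a two-node HA pair."""
    if switchless:
        # Direct back-to-back links: one cable per port pair.
        return ports_per_node
    # Switched: every cluster port on every node uplinks to a cluster switch.
    return nodes * ports_per_node

print(cluster_cables(switchless=True))   # 2
print(cluster_cables(switchless=False))  # 4
```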

4. Ensure that you have a serial port console available for the controller modules.
5. Review the Site Requirements Guide on the NetApp Support Site or the Hardware Universe and gather all IP addresses and
other network parameters for the new controller module.

Preparing to add a controller module when using Storage Encryption


If the existing controller module is configured for Storage Encryption, you must gather information from the system and rekey
the self-encrypting disks (SEDs) before adding the new controller module.
About this task

You must enter the commands in the steps below in the nodeshell. For more information about the nodeshell, see the Clustered
Data ONTAP System Administration Guide for Cluster Administrators.
Steps

1. Enter the following command and note the key IDs on all disk drives that are using Storage Encryption:
disk encrypt show
Example

The command displays the status of each self-encrypting disk:

storage-system> disk encrypt show
Disk    Key ID                                                           Locked?
0c.00.1 080CF0C8000000000100000000000000A948EE8604F4598ADFFB185B5BB7FED3 Yes
0c.00.0 080CF0C8000000000100000000000000A948EE8604F4598ADFFB185B5BB7FED3 Yes
0c.00.3 080CF0C8000000000100000000000000A948EE8604F4598ADFFB185B5BB7FED3 Yes
0c.00.4 080CF0C8000000000100000000000000A948EE8604F4598ADFFB185B5BB7FED3 Yes
0c.00.2 080CF0C8000000000100000000000000A948EE8604F4598ADFFB185B5BB7FED3 Yes
0c.00.5 080CF0C8000000000100000000000000A948EE8604F4598ADFFB185B5BB7FED3 Yes

2. Enter the following command and note all the necessary certificate files (client.pem, client_private.pem, and
ip_address_key_server_CA.pem) that have been installed:
keymgr list cert

Later in the procedure you need to install these same certificate files on the new partner controller module.
3. Enter the following command to identify the IP address of the key servers:
key_manager show

All external key management servers associated with the storage system are listed. Later in the procedure you need to add
these same key servers on the new partner controller module.
Example

The following command displays all external key management servers associated with the storage system:

storage-system> key_manager show

172.18.99.175

4. Enter the following command and check that the key IDs listed match those shown by the disk encrypt show command
in step 1:
key_manager query
Example

The following command checks the status of all key management servers linked to the storage system and displays
additional information:
storage-system> key_manager query
Key server 172.18.99.175 reports 4 keys.
Key tag              Key ID
-------------------- ------------
storage-system       080CF0C80...
storage-system       080CF0C80...
storage-system       080CF0C80...
storage-system       080CF0C80...
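The check in step 4 can be automated if you capture both outputs from the console. The Python sketch below uses abbreviated key IDs and parsing assumptions based on the sample layouts above, so treat it as illustrative rather than as a parser for real ONTAP output:

```python
# Captured (abbreviated) output of `disk encrypt show`: disk, key ID, locked.
disk_encrypt_show = """\
0c.00.1 080CF0C80AAA Yes
0c.00.0 080CF0C80AAA Yes
"""

# Captured (abbreviated) output of `key_manager query`: key tag, key ID.
key_manager_query = """\
storage-system 080CF0C80AAA
storage-system 080CF0C80AAA
"""

disk_keys = {line.split()[1] for line in disk_encrypt_show.splitlines() if line.strip()}
server_keys = {line.split()[1] for line in key_manager_query.splitlines() if line.strip()}

# Any key ID found on disk but not reported by the key server is a problem.
missing = disk_keys - server_keys
print(sorted(missing))  # [] when every disk key is known to the key server
```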

5. Back up all data on all aggregates using standard methods for your site.
6. Enter the following command to reset the authentication key on the drives using Storage Encryption to their Manufacturing
System ID (MSID):
disk encrypt rekey 0x0 *

7. Examine the CLI command output to ensure that there are no disk encrypt rekey failures.

Preparing cluster ports on the existing controller module


Before installing the new controller module, you must configure cluster ports on the existing controller module so they can
provide cluster communication with the new controller module. If you are creating a two-node switchless cluster (with no
cluster-network switches), you must enable the switchless-cluster networking mode.
About this task

For detailed information about port, LIF, and network configuration in clustered Data ONTAP, see the Clustered Data ONTAP
Network Management Guide on the NetApp Support Site.
Steps

1. Determine which ports should be used as the node's cluster ports.


For a list of the default port roles for your platform, see the Hardware Universe at hwu.netapp.com. The Installation and
Setup Instructions for your platform on the NetApp Support Site at mysupport.netapp.com also have information about the
ports for cluster network connections.
2. For each cluster port, identify the port roles by entering the following command:
network port show

3. How you proceed depends on which version of Data ONTAP your system is running:

If your system is running Data ONTAP 8.2.x and earlier, complete the following substeps:

a. For each port that you identified as a cluster port, use the network port modify command to modify the role to cluster and set the MTU to the default value of 9000:
   network port modify -node local -port port_name -role cluster -mtu 9000
b. Go to the next step.

If your system is running Data ONTAP 8.3 and later, and the port roles are not set to Cluster, complete the following substeps:

a. Remove the ports from their current broadcast domain by entering the following command for each incorrectly assigned port:
   network port broadcast-domain remove-ports -ipspace Default -broadcast-domain Default -ports node_name:port_name
   Note: Your broadcast domain name may be different from the one shown in the example command above.
b. Add the port to the Cluster broadcast domain by entering the following command:
   network port broadcast-domain add-ports -ipspace Cluster -broadcast-domain Cluster -ports node_name:port_name
c. Modify the MTU of the Cluster broadcast domain by entering the following command:
   network port broadcast-domain modify -broadcast-domain Cluster -ipspace Cluster -mtu 9000
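As a way to visualize the 8.3-and-later substeps, the following Python toy model walks one port through the remove-ports / add-ports / modify-MTU sequence. The port and domain contents are illustrative assumptions, not ONTAP behavior:

```python
# Toy model of broadcast domains: each domain has an MTU and a set of ports.
domains = {
    "Default": {"mtu": 1500, "ports": {"node1:e0a", "node1:e0b"}},
    "Cluster": {"mtu": 1500, "ports": set()},
}

# a. network port broadcast-domain remove-ports ... -broadcast-domain Default
domains["Default"]["ports"].discard("node1:e0a")

# b. network port broadcast-domain add-ports ... -broadcast-domain Cluster
domains["Cluster"]["ports"].add("node1:e0a")

# c. network port broadcast-domain modify -broadcast-domain Cluster -mtu 9000
domains["Cluster"]["mtu"] = 9000

print(domains["Cluster"])
```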

4. For each cluster port, use the network interface modify command to change the home port of any data LIFs on that
port to a data port.
Example

The following is an example command for changing a home port of a data LIF to a data port:
cluster1::> network interface modify -lif datalif1 -vserver vs1 -home-port e1b

5. For each LIF modified in the previous step, use the network interface revert command to revert it to its new home
port.
Example

The following is an example command for reverting the LIF datalif1 to its new home port of e1b:
cluster1::> network interface revert -lif datalif1 -vserver vs1

6. If your system is a switched cluster, use the network interface create command to create cluster LIFs on the cluster
ports.
Important: Cluster interface creation for the existing node in a two-node switchless cluster system is completed after
cluster setup is completed via netboot, on the new controller module.
Example

The following is an example command for creating a cluster LIF on one of the node's cluster ports. The -auto parameter configures the LIF to use a link-local IP address:

cluster1::> network interface create -vserver Cluster -lif clus1 -role cluster -home-node node0 -home-port e1a -auto true

7. If you are creating a two-node switchless cluster, enable the switchless-cluster networking mode:
a. Issue the following command at either node's prompt to change to the advanced privilege level:
set -privilege advanced

You can respond y when prompted to continue into advanced mode. The advanced mode prompt appears (*>).
b. Enable switchless-cluster mode:
network options switchless-cluster modify -enabled true

c. Return to the admin privilege level:


set -privilege admin

Preparing the netboot server


When you are ready to prepare the netboot server, you must download the correct Data ONTAP netboot image from the NetApp
Support Site to the netboot server and know its IP address.
Before you begin

You must have an HTTP server that you can access with the system before and after adding the new controller module.

You must have access to the NetApp Support Site at mysupport.netapp.com.


This enables you to download the necessary system files.

About this task

You must download the necessary system files for your platform and version of Data ONTAP from the NetApp Support Site at mysupport.netapp.com.
Both controller modules in the HA pair must run the same version of Data ONTAP.

Steps

1. Download and extract the file used for performing the netboot of your system:
a. Download the appropriate netboot.tgz file for your platform from the NetApp Support Site to a web-accessible
directory.
b. Change to the web-accessible directory.
c. Extract the contents of the netboot.tgz file to the target directory by entering the following command:
tar -zxvf netboot.tgz

Your directory listing should contain the following directory:


netboot/

2. Download the image.tgz file from the NetApp Support Site to the web-accessible directory.
Your directory listing should contain the following file and directory:
image.tgz netboot/
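Steps 1 and 2 can be rehearsed as a shell session. This sketch substitutes a scratch directory for the real web-accessible directory and a locally built archive for the netboot.tgz downloaded from the NetApp Support Site (both are stand-ins); only the extract-in-place layout matters:

```shell
# Use a scratch directory in place of the web-accessible directory.
webroot=$(mktemp -d)
cd "$webroot"

# Stand-in for the netboot.tgz downloaded from the NetApp Support Site.
mkdir netboot
echo "kernel payload" > netboot/kernel
tar -zcf netboot.tgz netboot
rm -r netboot

# Step 1c: extract the contents of netboot.tgz in the web directory.
tar -zxvf netboot.tgz

# Step 2 stand-in: image.tgz would also be downloaded into this directory.
touch image.tgz

# The listing should now contain: image.tgz  netboot/
ls
```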

3. Determine the IP address of the existing controller module.


This address is referred to later in this procedure as ip-address-of-existing-controller.

4. Ping ip-address-of-existing-controller to ensure that the address is reachable.

Setting the mode on the existing controller module


You must use the storage failover modify command to set the mode on the existing controller module. The mode value is enabled later, after you reboot the controller module.
Step

1. Enter the following command for the existing node, specifying either ha or mcc as the ha_state:
storage failover modify -mode ha_state -node existing_node_name

Shutting down the existing controller


You must perform a clean shutdown of the existing controller to ensure that all data has been written to disk. You must also
disconnect the power supplies.
Steps

1. Enter the following command from the existing controller's prompt:


halt local

If you are prompted to continue the halt procedure, enter y, and then wait until the system stops at the LOADER prompt.
Attention: You must perform a clean system shutdown before replacing system components to avoid losing unwritten data

in the NVRAM or NVMEM:

In a FAS22xx or a FAS25xx system, the NVMEM LED is marked with a battery symbol and is located on the
controller module to the left of the label showing the MAC address.
This LED blinks if there is unwritten data in the NVRAM. If this LED is flashing red after you enter the halt
command, reboot your system and try halting it again.

In a 32xx system, the NVMEM LED is located on the controller module to the right of the network ports, marked with
a battery symbol.
This LED blinks if there is unwritten data in the NVRAM. If this LED is flashing red after you enter the halt
command, reboot your system and try halting it again.

In a 62xx system, the NVRAM LED is located on the controller module to the right of the network ports, marked with a battery symbol.
This LED blinks if there is unwritten data in the NVRAM. If this LED is flashing red after you enter the halt command, reboot your system and try halting it again.

In a FAS80xx system, the NVRAM LED is located on the controller module to the right of the network ports, marked with a battery symbol.
This LED blinks if there is unwritten data in the NVRAM. If this LED is flashing green after you enter the halt command, reboot your system and try halting it again.

2. If you are not already grounded, properly ground yourself.


3. Turn off the power supplies and disconnect the power, using the correct method for your system and power-supply type:

If your system uses AC power supplies: Unplug the power cords from the power source, and then remove the power cords.

If your system uses DC power supplies: Remove the power at the DC source, and then remove the DC wires, if necessary.

Installing and cabling the new controller module


You must physically install the new controller module in the chassis and cable it.
Steps

1. If you have an I/O expansion module (IOXM) in your system and are creating a single-chassis HA pair, you must uncable and remove the IOXM.
You can then use the empty bay for the new controller module. However, the new configuration will not have the extra I/O provided by the IOXM.
2. Physically install the new controller module and, if necessary, install additional fans:

If you are adding a controller module to an empty bay to create a single-chassis HA pair, and the system belongs to one of the following platforms: 31xx, 6210, 6220, then perform these steps:

a. Install three additional fans in the chassis to cool the new controller module:
   i. Remove the bezel by using both hands to hold it by the openings on each side, and then pull the bezel away from the chassis until it releases from the four ball studs on the chassis frame.
   ii. Remove the blank plate that covers the bay that will contain the new fans.
   iii. Install the fans as described in the Replacing a fan module document for your system on the NetApp Support Site at mysupport.netapp.com.
b. Remove the blank plate in the rear of the chassis that covers the empty bay that will contain the new controller module.
c. Gently push the controller module halfway into the chassis.

If you are adding a controller module to an empty bay to create a single-chassis HA pair, and the system belongs to one of the following platforms: FAS22xx, FAS25xx, 32xx, FAS8020, FAS8040, FAS8060, then perform these steps:

a. Remove the blank plate in the rear of the chassis that covers the empty bay that will contain the new controller module.
b. Gently push the controller module halfway into the chassis.

To prevent the controller module from automatically booting, do not fully seat it in the chassis until later in this procedure.

If you are adding a controller module in a separate chassis from its HA partner to create a dual-chassis HA pair, when the existing configuration is in a controller-IOX module configuration or the system belongs to one of the following platforms: FAS32xx, FAS62xx, FAS8080, then:

Install the new system in the rack or system cabinet.

3. Cable the HA interconnect if you have a dual-chassis HA pair.


4. Except for CNA ports and cluster network ports, cable the ports on the new controller module, including the HA
interconnect if you have a dual-chassis HA pair. Also, connect the controller module to any storage assigned to the HA pair.
If the new controller has CNA ports, do not cable them until you verify their configuration as described later in this
procedure.
See the systems Installation and Setup Instructions, the Clustered Data ONTAP High-Availability Configuration Guide for
your version of Data ONTAP, and, if applicable, the Universal SAS and ACP Cabling Guide.
5. Cable the cluster network connections, as necessary:
a. Identify the ports on the controller module for the cluster connections.
These ports were identified in Step 1 of Preparing cluster ports on the existing controller on page 5.
b. If you are configuring a switched cluster, identify the ports that you will use on the cluster network switches.
See the Clustered Data ONTAP Switch Setup Guide for Cisco Switches, NetApp 10G Cluster-Mode Switch Installation
Guide or NetApp 1G Cluster-Mode Switch Installation Guide, depending on what switches you are using.
c. Connect cables to the cluster ports:

If the cluster is a two-node switchless cluster: Directly connect the cluster ports on the existing controller module to the corresponding cluster ports on the new controller module.

If the cluster is a switched cluster: Connect the cluster ports on each controller to the ports on the cluster network switches identified in substep b.

6. Power up the existing controller module.


7. Depending on your configuration, power up the new controller module and interrupt the boot process:

If the new controller module is in the same chassis as the existing controller module:

a. Push the controller module firmly into the bay.
   When fully seated, the controller module receives power and automatically boots.
b. Interrupt the boot process by pressing Ctrl-C.
c. Tighten the thumbscrew on the cam handle.
d. Reinstall the cable management device.
e. Bind the cables to the cable management device with the hook and loop strap.

If the new controller module is in a separate chassis from the existing controller module:

a. Turn on the power supplies on the new controller module.
b. Interrupt the boot process by pressing Ctrl-C.

The system displays the LOADER prompt (LOADER>, LOADER-A>, or LOADER-B>).

Note: If there is no LOADER prompt, record the error message and contact technical support. If the system displays the boot menu, reboot and attempt to interrupt the boot process again.

Configuring and cabling CNA ports (FAS80xx, FAS2552, and FAS2554 systems only)

If you are adding a controller module to a FAS80xx, FAS2552, or FAS2554 system, you must check the configuration of the CNA ports on the new controller module and, if necessary, change the defaults to match the CNA port configuration of the existing controller module.
Before you begin

You must have the SFP+ modules for the CNA ports, or, if the ports are set to a 10-GbE personality, you can use Twinax cables.
Steps

1. Boot to Maintenance mode on the new node, if it is not already in Maintenance mode, by entering halt to go to the LOADER prompt:

If you are running Data ONTAP 8.2 and earlier, enter boot_ontap, press Ctrl-C when prompted to go to the boot menu, and then select Maintenance mode from the menu.

If you are running Data ONTAP 8.2.1 and later, enter boot_ontap maint at the LOADER prompt. If you are prompted to continue with the boot, enter y to continue.

2. On the existing controller module console, check how the ports are currently configured:
system node hardware unified-connect show

The system displays output similar to the following example:

node_name::> system node hardware unified-connect show
               Current  Current   Pending  Pending
Node  Adapter  Mode     Type      Mode     Type     Status
----  -------  -------  --------- -------  -------  ------
f-a   0e       fc       initiator -        -        online
f-a   0f       fc       initiator -        -        online
f-a   0g       cna      target    -        -        online
...

3. On the console of the new node, display the port settings:


ucadmin show

The system displays output similar to the following example:

*> ucadmin show
               Current  Current   Pending  Pending
Node  Adapter  Mode     Type      Mode     Type     Status
----  -------  -------  --------- -------  -------  ------
f-a   0e       fc       initiator -        -        online
f-a   0f       fc       initiator -        -        online
f-a   0g       cna      target    -        -        online
...

4. If the current SFP+ module does not match the desired use, replace it with the correct SFP+ module.
5. If the current configuration does not match the existing node's configuration, enter one of the following commands to change the configuration as needed:

If the desired use is FC initiator, enter:
ucadmin modify -t initiator adapter_name

If the desired use is FC target, enter:
ucadmin modify -t target adapter_name

If the desired use is Ethernet, enter:
ucadmin modify -m cna adapter_name

Note: If you changed the port personality, it will take effect when the new node is booted. Make sure that you verify the settings after the boot.

6. Cable the port.
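Steps 2 through 5 amount to diffing the two nodes' port personalities. This Python sketch mirrors that comparison; the adapter names and modes come from the sample outputs above, the 0f mismatch is invented for illustration, and the command strings follow the table in step 5:

```python
# (mode, type) per adapter, as shown by unified-connect show / ucadmin show.
existing_node = {"0e": ("fc", "initiator"), "0f": ("fc", "initiator"), "0g": ("cna", "target")}
new_node      = {"0e": ("fc", "initiator"), "0f": ("fc", "target"),    "0g": ("cna", "target")}

commands = []
for adapter, desired in existing_node.items():
    if new_node.get(adapter) != desired:
        mode, port_type = desired
        if mode == "cna":
            commands.append(f"ucadmin modify -m cna {adapter}")          # Ethernet
        else:
            commands.append(f"ucadmin modify -t {port_type} {adapter}")  # FC

print(commands)  # ['ucadmin modify -t initiator 0f']
```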

Verifying and setting the HA state of the controller module and chassis
You must verify the HA state of the chassis and controller modules, and, if necessary, update the state to indicate that the system
is in an HA pair or MetroCluster configuration. If your system is in a MetroCluster configuration, you must have Data ONTAP
8.3 installed. If you have a FAS20xx, 31xx, or 60xx system, you can skip this task.
Steps

1. If you are not already in Maintenance mode, boot to Maintenance mode by entering halt to go to the LOADER prompt:

If you are running Data ONTAP 8.2 and earlier, enter boot_ontap, press Ctrl-C when prompted to go to the boot menu, and then select Maintenance mode from the menu.

If you are running Data ONTAP 8.2.1 and later, enter boot_ontap maint at the LOADER prompt. If you are prompted to continue with the boot, enter y to continue.

2. In Maintenance mode, enter the following command from the existing controller module to display the HA state of the new
controller module and chassis:
ha-config show

The HA state should be the same for all components.

If your system is in an HA pair, the HA state for all components should be: ha

If your system is in a MetroCluster running clustered Data ONTAP, the HA state for all components should be: mcc

If your system is stand-alone, the HA state for all components should be: non-ha

3. If the displayed system state of the controller does not match your system configuration, set the HA state for the controller module by entering the following command:
ha-config modify controller [ha | non-ha]

If your system is in an HA pair, issue: ha-config modify controller ha

If your system is in a MetroCluster running clustered Data ONTAP, issue: ha-config modify controller mcc

If your system is stand-alone, issue: ha-config modify controller non-ha
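The controller and chassis tables both reduce to one mapping from configuration to expected HA state. This Python sketch captures that mapping; the configuration labels are ours, not ONTAP values:

```python
# Expected HA state per configuration, from the tables above.
EXPECTED = {"ha_pair": "ha", "metrocluster": "mcc", "stand_alone": "non-ha"}

def fix_command(component, config, reported):
    """Return the ha-config command to run, or None if the state already matches."""
    expected = EXPECTED[config]
    if reported == expected:
        return None
    return f"ha-config modify {component} {expected}"

print(fix_command("controller", "ha_pair", "non-ha"))  # ha-config modify controller ha
print(fix_command("chassis", "ha_pair", "ha"))         # None
```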

4. If the displayed system state of the chassis does not match your system configuration, set the HA state for the chassis by entering the following command:
ha-config modify chassis [ha | non-ha]

If your system is in an HA pair, issue: ha-config modify chassis ha

If your system is in a MetroCluster running clustered Data ONTAP, issue: ha-config modify chassis mcc

If your system is stand-alone, issue: ha-config modify chassis non-ha

5. Retrieve the system ID for the current node:


sysconfig

Note the system ID. You need it when you assign disks to the new node.
6. Repeat these steps on the new controller module.
7. Exit Maintenance mode by entering the following command:
halt

8. If you are using root-data partitioning, set the partner system ID for each controller module:
a. On the existing controller module, set the partner system ID to that of the new controller module:
setenv partner-sysid sysID_of_new_controller

b. On the new controller module, set the partner system ID to that of the existing controller module:
setenv partner-sysid sysID_of_existing_controller

Assigning disks to the new controller using root-data partitioning


For systems using root-data partitioning, you must assign root partitions and data partitions to the new controller module before
you complete the initial configuration of the new controller module through netboot.
Before you begin

You must have enough spares, unassigned disks, or assigned disks that are not part of an existing aggregate, as identified in Preparing for the upgrade on page 2.
About this task

These steps are performed on the existing controller module.


Steps

1. Enter advanced mode on the existing controller module:

set -privilege advanced

Enter y when you are prompted.

2. Assign a root partition to the new controller module:
storage disk assign -disk disk_name -root true -sysid new_controller_sysID -force true

The system ID of the new controller module was identified in Verifying and setting the HA state of the controller module
and chassis on page 12.
Example

For example, the following command assigns the root partition of disk 2.3.9 to the new controller module:
storage disk assign -disk 2.3.9 -root true -sysid 1873758094 -force true

In this example, the system ID of the new controller module is 1873758094.
3. Assign the same container disk from Step 2 to the new controller module:
storage disk assign -disk container_disk_name -sysid new_controller_sysID -force true
Example

For example, the following command assigns container disk 2.3.9 to the new controller module:
storage disk assign -disk 2.3.9 -sysid 1873758094 -force true

4. Assign a spare data partition to the new controller module:


storage disk assign -disk disk_name -data true -sysid new_controller_sysID -force true

The system ID of the new controller module was identified in Verifying and setting the HA state of the controller module
and chassis on page 12.
Note: Available spare disks and partitions were identified in Preparing for the upgrade on page 2.
Example

For example, the following command assigns the data partition of disk 3.5.1 to the new controller module:
storage disk assign -disk 3.5.1 -data true -sysid 1873758094 -force true

In this example, the system ID of the new controller module is 1873758094.
5. Repeat the preceding steps until all required partitions have been assigned to the new controller module.
6. Verify that the disk assignments are correct by examining the output from the storage disk show -partition-ownership command and correcting the assignments as needed.
7. Exit advanced mode:
set -privilege admin
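As an illustration only (the disk names, owners, and system IDs below are hypothetical, and the column layout is abbreviated; the actual command also reports container and aggregate ownership), the verification in Step 6 might produce output similar to the following:
Example
cluster1::*> storage disk show -partition-ownership
             Root       Root             Data       Data
Disk         Owner      Owner ID         Owner      Owner ID
------------ ---------- ---------------- ---------- ----------------
2.3.9        new-ctlr   1873758094       old-ctlr   1873758095
3.5.1        old-ctlr   1873758095       new-ctlr   1873758094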

Netbooting and setting up Data ONTAP on the new controller module


You must perform a specific sequence of steps to boot and install the operating system on the new controller module.
About this task

This procedure includes initializing disks. The amount of time you need to initialize the disks depends on the size of the disks.
If your system does not use disk partitioning, the system automatically assigns two disks to the new controller module.

Find a Storage Management Guide for your version of Data ONTAP 8


Steps

1. If you are running Data ONTAP 8.2.x or earlier, set the following boot environment variable at the LOADER prompt
(LOADER>, LOADER-A>, or LOADER-B>) on the target node console:
setenv bootarg.init.boot_clustered true
Note: This step applies only to systems running clustered Data ONTAP.

2. Enter the following command at the LOADER prompt to configure the IP address of the new controller module:

If DHCP is available, enter:
ifconfig e0M -auto

If DHCP is not available, enter:
ifconfig e0M -addr=filer_addr -mask=netmask -gw=gateway -dns=dns_addr -domain=dns_domain

filer_addr is the IP address of the storage system.
netmask is the network mask of the storage system.
gateway is the gateway for the storage system.
dns_addr is the IP address of a name server on your network.
dns_domain is the Domain Name System (DNS) domain name. If you use this optional parameter, you do not need a fully
qualified domain name in the netboot server URL; you need only the server's host name.
Note: Other parameters might be necessary for your interface. For details, use the
help ifconfig command at the LOADER prompt.

3. At the LOADER prompt, enter the following command:


netboot http://path_to_web-accessible_directory/netboot/kernel

4. Select the Install new software first option from the displayed menu.
This menu option downloads and installs the new Data ONTAP image to the boot device. If you are prompted to continue
the procedure, enter y.
5. Enter y when prompted regarding non-disruptive upgrade or replacement of the software.
6. Enter the path to the image.tgz file when you see the following prompt:
What is the URL for the package?
http://path_to_web-accessible_directory/image.tgz

7. Enter n to skip the backup recovery when you see the following prompt:
Example
****************************************************************
*             Restore Backup Configuration                     *
* This procedure only applies to storage controllers that      *
* are configured as an HA pair.                                *
*                                                              *
* Choose Yes to restore the "varfs" backup configuration       *
* from the SSH server. Refer to the Boot Device Replacement    *
* guide for more details.                                      *
*                                                              *
* Choose No to skip the backup recovery and return to the      *
* boot menu.                                                   *
****************************************************************
Do you want to restore the backup configuration now? {y|n} n

8. Reboot by entering y when you see the following prompt:


The node must be rebooted to start using the newly installed software. Do you want to
reboot now? {y|n} y

9. If necessary, select the option to Clean configuration and initialize all disks after the node has booted.
Because you are configuring a new controller and the new controller's disks are empty, you can respond y when the system
warns you that this will erase all disks.
Note: The amount of time needed to initialize disks will depend on the size of your disks and configuration.

10. Your next step depends on the version of Data ONTAP you are running:

If you are running Data ONTAP 8.2.x or earlier:
The Cluster Setup wizard starts after the disks are initialized.
a. Enter the node management information, as prompted by the wizard.
b. Go to the next step.

If you are running Data ONTAP 8.3 or later:
The Node Setup wizard starts after the disks are initialized. Complete the following substeps:
a. Enter the node management LIF information on the console as shown in the following example:
Welcome to node setup.
You can enter the following commands at any time:
"help" or "?" - if you want to have a question clarified,
"back" - if you want to change previously answered questions, and
"exit" or "quit" - if you want to quit the cluster setup wizard.
Any changes you made before quitting will be saved.
To accept a default or omit a question, do not enter a value.

This system will send event messages and weekly reports to NetApp
Technical Support. To disable this feature, enter "autosupport
modify -support disable" within 24 hours. Enabling AutoSupport can
significantly speed problem determination and resolution should
a problem occur on your system.
For further information on AutoSupport, please see:
http://support.netapp.com/autosupport/
Type yes to confirm and continue {yes}: yes

Enter the node management interface port [e0M]: e0c
Enter the node management interface IP address: 10.98.230.86
Enter the node management interface netmask: 255.255.240.0
Enter the node management interface default gateway: 10.98.224.1
A node management interface on port e0c with IP address
10.98.230.86 has been created.

This node has its management address assigned and is
ready for cluster setup.
To complete cluster setup after all nodes are ready,
download and run the System Setup utility from the
NetApp Support Site and use it to discover the configured
nodes.
For System Setup, this node's management address is:
10.98.230.86
b. Manually enter the admin login ID when prompted to do so.
c. Manually start the Cluster Setup wizard by entering the following command at the prompt:
cluster setup

11. With the Cluster Setup wizard running, join the node to the cluster:
join
Example
Welcome to the cluster setup wizard.
You can enter the following commands at any time:
"help" or "?" - if you want to have a question clarified,
"back" - if you want to change previously answered questions, and
"exit" or "quit" - if you want to quit the cluster setup wizard.
Any changes you made before quitting will be saved.
You can return to cluster setup at any time by typing "cluster setup".
To accept a default or omit a question, do not enter a value.
Do you want to create a new cluster or join an existing cluster? {create, join}: join

12. Respond yes when prompted to set storage failover to HA mode.


Example
Non-HA mode, Reboot node to activate HA
Warning: Ensure that the HA partner has started disk initialization before
rebooting this node to enable HA.
Do you want to reboot now to set storage failover (SFO) to HA mode?
{yes, no} [yes]: yes
Rebooting now

After the node reboots, the Cluster Setup wizard displays Welcome to node setup and prompts you to complete the node
setup.
13. Log in to the node, enter cluster setup, and then enter join when prompted to join the cluster.
Example
Do you want to create a new cluster or join an existing cluster? {create, join}: join

14. Respond to the remaining prompts as appropriate for your site.


See the Clustered Data ONTAP Software Setup Guide for your version of Data ONTAP for additional details.
15. If the system is in a two-node switchless cluster configuration, use the network interface create command to create
cluster LIFs on the cluster ports of the existing node.
Example

The following is an example command for creating a cluster LIF on one of the node's cluster ports. The -auto parameter
configures the LIF to use a link-local IP address:
cluster1::> network interface create -vserver Cluster -lif clus1 -role cluster -home-node
node0 -home-port e1a -auto true
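As a sketch (the LIF name, port, and link-local address shown are hypothetical and will differ on your system), you can then confirm that the cluster LIF is up by using the network interface show command:

cluster1::> network interface show -role cluster
            Logical    Status     Network          Current  Current Is
Vserver     Interface  Admin/Oper Address/Mask     Node     Port    Home
----------- ---------- ---------- ---------------- -------- ------- ----
Cluster     clus1      up/up      169.254.20.41/16 node0    e1a     true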

16. After setup is complete, verify that the node is healthy and eligible to participate in the cluster:
cluster show
Example

The following example shows a cluster after the second node (cluster1-02) has been joined to it:

cluster1::> cluster show
Node                  Health  Eligibility
--------------------- ------- -----------
cluster1-01           true    true
cluster1-02           true    true

You can access the Cluster Setup wizard to change any of the values you entered for the admin Storage Virtual Machine
(SVM) or node SVM by using the cluster setup command.

Installing licenses for the new controller module


You must add licenses for the new controller module for any Data ONTAP services that require standard (node-locked) licenses.
For features with standard licenses, each node in the cluster must have its own key for the feature.
About this task

For detailed information about licensing, see the knowledgebase article 3013749: Data ONTAP 8.2 Licensing Overview and
References on the NetApp Support Site and the Clustered Data ONTAP System Administration Guide for Cluster
Administrators.
Steps

1. If necessary, obtain license keys for the new node on the NetApp Support Site in the My Support section under Software
licenses.
If the site does not have the license keys you need, contact your sales or support representative.
2. Issue the following command to install each license key:
system license add -license-code license_key

The license_key is 28 characters in length.


Repeat this step for each required standard (node-locked) license.
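Example
The following sketch is illustrative only; the key shown is a placeholder, not a valid 28-character license key. After adding the keys, you can confirm them with the system license show command:

cluster1::> system license add -license-code ABCDEFGHIJKLMNOPQRSTUVWXYZAB
cluster1::> system license show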

Verifying storage failover on both controller modules and setting cluster HA


You must ensure that storage failover is enabled on both controller modules. If you have a two-node cluster, you must separately
enable cluster HA.
Steps

1. Verify that HA mode is enabled by entering the following command:


storage failover show
Example

The output should be similar to the following:

                              Takeover
Node           Partner        Possible State Description
-------------- -------------- -------- -------------------------
old-ctlr       new-ctlr       true     Connected to new-ctlr
new-ctlr       old-ctlr       true     Connected to old-ctlr
2 entries were displayed.

2. If storage failover is not enabled, enable it and verify that it is enabled:


a. Enter the following command:
storage failover modify -enabled true -node existing-node-name

The single command enables storage failover on both controller modules.


b. Verify that storage failover is enabled by entering the following command:
storage failover show
Example

The output should be similar to the following:

                              Takeover
Node           Partner        Possible State Description
-------------- -------------- -------- -------------------------
old-ctlr       new-ctlr       true     Connected to new-ctlr
new-ctlr       old-ctlr       true     Connected to old-ctlr
2 entries were displayed.

3. If the cluster has only two nodes, enter the following command to enable cluster HA:
cluster ha modify -configured true

Cluster high availability (HA) must be configured in a cluster if it contains only two nodes and it differs from the HA
provided by storage failover. You can skip this step if the cluster has more than two nodes.
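Example
As a sketch (output abbreviated), you can confirm the setting afterward with the cluster ha show command:

cluster1::> cluster ha modify -configured true
cluster1::> cluster ha show
High Availability Configured: true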

Installing the firmware after adding a second controller module


After adding the controller module, you must install the latest firmware on the new controller module to ensure that the
controller module and remote management device function properly with Data ONTAP.
Steps

1. Log in to the NetApp Support Site, select the most current version of firmware for your system from those listed at
support.netapp.com/NOW/cgi-bin/fw, and then follow the instructions for downloading and installing the new firmware.
2. If your system uses an RLM, select the most current version of firmware for your RLM from those listed at
support.netapp.com/NOW/download/tools/rlm_fw, and then follow the instructions for downloading and installing the new
firmware.

Setting up Storage Encryption on the new controller module


If the existing system used Storage Encryption, you must configure the new controller module for Storage Encryption, including
installing and setting up the key managers, certificates, and servers.
About this task

This procedure includes steps that are performed on both the existing controller module and the new controller module. Be sure
to enter the command on the correct system.
You must enter the commands in the steps below in the nodeshell. For more information about the nodeshell, see the Clustered
Data ONTAP System Administration Guide for Cluster Administrators.
Steps

1. On the existing controller module, enter the following commands to verify that the key server is still available:

key_manager status
key_manager query
Example

The following command checks the status of all key management servers linked to the storage system:

storage-system> key_manager status
Key server           Status
172.18.99.175        Server is responding

The following command checks the status of all key management servers linked to the storage system and displays
additional information:

storage-system> key_manager query
Key server 172.18.99.175 is responding.
Key server 172.18.99.175 reports 4 keys.

Key tag              Key ID
--------             ------
storage-system       080CF0C80...
storage-system       080CF0C80...
storage-system       080CF0C80...
storage-system       080CF0C80...

2. On the new controller module, complete the following steps to install the same SSL certificates that are on the existing
controller module:
a. Copy the certificate files to a temporary location on the storage system.
b. Install the public certificate of the storage system by entering the following command at the storage system prompt:
keymgr install cert /path/client.pem

c. Install the private certificate of the storage system by entering the following command at the storage system prompt:
keymgr install cert /path/client_private.pem

d. Install the public certificate of the key management server by entering the following command at the storage system
prompt:
keymgr install cert /path/key_management_server_ipaddress_CA.pem

e. If you are linking multiple key management servers to the storage system, repeat the preceding steps for each public
certificate of each key management server.
3. On the new controller module, run the Storage Encryption setup wizard to set up and install the key servers.
You must install the same key servers that are installed on the existing controller module.
a. Enter the following command at the storage system prompt:
key_manager setup

b. Complete the steps in the wizard to configure Storage Encryption.


Example

The following example shows how to configure Storage Encryption:

storage-system*> key_manager setup
Found client certificate file client.pem.
Registration successful for client.pem.
Found client private key file client_private.pem.
Is this file protected by a passphrase? [no]:
Registration successful for client_private.pem.
Enter the IP address for a key server, 'q' to quit: 172.22.192.192
Enter the TCP port number for kmip server [6001] :
Enter the IP address for a key server, 'q' to quit: q

You will now be prompted to enter a key tag name. The
key tag name is used to identify all keys belonging to this
Data ONTAP system. The default key tag name is based on the
system's hostname.

Would you like to use <storage-system> as the default key tag name? [yes]:

Registering 1 key servers...
Found client CA certificate file 172.22.192.192_CA.pem.
Registration successful for 172.22.192.192_CA.pem.
Registration complete.

You will now be prompted for a subset of your network configuration
setup. These parameters will define a pre-boot network environment
allowing secure connections to the registered key server(s).

Enter network interface: e0a
Enter IP address: 172.16.132.165
Enter netmask: 255.255.252.0
Enter gateway: 172.16.132.1

Do you wish to enter or generate a passphrase for the system's
encrypting drives at this time? [yes]: yes
Would you like the system to autogenerate a passphrase? [yes]: yes

Key ID: 080CDCB20000000001000000000000003FE505B0C5E3E76061EE48E02A29822C

Make sure that you keep a copy of your passphrase, key ID, and key tag
name in a secure location in case it is ever needed for recovery purposes.

Should the system lock all encrypting drives at this time? yes
Completed rekey on 4 disks: 4 successes, 0 failures, including 0 unknown key and 0
authentication failures.
Completed lock on 4 disks: 4 successes, 0 failures, including 0 unknown key and 0
authentication failures.

4. On the existing controller module, enter the applicable command to restore authentication keys either from all linked key
management servers or from a specific one:

key_manager restore -all

key_manager restore -key_server key_server_ip_address

5. On the existing controller module, rekey all of the disks by entering the following command at the prompt:
key_manager rekey -keytag key_tag
key_tag is the key tag name specified in the setup wizard in step 3.
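Example
For example, if the key tag name specified in the setup wizard was storage-system (a hypothetical name; use the key tag name from your own setup), the restore and rekey commands would be:

storage-system> key_manager restore -all
storage-system> key_manager rekey -keytag storage-system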


Verifying the configuration with the Config Advisor


The Config Advisor utility verifies that the controller modules are properly configured for failover. This utility checks licenses,
network configurations, options, and so on, and provides output that shows when error conditions occur.
Steps

1. Go to the Config Advisor page on the NetApp Support Site at support.netapp.com/NOW/download/tools/config_advisor/.


2. Use the links and information on the page to download and run the tool.

Copyright information
Copyright © 1994–2015 NetApp, Inc. All rights reserved. Printed in the U.S.
No part of this document covered by copyright may be reproduced in any form or by any means (graphic, electronic, or
mechanical, including photocopying, recording, taping, or storage in an electronic retrieval system) without prior written
permission of the copyright owner.
Software derived from copyrighted NetApp material is subject to the following license and disclaimer:
THIS SOFTWARE IS PROVIDED BY NETAPP "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE, WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL NETAPP BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
NetApp reserves the right to change any products described herein at any time, and without notice. NetApp assumes no
responsibility or liability arising from the use of products described herein, except as expressly agreed to in writing by NetApp.
The use or purchase of this product does not convey a license under any patent rights, trademark rights, or any other intellectual
property rights of NetApp.
The product described in this manual may be protected by one or more U.S. patents, foreign patents, or pending applications.
RESTRICTED RIGHTS LEGEND: Use, duplication, or disclosure by the government is subject to restrictions as set forth in
subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software clause at DFARS 252.277-7103 (October 1988)
and FAR 52-227-19 (June 1987).

Trademark information
NetApp, the NetApp logo, Go Further, Faster, ASUP, AutoSupport, Campaign Express, Cloud ONTAP, clustered Data ONTAP,
Customer Fitness, Data ONTAP, DataMotion, Fitness, Flash Accel, Flash Cache, Flash Pool, FlashRay, FlexArray, FlexCache,
FlexClone, FlexPod, FlexScale, FlexShare, FlexVol, FPolicy, GetSuccessful, LockVault, Manage ONTAP, Mars, MetroCluster,
MultiStore, NetApp Insight, OnCommand, ONTAP, ONTAPI, RAID DP, SANtricity, SecureShare, Simplicity, Simulate
ONTAP, Snap Creator, SnapCopy, SnapDrive, SnapIntegrator, SnapLock, SnapManager, SnapMirror, SnapMover, SnapProtect,
SnapRestore, Snapshot, SnapValidator, SnapVault, StorageGRID, Tech OnTap, Unbound Cloud, and WAFL are trademarks or
registered trademarks of NetApp, Inc., in the United States, and/or other countries. A current list of NetApp trademarks is
available on the web at http://www.netapp.com/us/legal/netapptmlist.aspx.
Cisco and the Cisco logo are trademarks of Cisco in the U.S. and other countries. All other brands or products are trademarks or
registered trademarks of their respective holders and should be treated as such.


How to send comments about documentation and receive update notification
You can help us to improve the quality of our documentation by sending us your feedback. You can receive automatic
notification when production-level (GA/FCS) documentation is initially released or important changes are made to existing
production-level documents.
If you have suggestions for improving this document, send us your comments by email to [email protected]. To help
us direct your comments to the correct division, include in the subject line the product name, version, and operating system.
If you want to be notified automatically when production-level documentation is released or important changes are made to
existing production-level documents, follow Twitter account @NetAppDoc.
You can also contact us in the following ways:
NetApp, Inc., 495 East Java Drive, Sunnyvale, CA 94089 U.S.
Telephone: +1 (408) 822-6000
Fax: +1 (408) 822-4501
Support telephone: +1 (888) 463-8277