Clustered Data ONTAP 8.2 Network Management Guide
NetApp, Inc.
495 East Java Drive
Sunnyvale, CA 94089
U.S.
Contents
Understanding the network configuration
Networking components of a cluster
Network cabling guidelines
Network configuration during setup (cluster administrators only)
Network configuration after setup
Creating a DNS load balancing zone
Adding or removing a LIF from a load balancing zone
How automatic LIF rebalancing works
Enabling or disabling automatic LIF rebalancing
Combining load balancing methods in an SVM accessible in a multiprotocol environment
Ports
Physical ports: Network interface cards (NICs) and HBAs provide physical (Ethernet and
Fibre Channel) connections to the physical networks (management and data networks).
Virtual ports: VLANs and interface groups (ifgrps) constitute the virtual ports. While interface
groups treat several physical ports as a single port, VLANs subdivide a physical port into
multiple separate ports.
Logical interfaces
A logical interface (LIF) is an IP address that is associated with attributes such as failover rules
and firewall rules. A LIF communicates over the network through the port (physical or virtual) to
which it is currently bound.
Different types of LIFs in a cluster are data LIFs, cluster-management LIFs, node-management
LIFs, intercluster LIFs, and cluster LIFs. The ownership of the LIFs depends on the Storage
Virtual Machine (SVM) where the LIF resides. Data LIFs are owned by data SVMs, node-management
and cluster LIFs are owned by node SVMs, and cluster-management LIFs are owned by the admin SVM.
Routing groups
A routing group is a routing table. Each LIF is associated with a routing group and uses only the
routes of that group. Multiple LIFs can share a routing group. Each routing group needs a
minimum of one route to access clients outside the defined subnet.
DNS zones
A DNS zone can be specified during LIF creation, providing a name for the LIF to be exported
through the cluster's DNS server. Multiple LIFs can share the same name, allowing the DNS load
balancing feature to distribute IP addresses for the name according to load. SVMs can have
multiple DNS zones.
The following diagram illustrates how the different networking components are associated in a 4-node cluster:
[Figure: a four-node cluster showing data LIFs, cluster-management LIFs, node-management LIFs, and cluster LIFs bound to physical ports, VLANs, and interface groups on each node, with routing groups and DNS zones owned by the data SVMs, and with the data, cluster, and management networks shown separately.]
For more information about the basic cluster concepts and SVMs, see the Clustered Data ONTAP
System Administration Guide for Cluster Administrators.
[Figure: the management, data, and SAN (FC & FCoE) networks connecting to the cluster; the data network serves multiprotocol NAS clients (NFS, CIFS, iSCSI, FCoE), and System Manager connects over the management network.]
Note: Apart from these networks, there is a separate network for ACP (Alternate Control Path)
that enables Data ONTAP to manage and control a SAS disk shelf storage subsystem. ACP uses a
separate network (alternate path) from the data path. For more information about ACP, see the
Clustered Data ONTAP Physical Storage Management Guide.
Each node should be connected to three distinct networks: one for management, one for data
access, and one for intracluster communication. The management and data networks can be
logically separated.
For setting up the cluster interconnect and the management network by using the supported Cisco
switches, see the Clustered Data ONTAP Switch Setup Guide for Cisco Switches.
For setting up the cluster interconnect and the management network by using the NetApp
switches, see the CN1601 and CN1610 Switch Setup and Configuration Guide.
A cluster can be created without data network connections, but must include a cluster
interconnect connection.
There should always be two cluster connections to each node, but nodes on FAS22xx systems
may be configured with a single 10-GbE cluster port.
You can have more than one data network connection to each node for improving the client (data)
traffic flow.
If you want data LIFs and management LIFs configured for IPv6, you can modify the LIFs after the cluster has
been configured and is running. It is not possible to configure an IPv6-only cluster, because cluster
LIFs and intercluster LIFs only support IPv4.
Network configuration during cluster setup
You can perform the initial setup of the admin SVM by using the Cluster Setup wizard. During the
setup, you can either select the default values or customize your setup. The default values for setup
are generated by using the zero configuration networking mechanism.
By using the setup wizards, you can configure the following:
Cluster network
Admin SVM
Storage resources
Data LIFs
Naming services
For more information about the setup process using the Cluster Setup and Vserver Setup wizards, see
the Clustered Data ONTAP Software Setup Guide.
Related concepts
If you are an SVM administrator, you can perform the following tasks:
View LIFs
View routing groups
Create, modify, and manage DNS hosts table entries
Interface group
A port aggregate containing two or more physical ports that act as a single trunk port.
An interface group can be single-mode, multimode, or dynamic multimode.
VLAN
A virtual port that receives and sends VLAN-tagged (IEEE 802.1Q standard) traffic.
VLAN port characteristics include the VLAN ID for the port. The underlying
physical port or interface group ports are considered VLAN trunk ports, and the
connected switch ports must be configured to trunk the VLAN IDs.
Note: The underlying physical port or interface group ports for a VLAN port can
Node management ports
The ports used by administrators to connect to and manage a node. These ports
can be VLAN-tagged virtual ports where the underlying physical port is used for
other traffic. The default port for node management differs depending on
hardware platform.
Some platforms have a dedicated management port (e0M). The role of such a
port cannot be changed, and these ports cannot be used for data traffic.
Cluster ports
The ports used for intracluster traffic only. By default, each node has two cluster
ports on 10-GbE ports enabled for jumbo frames.
Note: In some cases, nodes on FAS22xx systems are configured with a single
10-GbE cluster port.
Data ports
The ports used for data traffic. These ports are accessed by NFS, CIFS, FC, and
iSCSI clients for data requests. Each node has a minimum of one data port.
You can create VLANs and interface groups on data ports. VLANs and interface
groups have the data role by default, and the port role cannot be modified.
Intercluster ports
The ports used for cross-cluster communication.
Note: For single-node clusters, including Data ONTAP Edge systems, there are no cluster ports or
intercluster ports.
Related concepts
A physically secure, dedicated network to connect the cluster ports on all nodes in the cluster
Note: The cluster ports on the nodes should be configured on a high-speed, high-bandwidth
network, and the MTU should be set to 9000 bytes.
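For example, the MTU of a cluster port might be set with the network port modify command (a sketch only; the node and port names are hypothetical):
cluster1::> network port modify -node cluster1-01 -port e0a -mtu 9000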
For each hardware platform, the default role for each port is defined as follows:
Platform    Cluster ports    Node management port    Data ports
FAS2220     e0a, e0b         e0M
FAS2240     e1a, e1b         e0M
32xx        e1a, e2a         e0M
62xx        e0c, e0e         e0M
FAS80xx     e0a, e0c         e0M
FDvM200     NA               e0a
If your hardware platform is not listed in this table, refer to Hardware Universe at hwu.netapp.com
for information on default port roles for all hardware platforms.
[Figure: interface group a0a containing ports e0a and e1a, each connected to a different switch; when e0a fails, traffic moves to e1a.]
In a static multimode interface group, all interfaces in the interface group are active and share a
single MAC address.
This logical aggregation of interfaces allows for multiple individual connections to be distributed
among the interfaces in the interface group. Each connection or session uses one interface within
the interface group and has a reduced likelihood of sharing that single interface with other
connections. This effectively allows for greater aggregate throughput, although each individual
connection is limited to the maximum throughput available in a single port.
When you use the round-robin load balancing scheme, all sessions are distributed across available
links on a packet-by-packet basis, and are not bound to a particular interface from the interface
group.
For more information about this scheme, see the Round-robin load balancing section.
Static multimode interface groups can recover from a failure of up to "n-1" interfaces, where n is
the total number of interfaces that form the interface group.
If a port fails or is unplugged in a static multimode interface group, the traffic that was traversing
that failed link is automatically redistributed to one of the remaining interfaces. If the failed or
disconnected port is restored to service, traffic is automatically redistributed among all active
interfaces, including the newly restored interface.
Static multimode interface groups can detect a loss of link, but they cannot detect a loss of
connectivity to the client or switch misconfigurations that might impact connectivity and
performance.
A static multimode interface group requires a switch that supports link aggregation over multiple
switch ports.
The switch is configured so that all ports to which links of an interface group are connected are
part of a single logical port. Some switches might not support link aggregation of ports
configured for jumbo frames. For more information, see your switch vendor's documentation.
Several load balancing options are available to distribute traffic among the interfaces of a static
multimode interface group.
The following figure is an example of a static multimode interface group. Interfaces e0a, e1a, e2a,
and e3a are part of the a1a multimode interface group. All four interfaces in the a1a multimode
interface group are active.
[Figure: static multimode interface group a1a with interfaces e0a, e1a, e2a, and e3a, all active, connected to a switch.]
Several technologies exist that enable traffic in a single aggregated link to be distributed across
multiple physical switches. The technologies used to enable this capability vary among networking
products. Static multimode interface groups in Data ONTAP conform to the IEEE 802.3 standards. If
a particular multiple switch link aggregation technology is said to interoperate with or conform to the
IEEE 802.3 standards, it should operate with Data ONTAP.
The IEEE 802.3 standard states that the transmitting device in an aggregated link determines the
physical interface for transmission. Therefore, Data ONTAP is only responsible for distributing
outbound traffic, and cannot control how inbound frames arrive. If you want to manage or control the
transmission of inbound traffic on an aggregated link, that transmission must be modified on the
directly connected network device.
Dynamic multimode interface group
Dynamic multimode interface groups implement Link Aggregation Control Protocol (LACP) to
communicate group membership to the directly attached switch. LACP enables you to detect the loss
of link status and the inability of the node to communicate with the direct-attached switch port.
Dynamic multimode interface group implementation in Data ONTAP complies with IEEE 802.3ad
(IEEE 802.1AX). Data ONTAP does not support Port Aggregation Protocol (PAgP), which is a
proprietary link aggregation protocol from Cisco.
A dynamic multimode interface group requires a switch that supports LACP.
Data ONTAP implements LACP in nonconfigurable active mode that works well with switches that
are configured in either active or passive mode. Data ONTAP implements the long and short LACP timers.
Dynamic multimode interface groups should be configured to use the port-based, IP-based,
MAC-based, or round-robin load balancing methods.
In a dynamic multimode interface group, all interfaces must be active and share a single MAC
address.
The following figure is an example of a dynamic multimode interface group. Interfaces e0a, e1a, e2a,
and e3a are part of the a1a multimode interface group. All four interfaces in the a1a dynamic
multimode interface group are active.
[Figure: dynamic multimode interface group a1a with interfaces e0a, e1a, e2a, and e3a, all active, connected to a switch.]
IP address load balancing works in the same way for both IPv4 and IPv6 addresses.
Round-robin load balancing
You can use round-robin for load balancing multimode interface groups. You should use the round-robin
option for load balancing a single connection's traffic across multiple links to increase single-connection
throughput. However, this method might cause out-of-order packet delivery.
If the remote TCP endpoints do not handle TCP reassembly correctly or lack enough memory to
store out-of-order packets, they might be forced to drop packets. Therefore, this might result in
unnecessary retransmissions from the storage controller.
Port-based load balancing
You can equalize traffic on a multimode interface group based on the transport layer (TCP/UDP)
ports by using the port-based load balancing method.
The port-based load balancing method uses a fast hashing algorithm on the source and destination IP
addresses along with the transport layer port number.
All the ports in an interface group must be physically located on the same storage system, but do
not need to be on the same network adapter in the storage system.
There can be a maximum of 16 physical interfaces in an interface group.
There can be a maximum of 4 physical interfaces if the interface group is made up of 10-GbE
ports.
A port that is already a member of an interface group cannot be added to another interface group.
All ports in an interface group must have the same port role (data).
Cluster ports and node management ports cannot be included in an interface group.
A port to which a LIF is already bound cannot be added to an interface group.
You cannot add or remove an interface group if there is a LIF bound to the interface group.
An interface group can be moved to the administrative up and down settings, but the
administrative settings of the underlying physical ports cannot be changed.
Interface groups cannot be created over VLANs or other interface groups.
In static multimode and dynamic multimode (LACP) interface groups, the network ports used
must have identical port characteristics. Some switches allow media types to be mixed in
interface groups. However, the speed, duplex, and flow control should be identical.
The network ports should belong to network adapters of the same model. Support for hardware
features such as TSO, LRO, and checksum offloading varies for different models of network
adapters. If all ports do not have identical support for these hardware features, the feature might
be disabled for the interface group.
Note: Using ports with different physical characteristics and settings can have a negative
impact on the performance of the interface group.
In a single-mode interface group, you can select the active port or designate a port as nonfavored
by executing the ifgrp command from the nodeshell.
While creating a multimode interface group, you can specify any of the following load balancing
methods:
mac: Network traffic is distributed on the basis of MAC addresses.
ip: Network traffic is distributed on the basis of IP addresses.
sequential: Network traffic is distributed as it is received.
port: Network traffic is distributed on the basis of the transport layer (TCP/UDP) ports.
If a multimode interface group is configured and IPv6 is enabled on the storage system, the
switch must also have the proper configuration. Improper configuration might result in the
duplicate address detection mechanism for IPv6 incorrectly detecting a duplicate address and
displaying error messages.
Step
1. Use the network port ifgrp create command to create an interface group.
Interface groups must be named using the syntax a<number><letter>. For example, a0a, a0b, a1c,
and a2a are valid interface group names.
For more information about this command, see the man pages.
The following example shows how to create an interface group named a0a with a distribution
function of ip and a mode of multimode:
cluster1::> network port ifgrp create -node cluster1-01 -ifgrp a0a -distr-func ip -mode multimode
To remove a port from an interface group, it must not be hosting any LIFs.
About this task
1. Depending on whether you want to add or remove network ports from an interface group, enter
the following command:
If you want to...
For more information about these commands, see the man pages.
Example
The following example shows how to add port e0c to an interface group named a0a:
cluster1::> network port ifgrp add-port -node cluster1-01 -ifgrp a0a -port e0c
The following example shows how to remove port e0d from an interface group named a0a:
cluster1::> network port ifgrp remove-port -node cluster1-01 -ifgrp
a0a -port e0d
Step
1. Use the network port ifgrp delete command to delete an interface group.
For more information about this command, see the man pages.
Example
The following example shows how to delete an interface group named a0b:
cluster1::> network port ifgrp delete -node cluster1-01 -ifgrp a0b
Related tasks
For example, in this figure, if a member of VLAN 10 on Floor 1 sends a frame for a member of
VLAN 10 on Floor 2, Switch 1 inspects the frame header for the VLAN tag (to determine the
VLAN) and the destination MAC address. The destination MAC address is not known to Switch 1.
Therefore, the switch forwards the frame to all other ports that belong to VLAN 10, that is, port 4 of
Switch 2 and Switch 3. Similarly, Switch 2 and Switch 3 inspect the frame header. If the destination
MAC address on VLAN 10 is known to either switch, that switch forwards the frame to the
destination. The end-station on Floor 2 then receives the frame.
Switch ports
End-station MAC addresses
Protocol
In this figure, VLAN 10 (Engineering), VLAN 20 (Marketing), and VLAN 30 (Finance) span three
floors of a building. If a member of VLAN 10 on Floor 1 wants to communicate with a member of
VLAN 10 on Floor 3, the communication occurs without going through the router, and packet
flooding is limited to port 1 of Switch 2 and Switch 3 even if the destination MAC address to Switch
2 and Switch 3 is not known.
Advantages of VLANs
VLANs provide a number of advantages, such as ease of administration, confinement of broadcast
domains, reduced broadcast traffic, and enforcement of security policies.
VLANs provide the following advantages:
VLANs enable logical grouping of end-stations that are physically dispersed on a network.
When users on a VLAN move to a new physical location but continue to perform the same job
function, the end-stations of those users do not need to be reconfigured. Similarly, if users change
their job functions, they need not physically move: changing the VLAN membership of the end-stations
to that of the new team makes the users' end-stations local to the resources of the new team.
VLANs reduce the need to have routers deployed on a network to contain broadcast traffic.
Flooding of a packet is limited to the switch ports that belong to a VLAN.
Confinement of broadcast domains on a network significantly reduces traffic.
By confining the broadcast domains, end-stations on a VLAN are prevented from listening to or
receiving broadcasts not intended for them. Moreover, if a router is not connected between the
VLANs, the end-stations of a VLAN cannot communicate with the end-stations of the other
VLANs.
You cannot bring down the base interface that is configured to receive tagged and untagged traffic.
You must bring down all VLANs on the base interface before you bring down the interface.
However, you can delete the IP address of the base interface.
Creating a VLAN
You can create a VLAN for maintaining separate broadcast domains within the same network
domain by using the network port vlan create command. You cannot create a VLAN from
an existing VLAN.
Before you begin
You must contact your network administrator to check if the following requirements are met:
The switches deployed in the network either comply with IEEE 802.1Q standards or have a
vendor-specific implementation of VLANs.
For supporting multiple VLANs, an end-station is statically configured to belong to one or more
VLANs.
When you configure a VLAN over a port for the first time, the port might go down resulting in a
temporary disconnection of the network. However, the subsequent VLAN additions do not affect
the port.
Step
1. Use the network port vlan create command to create a VLAN.
The following example shows how to create a VLAN e1c-80 attached to network port e1c on the
node cluster1-01:
cluster1::> network port vlan create -node cluster1-01 -vlan-name e1c-80
Deleting a VLAN
You might have to delete a VLAN before removing a NIC from its slot. When you delete a VLAN, it
is automatically removed from all failover rules and groups that use it.
Before you begin
Before removing a NIC from its slot, you have to delete all the physical ports and their associated
VLANs.
Step
1. Use the network port vlan delete command to delete a VLAN.
The following example shows how to delete VLAN e1c-80 from network port e1c on the node
cluster1-01:
cluster1::> network port vlan delete -node cluster1-01 -vlan-name e1c-80
The administrative settings of either the 10-GbE or the 1-GbE network interfaces.
The values that you can set for duplex mode and port speed are referred to as administrative
settings. Depending on network limitations, the administrative settings can differ from the
operational settings (that is, the duplex mode and speed that the port actually uses).
The administrative settings of the underlying physical ports in an interface group.
Note: Use the -up-admin parameter (available at advanced privilege level) to modify the
Step
1. Use the network port modify command to modify the attributes of a network port.
Note: You should set the flow control of cluster ports to none. By default, the flow control is
set to full.
Example
The following example shows how to disable the flow control on port e0b by setting it to none:
cluster1::> network port modify -node cluster1-01 -port e0b -flowcontrol-admin none
All the LIFs hosted on the NIC ports must have been migrated or deleted.
None of the NIC's ports can be the home port of any LIF.
You must have advanced privileges to delete the ports from a NIC.
Steps
1. Use the network port delete command to delete the ports from the NIC.
For more information about removing a NIC, see the Moving or replacing a NIC in Data ONTAP
8.1 operating in Cluster-Mode document.
2. Use the network port show command to verify that the ports have been deleted.
3. Repeat step 1, if the output of the network port show command still shows the deleted port.
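For example, deleting one such port might look like the following (a sketch at the advanced privilege level; the node and port names are hypothetical):
cluster1::*> network port delete -node cluster1-01 -port e4a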
Related information
Although most of the IPv6 features have been implemented in clustered Data ONTAP 8.2, you
should familiarize yourself with the unsupported features of IPv6 as well. You can enable IPv6 on
the cluster before configuring various networking components with IPv6 addresses.
For detailed explanations about various IPv6 address states, address auto-configuration, and the
neighbor discovery features of IPv6, see the relevant RFCs.
Related information
All the nodes in the cluster must be running clustered Data ONTAP 8.2.
About this task
1. Use the network options ipv6 modify command to enable IPv6 on the cluster.
2. Use the network options ipv6 show command to verify that IPv6 is enabled in the cluster.
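For example, the commands might look like the following (a sketch; command output is omitted):
cluster1::> network options ipv6 modify -enabled true
cluster1::> network options ipv6 show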
When you configure SAN protocols such as FC on a LIF, the LIF is associated with a WWPN.
For more information about configuring WWPN to LIFs while using the FC protocol, see the
Clustered Data ONTAP SAN Administration Guide.
The following figure illustrates the port hierarchy in a clustered Data ONTAP system:
[Figure: LIFs hosted on physical ports, on VLANs, and on interface groups, with VLANs optionally layered on interface groups.]
Node-management LIF
The LIF that provides a dedicated IP address for managing a particular node and
gets created at the time of creating or joining the cluster. These LIFs are used for
system maintenance, for example, when a node becomes inaccessible from the
cluster. Node-management LIFs can be configured on either node-management or
data ports.
The node-management LIF can fail over to other data or node-management ports
on the same node.
Cluster-management LIF
The LIF that provides a single management interface for the entire cluster.
Cluster-management LIFs can be configured on node-management or data ports.
The LIF can fail over to any node-management or data port in the cluster. It
cannot fail over to cluster or intercluster ports.
Cluster LIF
The LIF that is used for intracluster traffic. Cluster LIFs can be configured only
on cluster ports.
These interfaces can fail over between cluster ports on the same node, but they
cannot be migrated or failed over to a remote node. When a new node joins a
cluster, IP addresses are generated automatically. However, if you want to assign
IP addresses manually to the cluster LIFs, you must ensure that the new IP
addresses are in the same subnet range as the existing cluster LIFs.
Data LIF
The LIF that is associated with a Storage Virtual Machine (SVM) and is used for
communicating with clients. Data LIFs can be configured only on data ports.
You can have multiple data LIFs on a port. These interfaces can migrate or fail
over throughout the cluster. You can modify a data LIF to serve as an SVM
management LIF by modifying its firewall policy to mgmt.
For more information about SVM management LIFs, see the Clustered Data
ONTAP System Administration Guide for Cluster Administrators.
Sessions established to NIS, LDAP, Active Directory, WINS, and DNS servers
use data LIFs.
Intercluster LIF
The LIF that is used for cross-cluster communication, backup, and replication.
Intercluster LIFs can be configured on data ports or intercluster ports. You must
create an intercluster LIF on each node in the cluster before a cluster peering
relationship can be established.
These LIFs can fail over to data or intercluster ports on the same node, but they
cannot be migrated or failed over to another node in the cluster.
Related concepts
Characteristics of LIFs
LIFs with different roles have different characteristics. A LIF role determines the kind of traffic that
is supported over the interface, along with the failover rules that apply, the firewall restrictions that
are in place, the security, the load balancing, and the routing behavior for each LIF.
Compatibility with port roles and port types

Primary traffic types
Data LIF: NFS server, CIFS server, NIS client, Active Directory, LDAP, WINS, DNS client and server, iSCSI and FC server
Cluster LIF: Intracluster
Node-management LIF: SSH server, HTTPS server, NTP client, SNMP, AutoSupport client, DNS client, loading code updates
Cluster-management LIF: SSH server, HTTPS server
Intercluster LIF: Cross-cluster replication

Compatible with port roles
Data LIF: Data
Cluster LIF: Cluster
Node-management LIF: Node-management, data
Cluster-management LIF: Data
Intercluster LIF: Intercluster, data

Compatible with port types
Data LIF: All
Cluster LIF: No interface group or VLAN
Node-management LIF: All
Cluster-management LIF: All
Intercluster LIF: All
Notes
Data LIF: SAN LIFs cannot fail over. These LIFs also do not support load balancing.
Cluster LIF: Unauthenticated, unencrypted; essentially an internal Ethernet "bus" of the cluster. All network ports in the cluster role in a cluster should have the same physical characteristics (speed).
Node-management LIF: In new node-management LIFs, the default value of the use-failover-group parameter is disabled. The use-failover-group parameter can be set to either system-defined or enabled.
Intercluster LIF: Traffic flowing over intercluster LIFs is not encrypted.
Security

Require private IP subnet?
Data LIF: No
Cluster LIF: Yes
Node-management LIF: No
Cluster-management LIF: No
Intercluster LIF: No

Require secure network?
Data LIF: No
Cluster LIF: Yes
Node-management LIF: No
Cluster-management LIF: No
Intercluster LIF: Yes

Default firewall policy
Data LIF: Very restrictive
Cluster LIF: Completely open
Node-management LIF: Medium
Cluster-management LIF: Medium
Intercluster LIF: Very restrictive

Is firewall customizable?
Data LIF: Yes
Cluster LIF: No
Node-management LIF: Yes
Cluster-management LIF: Yes
Intercluster LIF: Yes
Failover

Default behavior
Data LIF: Includes all data ports on the home node as well as one alternate node
Cluster LIF: Must stay on the node and uses any available cluster port
Node-management LIF: Default is none; must stay on the same port on the node
Cluster-management LIF: Default is a failover group of all data ports in the entire cluster
Intercluster LIF: Must stay on the node and uses any available intercluster port

Is customizable?
Data LIF: Yes
Cluster LIF: No
Node-management LIF: Yes
Cluster-management LIF: Yes
Intercluster LIF: Yes

Routing

When is a default route needed?
Data LIF: When any of the primary traffic types require access to a different IP subnet
Cluster LIF: Never
Node-management LIF: When the administrator is connecting from another IP subnet
Cluster-management LIF: When the administrator is connecting from another IP subnet
Intercluster LIF: When other intercluster LIFs are on a different IP subnet

When is a static host route to a specific server needed?
Data LIF: To have one of the traffic types listed under node-management LIF go through a data LIF rather than a node-management LIF. This requires a corresponding firewall change.
Cluster LIF: Never
Node-management LIF: Rare
Cluster-management LIF: Rare
Intercluster LIF: When nodes of another cluster have their intercluster LIFs in different IP subnets
Automatic LIF rebalancing
Data LIF: Yes. If enabled, LIFs automatically migrate to other failover ports based on load, provided no CIFS or NFSv4 connections are on them.
Cluster LIF: Yes. Cluster network traffic is automatically distributed across cluster LIFs based on load.
Node-management LIF: No
Cluster-management LIF: No
Intercluster LIF: No

DNS: use as DNS server?
Data LIF: Yes
Cluster LIF: No
Node-management LIF: No
Cluster-management LIF: No
Intercluster LIF: No

DNS: export as zone?
Data LIF: Yes
Cluster LIF: No
Node-management LIF: No
Cluster-management LIF: No
Intercluster LIF: No
LIF limits
There are limits on each type of LIF that you should consider when planning your network. You
should also be aware of the effect of the number of LIFs in your cluster environment.
The maximum number of data LIFs that can be supported on a node is 262. You can create additional
cluster, cluster-management, and intercluster LIFs, but creating these LIFs requires a reduction in the
number of data LIFs.
There is no imposed limit on the number of LIFs supported by a physical port, with the exception of
Fibre Channel LIFs. The LIFs per node limits provides a practical limit to the number of LIFs per
port that can be configured.
Data LIFs
Minimum: 1 per SVM
Benefit of additional LIFs: Increased client-side resiliency and availability when configured across the NICs of the cluster; increased granularity for load balancing

Cluster LIFs
Minimum: 2 per node
Benefit of additional LIFs: Increased cluster-side bandwidth if configured on an additional NIC

Node-management LIFs
Benefit of additional LIFs: Negligible

Cluster-management LIF
Benefit of additional LIFs: Negligible

Intercluster LIFs
Minimum: 0 without cluster peering; 1 per node if cluster peering is enabled
Benefit of additional LIFs: Increased intercluster bandwidth if configured on an additional NIC
In data LIFs used for file services, the default data protocol options are NFS and CIFS.
In node-management LIFs, the default data protocol option is set to none and the firewall
policy option is automatically set to mgmt.
You can use such a LIF as a Storage Virtual Machine (SVM) management LIF. For more
information about using an SVM management LIF to delegate SVM management to SVM
administrators, see the Clustered Data ONTAP System Administration Guide for Cluster
Administrators.
In cluster LIFs, the default data protocol option is set to none and the firewall policy
option is automatically set to cluster.
You use FlexCache to enable caching to a 7-Mode volume that exists outside the cluster.
Caching within the cluster is enabled by default and does not require this parameter to be set. For
information about caching a FlexVol volume outside the cluster, see the Data ONTAP Storage
Management Guide for 7-Mode.
FC LIFs can be configured only on FC ports. iSCSI LIFs cannot coexist with any other protocols.
For more information about configuring the SAN protocols, see the Clustered Data ONTAP SAN
Administration Guide.
NAS and SAN protocols cannot coexist on the same LIF.
The firewall policy option associated with a LIF is defaulted to the role of the LIF except
for an SVM management LIF.
For example, the default firewall policy option of a data LIF is data. For more information
about firewall policies, see the Clustered Data ONTAP System Administration Guide for Cluster
Administrators.
Avoid configuring LIFs with addresses in the 192.168.1/24 and 192.168.2/24 subnets. Doing so
might cause the LIFs to conflict with the private iWARP interfaces and prevent the LIFs from
coming online after a node reboot or LIF migration.
Multicast addresses
Multicast addresses begin with FF.
Link-local addresses
Link-local addresses always begin with FE80. With the 64-bit interface identifier, the prefix
for link-local addresses is always FE80::/64
IPv4-compatible addresses
0:0:0:0:0:0:w.x.y.z or ::w.x.y.z (where w.x.y.z is the dotted decimal representation of a public
IPv4 address)
IPv4-mapped addresses
Creating a LIF
A LIF is an IP address associated with a physical port. If there is any component failure, a LIF can
fail over to or be migrated to a different physical port, thereby continuing to communicate with the
cluster.
Before you begin
The underlying physical network port must have been configured to the administrative up status.
You should have considered the guidelines for creating LIFs: Guidelines for creating LIFs on
page 39
If you want to create LIFs with IPv6 addresses, you should have considered the guidelines for
assigning IPv6 addresses: Guidelines for assigning IPv6 addresses for LIFs on page 40
You can create both IPv4 and IPv6 LIFs on the same network port.
You cannot assign both NAS and SAN protocols to the same LIF.
The supported protocols are CIFS, NFS, FlexCache, iSCSI, and FCP.
home-node is the node to which the LIF returns when the network interface revert
command is run on the LIF.
home-port is the port or interface group to which the LIF returns when the network interface
revert command is run on the LIF.
The data-protocol option must be specified when the LIF is created, and cannot be modified
later.
If you specify none as the value for the data-protocol option, the LIF does not support any
data protocol.
A cluster LIF should not be on the same subnet as a management LIF or a data LIF.
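As an illustration of these guidelines, a data LIF might be created with a command like the following (a sketch only; the SVM, node, port, and address values are hypothetical):
cluster1::> network interface create -vserver vs0 -lif datalif1 -role data -data-protocol nfs,cifs -home-node node-1 -home-port e0c -address 192.0.2.130 -netmask 255.255.255.0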
Steps
3. Use the network interface show command to verify that the LIF has been created successfully.
Example
Network            Current       Current Is
Address/Mask       Node          Port    Home
------------------ ------------- ------- ----
192.0.2.3/24       node-1        e1a     true
192.0.2.12/24      node-1        e0a     true
192.0.2.13/24      node-1        e0b     true
192.0.2.68/24      node-1        e1a     true
192.0.2.14/24      node-2        e0a     true
192.0.2.15/24      node-2        e0b     true
192.0.2.69/24      node-2        e1a     true
192.0.2.17/24      node-3        e0a     true
192.0.2.18/24      node-3        e0b     true
192.0.2.68/24      node-3        e1a     true
192.0.2.20/24      node-4        e0a     true
192.0.2.21/24      node-4        e0b     true
192.0.2.70/24      node-4        e1a     true
192.0.2.145/30     node-4        e1c     true
Example
The following example demonstrates data LIFs named datalif3 and datalif4 configured with IPv4
and IPv6 addresses respectively:
Network            Current       Current Is
Address/Mask       Node          Port    Home
------------------ ------------- ------- ----
192.0.2.3/24       node-1        e1a     true
192.0.2.12/24      node-1        e0a     true
192.0.2.13/24      node-1        e0b     true
192.0.2.68/24      node-1        e1a     true
192.0.2.14/24      node-2        e0a     true
192.0.2.15/24      node-2        e0b     true
192.0.2.69/24      node-2        e1a     true
192.0.2.17/24      node-3        e0a     true
192.0.2.18/24      node-3        e0b     true
192.0.2.68/24      node-3        e1a     true
192.0.2.20/24      node-4        e0a     true
192.0.2.21/24      node-4        e0b     true
192.0.2.70/24      node-4        e1a     true
192.0.2.145/30     node-4        e1c     true
192.0.2.146/30     node-3        e0c     true
2001::2/64         node-3        e0c     true
4. Use the network ping command to verify that the configured IPv4 addresses are reachable.
5. Use the ping6 command (available from the nodeshell) to verify that the IPv6 addresses are
reachable.
All the name mapping and host-name resolution services, such as DNS, NIS, LDAP, and Active
Directory, must be reachable from the data, cluster-management, and node-management LIFs of
the cluster.
Related concepts
Modifying a LIF
You can modify a LIF by changing its attributes, such as the home node or the current node,
administrative status, IP address, netmask, failover policy, and firewall policy. You can also
modify a data LIF to serve as an SVM management LIF.
To modify a data LIF with NAS protocols to also serve as an SVM management LIF, you must
modify the data LIF's firewall policy to mgmt, as shown in the example after this list.
You cannot modify the data protocols used by a LIF.
To modify the data protocols used by a LIF, you must delete and re-create the LIF.
You cannot modify either the home node or the current node of a node-management LIF.
Do not specify the home node when modifying the home port of a cluster LIF.
To modify the address family of a LIF from IPv4 to IPv6, you must do the following:
Use the colon notation for the IPv6 address.
Add a new value for the -netmask-length parameter.
You cannot modify the auto-configured link-local IPv6 addresses.
You cannot change the routing group of a LIF belonging to the IPv4 address family to a routing
group assigned to an IPv6 LIF.
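A sketch of the SVM management LIF change is shown below (the SVM and LIF names are hypothetical):
cluster1::> network interface modify -vserver vs1 -lif datalif2 -firewall-policy mgmt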
Steps
The following example shows how to modify a LIF datalif1 that is located on the SVM vs0. The
LIF's IP address is changed to 172.19.8.1 and its network mask is changed to 255.255.0.0.
cluster1::> network interface modify -vserver vs0 -lif
datalif1 -address 172.19.8.1 -netmask 255.255.0.0 -auto-revert
true
2. Use the network ping command to verify that the IPv4 addresses are reachable.
3. Use the ping6 command to verify that the IPv6 addresses are reachable.
The ping6 command is available from the node shell.
Migrating a LIF
You might have to migrate a LIF to a different port on the same node or a different node within the
cluster, if the port is either faulty or requires maintenance. Migrating a LIF is similar to LIF failover,
The destination node and ports must be operational and must be able to access the same network
as the source port.
Failover groups must have been set up for the LIFs.
You must migrate LIFs hosted on the ports belonging to a NIC to other ports in the cluster, before
removing the NIC from the node.
You must execute the command for migrating a cluster LIF from the node where the cluster LIF
is hosted.
You can migrate a node-management LIF to any data or node-management port on the home
node, even when the node is out of quorum.
For more information about quorum, see the Clustered Data ONTAP System Administration
Guide for Cluster Administrators.
Note: A node-management LIF cannot be migrated to a remote node.
You cannot migrate iSCSI LIFs from one node to another node.
To work around this problem, you must create an iSCSI LIF on the destination node. For
information about guidelines for creating an iSCSI LIF, see the Clustered Data ONTAP SAN
Administration Guide.
VMware VAAI copy offload operations fail when you migrate the source or the destination LIF.
For more information about VMware VAAI, see the Clustered Data ONTAP File Access and
Protocols Management Guide.
Step
1. Depending on whether you want to migrate a specific LIF or all the LIFs, perform the appropriate
action:
If you want to migrate...
A specific LIF
Example
The following example shows how to migrate a LIF named datalif1 on the SVM vs0 to the
port e0d on node0b:
cluster1::> network interface migrate -vserver vs0 -lif datalif1 -dest-node node0b -dest-port e0d
If you administratively bring the home port of a LIF to the up state before setting the automatic
revert option, the LIF is not returned to the home port.
The node-management LIF does not automatically revert unless the value of the auto-revert
option is set to true.
Cluster LIFs always revert to their home ports regardless of the value of the auto-revert
option.
Step
1. Depending on whether you want to revert a LIF to its home port manually or automatically,
perform one of the following steps:
If you want to revert a LIF to its
home port...
Manually
Automatically
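For example, a LIF might be reverted manually with the network interface revert command, or set to revert automatically with the -auto-revert parameter (a sketch; the SVM and LIF names are hypothetical):
cluster1::> network interface revert -vserver vs1 -lif datalif1
cluster1::> network interface modify -vserver vs1 -lif datalif1 -auto-revert true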
Related tasks
Deleting a LIF
You can delete a LIF that is not required.
Before you begin
Delete a LIF
Example
cluster1::> network interface delete -vserver vs1 -lif mgmtlif2
2. Use the network interface show command to confirm that the LIF is deleted and the routing
group associated with the LIF is not deleted.
Related tasks
System-defined failover groups: Failover groups that automatically manage LIF failover targets
on a per-LIF basis.
This is the default failover group for data LIFs in the cluster.
For example, when the value of the failover-group option is system-defined, the system
will automatically manage the LIF failover targets for that LIF, based on the home node or port of
the LIF.
Note: All the network ports should be assigned correct port roles, and all the network ports of
User-defined failover groups: Customized failover groups that can be created when the system-defined failover groups do not meet your requirements.
For a system with ports of the same role connected to multiple subnets, each LIF requires a user-defined failover group with a failover group for each subnet.
You can create a failover group consisting of all 10-GbE ports that enables LIFs to fail over only
to the high-bandwidth ports.
Clusterwide failover group: Failover group that consists of all the data ports in the cluster.
This is the default failover group for the cluster-management LIFs only.
For example, when the value of the failover-group option is cluster-wide, every data port
in the cluster will be defined as the failover targets for that LIF.
The following failover groups apply to each LIF type:

Cluster LIF: system-defined failover group (default); the failover targets are cluster ports on the home node.
Node-management LIF: system-defined failover group (default), with node management ports on the home node as targets; a user-defined failover group can use node management or data ports on the home node.
Cluster-management LIF: cluster-wide failover group (default), with data ports on any node as targets; system-defined and user-defined failover groups can also be used with node management or data ports.
Data LIF: system-defined failover group (default), with data ports as targets; a user-defined failover group can also be used.
Intercluster LIF: system-defined failover group (default), with intercluster ports as targets; a user-defined failover group can use intercluster or data ports on the home node.
Related concepts
If you have LIFs in different VLANs or broadcast domains, you must configure failover groups
for each VLAN or broadcast domain.
You must then configure the LIFs hosted on a particular VLAN or broadcast domain to subscribe
to the corresponding failover group.
Failover groups do not apply in a SAN iSCSI or FC environment.
Step
For deleting an entire failover group, the failover group must not be used by any LIF.
Step
1. Depending on whether you want to remove a port from a failover group or delete a failover
group, complete the applicable step:
If you want to...
network interface failover-groups delete -failover-group failover_group_name -node node_name -port port
Note: If you delete all ports from the failover group, the failover group is
also deleted.
Example
The following example shows how to delete port e1e from the failover group named failover-group_2:
cluster1::> network interface failover-groups delete -failover-group failover-group_2 -node cluster1-01 -port e1e
The values of the following parameters in the network interface modify command together
determine the failover behavior of LIFs:
-failover-policy: Enables you to specify the order in which the network ports are chosen
during a LIF failover and enables you to prevent a LIF from failing over.
This parameter can have one of the following values:
nextavail (default): Enables a LIF to fail over to the next available port, preferring a port on the current node.
system-defined - specifies that the LIF uses the implicit system-defined failover behavior
Step
1. Depending on whether you want a LIF to fail over, complete the appropriate action:
If you want to...
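For example, a LIF might be prevented from failing over by setting its failover policy to disabled (a sketch; the disabled value and the names shown are assumptions for illustration):
cluster1::> network interface modify -vserver vs1 -lif datalif1 -failover-policy disabled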
Routing group
A routing table. Each LIF is associated with one routing group and uses only the
routes of that group. Multiple LIFs can share a routing group.
Note: For backward compatibility, if you want to configure a route per LIF, you
can create a separate routing group for each LIF.
Static route
A defined route between a LIF and a specific destination IP address; the route can
use a gateway IP address.
If you want to segregate the data LIFs from the management LIFs, you must create different routing
groups for each kind of LIF.
The following rules apply when creating routing groups:
The routing group and the associated LIFs should be in the same subnet.
All LIFs sharing a routing group must be on the same IP subnet.
All next-hop gateways must be on that same IP subnet.
A Storage Virtual Machine (SVM) can have multiple routing groups, but a routing group belongs
to only one SVM.
The routing group name must be unique in the cluster and should not contain more than 64
characters.
You can create a maximum of 256 routing groups per node.
Step
1. Use the network routing-groups create command to create a routing group.
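A minimal sketch of such a command is shown below; the -subnet and -role parameter names are assumptions for this release, and the values are hypothetical:
cluster1::> network routing-groups create -vserver vs1 -routing-group d192.0.2.165/24 -subnet 192.0.2.0/24 -role data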
You must have modified the LIFs that are using the routing group to use a different routing
group.
You must have deleted the routes within the routing group.
Step
1. Use the network routing-groups delete command as shown in the following example to
delete a routing group.
For more information about this command, see the man page.
Example
cluster1::> network routing-groups delete -vserver vs1 -routing-group
d192.0.2.165/24
Related tasks
In all installations, routes are not needed for routing groups of cluster LIFs.
Note: You should only modify routes for routing groups of node-management LIFs when you are
logged into the console.
Steps
2. Create a route within the routing group by using the network routing-groups route
create command.
Example
The following example shows how to create a route for a LIF named mgmtif2. The routing group
uses the destination IP address 0.0.0.0, the network mask 255.255.255.0, and the gateway IP
address 192.0.2.1.
cluster1::> network routing-groups route create -vserver vs0 -routing-group d192.0.2.166/24 -destination 0.0.0.0/0 -gateway 192.0.2.1 -metric 10
Related tasks
1. Use the network routing-groups route delete command to delete a static route.
For more information about this command, see the appropriate man page.
Example
The following example deletes a static route associated with a LIF named mgmtif2 from routing
group d192.0.2.167/24. The static route has the destination IP address 192.40.8.1.
cluster1::> network routing-groups route delete -vserver vs0 -routing-group d192.0.2.167/24 -destination 0.0.0.0/0
For more information, see the man pages for the vserver services dns hosts commands.
Related concepts
For more information, see the man pages for the vserver services dns commands.
Related concepts
61
Different client connections use different bandwidth; therefore, LIFs can be migrated based on
the load capacity.
When new nodes are added to the cluster, LIFs can be migrated to the new ports.
You might want to modify the automatic load balancing weights. For example, if a cluster has both
10-GbE and 1-GbE data ports, the 10-GbE ports can be assigned a higher weight so that they are
returned more frequently when a request is received.
Step
1. Set the load balancing weight of the LIF to a value between 0 and 100.
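A sketch of such a command is shown below; the -lb-weight parameter name is an assumption, and the command is run at the advanced privilege level:
cluster1::*> network interface modify -vserver vs0 -lif lif1 -lb-weight 3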
Related tasks
You must have configured the DNS forwarder on the site-wide DNS server to forward all requests
for the load balancing zone to the configured LIFs.
For more information about configuring DNS load balancing using conditional forwarding, see the
knowledge base article How to set up DNS load balancing in Cluster-Mode on the NetApp Support
Site.
About this task
Any data LIF can respond to DNS queries for a DNS load balancing zone name.
A DNS load balancing zone must have a unique name in the cluster, and the zone name must
meet the following requirements:
Step
1. Use the network interface create command with the -dns-zone option to
create a DNS load balancing zone.
For more information about the command, see the man pages.
Example
The following example demonstrates how to create a DNS load balancing zone named
storage.company.com while creating the LIF lif1:
cluster1::> network interface create -vserver vs0 -lif lif1 -role data -home-node node1
-home-port e0c -address 192.0.2.129 -netmask 255.255.255.128 -dns-zone storage.company.com
Related tasks
All the LIFs in a load balancing zone should belong to the same SVM.
Note: A LIF can be a part of only one DNS load balancing zone.
Failover groups for each subnet must have been set up, if the LIFs belong to different subnets.
A LIF that is in the administrative down status is temporarily removed from the DNS load
balancing zone.
When the LIF returns to the administrative up status, the LIF is added automatically to the DNS
load balancing zone.
Load balancing is not supported for LIFs hosted on a VLAN port.
Step
1. Depending on whether you want to add or remove a LIF, perform the appropriate action:
If you want to...
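For example, a LIF might be added to or removed from a load balancing zone with the -dns-zone parameter of the network interface modify command (a sketch; using the value none to remove a LIF from a zone is an assumption):
cluster1::> network interface modify -vserver vs0 -lif lif1 -dns-zone storage.company.com
cluster1::> network interface modify -vserver vs0 -lif lif1 -dns-zone none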
Related tasks
Steps
1. To enable or disable automatic LIF rebalancing on a LIF, use the network interface modify
command.
Example
The following example shows how to enable automatic LIF rebalancing on a LIF and also restrict
the LIF to fail over only to the ports in the failover group failover-group_2:
cluster1::*> network interface modify -vserver vs1 -lif data1 -failover-policy priority -failover-group failover-group_2 -allow-lb-migrate true
Related tasks
You must have configured the DNS site-wide server to forward all DNS requests for NFS and CIFS
traffic to the assigned LIFs.
For more information about configuring DNS load balancing using conditional forwarding, see the
knowledge base article How to set up DNS load balancing in Cluster-Mode on The NetApp Support
Site.
About this task
You should not create separate DNS load balancing zones for each protocol when the following
conditions are true:
Because automatic LIF rebalancing can be used only with NFSv3 connections, you should create
separate DNS load balancing zones for CIFS and NFSv3 clients. Automatic LIF rebalancing on the
zone used by CIFS clients is disabled automatically, as CIFS connections cannot be nondisruptively
migrated. This enables the NFS connections to take advantage of automatic LIF rebalancing.
Steps
1. Use the network interface modify command to create a DNS load balancing zone for the
NFS connections and assign LIFs to the zone.
Example
The following example shows how to create a DNS load balancing zone named nfs.company.com
and assign LIFs named lif1, lif2, and lif3 to the zone:
cluster1::> network interface modify -vserver vs0 -lif lif1..lif3 -dns-zone nfs.company.com
2. Use the network interface modify command to create a DNS load balancing zone for the
CIFS connections and assign LIFs to the zone.
The following example shows how to create a DNS load balancing zone named
cifs.company.com and assign LIFs named lif4, lif5, and lif6 to the zone.
cluster1::> network interface modify -vserver vs0 -lif lif4..lif6 -dns-zone cifs.company.com
3. Use the set -privilege advanced command to log in at the advanced privilege level.
Example
The following example shows how to enter the advanced privilege mode:
cluster1::> set -privilege advanced
Warning: These advanced commands are potentially dangerous; use them
only when directed to do so by technical support.
Do you want to continue? {y|n}: y
4. Use the network interface modify command to enable automatic LIF rebalancing on the
LIFs that are configured to serve NFS connections.
Example
The following example shows how to enable automatic LIF rebalancing on LIFs named lif1, lif2,
and lif3 in the DNS load balancing zone created for NFS connections.
cluster1::*> network interface modify -vserver vs0 -lif lif1..lif3 -allow-lb-migrate true
Note: Because automatic LIF rebalancing is disabled for CIFS, automatic LIF rebalancing
should not be enabled on the DNS load balancing zone that is configured for CIFS connections.
Result
NFS clients can mount by using nfs.company.com and CIFS clients can map CIFS shares by using
cifs.company.com. All new client requests are directed to a LIF on a less-utilized port. Additionally,
the LIFs on nfs.company.com are migrated dynamically to different ports based on the load.
Related information
Data ONTAP supports the IPv6 (RFC 2465), TCP (RFC 4022), UDP (RFC 4113), and ICMP (RFC
2466) MIBs, which show both IPv4 and IPv6 data.
Data ONTAP also provides a short cross-reference between object identifiers (OIDs) and object short
names in the traps.dat file.
Note: The latest versions of the Data ONTAP MIBs and traps.dat files are available online on
the NetApp Support Site. However, the versions of these files on the web site do not necessarily
correspond to the SNMP capabilities of your Data ONTAP version. These files are provided to
help you evaluate SNMP features in the latest Data ONTAP version.
Related information
In new installations of Data ONTAP, SNMPv1 and SNMPv2c are disabled by default.
SNMPv1 and SNMPv2c are enabled when an SNMP community is created.
Data ONTAP supports read-only communities.
By default, a firewall data policy has SNMP service set to deny. Create a new data policy with
SNMP service set to allow when creating an SNMP user for a data SVM.
You can create SNMP communities for the SNMPv1 and SNMPv2c users for both the admin
SVM and the data SVM.
Steps
1. Use the system snmp community add command to create an SNMP community.
Example
The following example shows how you can create an SNMP community in the admin SVM:
cluster1::> system snmp community add -type ro -community-name comty1
2. Use the -vserver option of the system snmp community add command to create an SNMP community
in the SVM.
Example
The following example shows how you can create an SNMP community in the data SVM, vs0:
cluster1::> system snmp community add -type ro -community-name comty2
-vserver vs0
3. Use the system snmp community show command to verify that the communities have been
created.
Example
The following example shows different communities created for SNMPv1 and SNMPv2c:
          ro   comty1
vs0
          ro   comty2
2 entries were displayed.
4. Use the firewall policy show -service snmp command to verify if SNMP is allowed as
a service in data firewall policy.
Example
The following example shows that the snmp service is allowed in the data firewall policy:
cluster1::> firewall policy show -service snmp
  (system services firewall policy show)
Policy           Service    Action IP-List
---------------- ---------- ------ --------------------
cluster          snmp       allow  0.0.0.0/0
data             snmp       allow  0.0.0.0/0, ::/0
intercluster     snmp       allow  0.0.0.0/0
mgmt             snmp       allow  0.0.0.0/0, ::/0
4 entries were displayed.
5. Optional: If the value of the snmp service is deny, use the system services firewall
policy create command to create a data firewall policy with the value of the snmp service set to allow.
Example
The following example shows how you can create a new data firewall policy data1 with value of
the snmp service as allow, and verify if this has been created successfully:
cluster1::> system services firewall policy create -policy data1 -service snmp -action
allow -ip-list 0.0.0.0/0
cluster1::> firewall policy show -service snmp
  (system services firewall policy show)
Policy           Service    Action IP-List
---------------- ---------- ------ --------------------
cluster          snmp       allow  0.0.0.0/0
data             snmp       allow  0.0.0.0/0, ::/0
intercluster     snmp       allow  0.0.0.0/0
mgmt             snmp       allow  0.0.0.0/0, ::/0
data1            snmp       allow  0.0.0.0/0, ::/0
5 entries were displayed.
The following example shows how the created data firewall policy data1 can be assigned to a LIF
datalif1:
cluster1::> network interface modify -vserver vs1 -lif datalif1 -firewall-policy data1
Result
The SNMPv3 user can log in from the SNMP manager by using the user name and password and run
the SNMP utility commands.
Command-line option                           Description
-e EngineID                                   engineID
-u Name                                       securityName
-a {MD5 | SHA}                                authProtocol
-A PASSPHRASE                                 authKey
-l {authNoPriv | AuthPriv | noAuthNoPriv}     securityLevel
-x {none | des}                               privProtocol
-X password                                   privPassword
The following output shows the SNMPv3 user running the snmpwalk command:
$ snmpwalk -v 3 -u snmpv3user1 -a MD5 -A password1! -l authNoPriv 192.0.2.62 .1.3.6.1.4.1.789.1.5.8.1.2
enterprises.789.1.5.8.1.2.1028 = "vol0"
enterprises.789.1.5.8.1.2.1032 = "vol0"
enterprises.789.1.5.8.1.2.1038 = "root_vs0"
enterprises.789.1.5.8.1.2.1042 = "root_vstrap"
enterprises.789.1.5.8.1.2.1064 = "vol1"
The following output shows the SNMPv3 user running the snmpwalk command:
$ snmpwalk -v 3 -u snmpv3user1 -l noAuthNoPriv 192.0.2.62 .1.3.6.1.4.1.789.1.5.8.1.2
enterprises.789.1.5.8.1.2.1028 = "vol0"
enterprises.789.1.5.8.1.2.1032 = "vol0"
enterprises.789.1.5.8.1.2.1038 = "root_vs0"
enterprises.789.1.5.8.1.2.1042 = "root_vstrap"
enterprises.789.1.5.8.1.2.1064 = "vol1"
SNMP traps
SNMP traps capture system monitoring information that is sent as an asynchronous notification from
the SNMP agent to the SNMP manager. There are three types of SNMP traps: standard, built-in, and
user-defined. User-defined traps are not supported in clustered Data ONTAP.
A trap can be used to check periodically for operational thresholds or failures that are defined in the
MIB. If a threshold is reached or a failure is detected, the SNMP agent sends a message (trap) to the
traphosts alerting them of the event.
Standard SNMP traps
These traps are defined in RFC 1215. There are five standard SNMP traps that are
supported by Data ONTAP: coldStart, warmStart, linkDown, linkUp, and
authenticationFailure.
Note: The authenticationFailure trap is disabled by default. You must use the
system snmp authtrap command to enable the trap. See the man pages for
more information.
Built-in SNMP traps
Built-in traps are predefined in Data ONTAP and are automatically sent to the
network management stations on the traphost list if an event occurs. These traps,
such as diskFailedShutdown, cpuTooBusy, and volumeNearlyFull, are defined in
the custom MIB.
Each built-in trap is identified by a unique trap code.
Configuring traphosts
You can configure the traphost (SNMP manager) to receive notifications (SNMP trap PDUs) when
SNMP traps are generated. You can specify either the hostname or the IP address (IPv4 or IPv6) of
the SNMP traphost.
Before you begin
DNS must be configured on the cluster for resolving the traphost names.
IPv6 should be enabled on the cluster to configure SNMP traphosts with IPv6 addresses.
1. Use the system snmp traphost add command to add SNMP traphosts.
Note: Traps can only be sent when at least one SNMP management station is specified as a
traphost.
Example
The following example illustrates the addition of a new SNMP traphost named yyy.example.com:
cluster1::> system snmp traphost add -peer-address yyy.example.com
Example
The following example demonstrates how an IPv6 address can be added to configure a traphost:
cluster1::> system snmp traphost add -peer-address
2001:0db8:1:1:209:6bff:feae:6d67
You can use the following commands to manage SNMP users, traphosts, and traps:

If you want to...                                  Use this command...
Display SNMP users                                 security snmpusers
Delete a traphost                                  system snmp traphost delete
Display the events for which SNMP traps (built-in)
are generated                                      event route show
  Note: You use the -snmp-support parameter to view the SNMP-related events. You use the
  -instance parameter to view the corrective action for an event.
Display a list of SNMP trap history records, which
are event notifications that have been sent to
SNMP traps                                         event snmphistory show
Delete an SNMP trap history record                 event snmphistory delete
For more information about the system snmp and event commands, see the man pages.
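For example, a traphost that is no longer needed can be removed with the delete command; this sketch assumes that system snmp traphost delete accepts the same -peer-address parameter as the add command shown earlier:

cluster1::> system snmp traphost delete -peer-address yyy.example.com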
After removing a NIC from a node, use the network port show command to verify that the
NIC has been removed from its slot.
Step
1. Use the network port show command to display information about the network ports.
The command displays the following information:
Node name
Port name
Port role (cluster, data, or node-management)
Link status (up or down)
MTU setting
Autonegotiation setting (true or false)
Duplex mode and operational status (half or full)
Port speed setting and operational status (1 gigabit or 10 gigabits per second)
The port's interface group, if applicable
The port's VLAN tag information, if applicable
If data for a field is not available (for example, the operational duplex mode and speed of an
inactive port are not available), the field is listed as undef.
The following example displays information about all network ports in a cluster containing four
nodes:
cluster1::> network port show
                                            Auto-Negot  Duplex     Speed (Mbps)
Node   Port   Role         Link    MTU      Admin/Oper  Admin/Oper Admin/Oper
------ ------ ------------ ----- --------- ----------- ---------- ------------
node1
       e0a    cluster      up     1500      true/true   full/full  auto/1000
       e0b    cluster      up     1500      true/true   full/full  auto/1000
       e0c    data         up     1500      true/true   full/full  auto/1000
       e0d    data         up     1500      true/true   full/full  auto/1000
       e1a    node-mgmt    up     1500      true/true   full/full  auto/1000
       e1b    data         down   1500      true/true   full/half  auto/10
       e1c    data         down   1500      true/true   full/half  auto/10
       e1d    data         down   1500      true/true   full/half  auto/10
node2
       e0a    cluster      up     1500      true/true   full/full  auto/1000
       e0b    cluster      up     1500      true/true   full/full  auto/1000
       e0c    data         up     1500      true/true   full/full  auto/1000
       e0d    data         up     1500      true/true   full/full  auto/1000
       e1a    node-mgmt    up     1500      true/true   full/full  auto/1000
       e1b    data         down   1500      true/true   full/half  auto/10
       e1c    data         down   1500      true/true   full/half  auto/10
       e1d    data         down   1500      true/true   full/half  auto/10
node3
       e0a    cluster      up     1500      true/true   full/full  auto/1000
       e0b    cluster      up     1500      true/true   full/full  auto/1000
       e0c    data         up     1500      true/true   full/full  auto/1000
       e0d    data         up     1500      true/true   full/full  auto/1000
       e1a    node-mgmt    up     1500      true/true   full/full  auto/1000
       e1b    data         down   1500      true/true   full/half  auto/10
       e1c    data         down   1500      true/true   full/half  auto/10
       e1d    data         down   1500      true/true   full/half  auto/10
node4
       e0a    cluster      up     1500      true/true   full/full  auto/1000
       e0b    cluster      up     1500      true/true   full/full  auto/1000
       e0c    data         up     1500      true/true   full/full  auto/1000
       e0d    data         up     1500      true/true   full/full  auto/1000
       e1a    node-mgmt    up     1500      true/true   full/full  auto/1000
       e1b    data         down   1500      true/true   full/half  auto/10
       e1c    data         down   1500      true/true   full/half  auto/10
       e1d    data         down   1500      true/true   full/half  auto/10
You can get all available information by specifying the -instance parameter, or get only the
required fields by specifying the -fields parameter.
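For example, a quick view of only the role, link state, and MTU of every port might look like the following sketch; the field names role, link, and mtu are assumptions, so check the man page for the exact names in your release:

cluster1::> network port show -fields role,link,mtu
node   port role      link mtu
------ ---- --------- ---- ----
node1  e0a  cluster   up   1500
node1  e0b  cluster   up   1500
...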
Step
1. Use the network port vlan show command to view information about VLANs.
To customize the output, you can enter one or more optional parameters. For more information
about this command, see the man page.
Example
cluster1::> network port vlan show
                 Network Network
Node   VLAN Name Port    VLAN ID MAC Address
------ --------- ------- ------- -----------------
cluster1-01
       a0a-10    a0a     10      02:a0:98:06:10:b2
       a0a-20    a0a     20      02:a0:98:06:10:b2
       a0a-30    a0a     30      02:a0:98:06:10:b2
       a0a-40    a0a     40      02:a0:98:06:10:b2
       a0a-50    a0a     50      02:a0:98:06:10:b2
cluster1-02
       a0a-10    a0a     10      02:a0:98:06:10:ca
       a0a-20    a0a     20      02:a0:98:06:10:ca
       a0a-30    a0a     30      02:a0:98:06:10:ca
       a0a-40    a0a     40      02:a0:98:06:10:ca
       a0a-50    a0a     50      02:a0:98:06:10:ca
Example
The following example displays information about all interface groups in the cluster:
cluster1::> network port ifgrp show
         Port       Distribution                   Active
Node     IfGrp      Function     MAC Address       Ports   Ports
-------- ---------- ------------ ----------------- ------- -------------------
cluster1-01
         a0a        ip           02:a0:98:06:10:b2 full    e7a, e7b
cluster1-02
         a0a        sequential   02:a0:98:06:10:ca full    e7a, e7b
cluster1-03
         a0a        port         02:a0:98:08:5b:66 full    e7a, e7b
cluster1-04
         a0a        mac          02:a0:98:08:61:4e full    e7a, e7b
4 entries were displayed.
You might have to view information about a LIF, for example, to determine its network address, its current node and port, and whether it is on its home node.
Step
1. To view the LIF information, use the network interface show command.
You can get all available information by specifying the -instance parameter, or get only the
required fields by specifying the -fields parameter. If data for a field is not available, the field is
listed as undef.
Example
The following example displays general information about all LIFs in a cluster:
vs1::> network interface show
            Logical      Status     Network          Current       Current Is
Vserver     Interface    Admin/Oper Address/Mask     Node          Port    Home
----------- ------------ ---------- ---------------- ------------- ------- ----
example
            lif1         up/up      192.0.2.129/22   node-01       e0d     false
node
            cluster_mgmt up/up      192.0.2.3/20     node-02       e0c     false
node-01
            clus1        up/up      192.0.2.65/18    node-01       e0a     true
            clus2        up/up      192.0.2.66/18    node-01       e0b     true
            mgmt1        up/up      192.0.2.1/20     node-01       e0c     true
node-02
            clus1        up/up      192.0.2.67/18    node-02       e0a     true
            clus2        up/up      192.0.2.68/18    node-02       e0b     true
            mgmt2        up/up      192.0.2.2/20     node-02       e0d     true
vs1
            d1           up/up      192.0.2.130/21   node-01       e0d     false
            d2           up/up      192.0.2.131/21   node-01       e0d     true
            data3        up/up      192.0.2.132/20   node-02       e0c     true
The following example displays detailed information about the LIF data1 in the SVM vs1:

cluster1::> network interface show -vserver vs1 -lif data1 -instance

                    Vserver Name: vs1
          Logical Interface Name: data1
                            Role: data
                   Data Protocol: nfs,cifs,fcache
                       Home Node: node-1
                       Home Port: e0c
                    Current Node: node-3
                    Current Port: e0c
              Operational Status: up
                         Is Home: false
                 Network Address: 10.72.34.39
                         Netmask: 255.255.192.0
             Bits in the Netmask: 18
              Routing Group Name: d10.72.0.0/18
           Administrative Status: up
                 Failover Policy: nextavail
                 Firewall Policy: data
                     Auto Revert: false
             Failover Group Name: system-defined
   Fully Qualified DNS Zone Name: xxx.example.com
         DNS Query Listen Enable: -
1. Depending on the information that you want to view, enter the applicable command:

If you want to display information about...   Enter the following command...
Static routes                                 network routing-groups route show
Routing groups                                network routing-groups show

Note: You can get all available information by specifying the -instance parameter.

Example
The following partial output of the network routing-groups route show command shows the
gateway and metric of a static route:

          Gateway         Metric
          --------------- ------
          172.17.176.1    20
1. To view the host name entries in the hosts table of the admin SVM, enter the following
command:
vserver services dns hosts show
Example
The following sample output shows the hosts table:
cluster1::> vserver services dns hosts show
Vserver    Address        Hostname    Aliases
---------- -------------- ----------- ----------------------
cluster1   10.72.219.36   lnx219-36   -
cluster1   10.72.219.37   lnx219-37   lnx219-37.example.com
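A new entry can be added to the hosts table with the create command; the address and host name below are hypothetical values used only for illustration:

cluster1::> vserver services dns hosts create -vserver cluster1 -address 10.72.219.38 -hostname lnx219-38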
Example
The following example shows the output of the vserver services dns show command:
cluster1::> vserver services dns show
                                                       Name
Vserver         State     Domains                      Servers
--------------- --------- ---------------------------- ----------------
cluster1        enabled   xyz.company.com              192.56.0.129,
                                                       192.56.0.130
vs1             enabled   xyz.company.com              192.56.0.129,
                                                       192.56.0.130
vs2             enabled   xyz.company.com              192.56.0.129,
                                                       192.56.0.130
vs3             enabled   xyz.company.com              192.56.0.129,
                                                       192.56.0.130
4 entries were displayed.
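If the DNS configuration of an SVM needs to be changed, one possible invocation of the modify command, reusing the domain and name servers from the output above, is:

cluster1::> vserver services dns modify -vserver vs1 -domains xyz.company.com -name-servers 192.56.0.129,192.56.0.130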
You can view information about the physical ports in a failover group, including the link status of
each port, by using the network port show command.
The following example displays information about all failover groups on a two-node cluster and
shows the link status of a physical port:
cluster1::> network interface failover-groups show
Failover
Group               Node              Port
------------------- ----------------- ----------
clusterwide
                    node-02           e0c
                    node-01           e0c
                    node-01           e0d
fg1
                    node-02           e0c
                    node-01           e0c
5 entries were displayed.
cluster1::> network port show -port e0c -fields link
node     port link
-------- ---- ----
node-01  e0c  up
By viewing the failover targets for a LIF, you can verify whether the LIF's failover policy, failover group, and failover targets are configured as intended.
Steps
1. Use the -failover parameter of the network interface show command to view the failover
targets of a LIF.
Example
The following example displays information about the failover targets of the different LIFs in the
cluster:
cluster1::> network interface show -failover
         Logical         Home                  Failover     Failover
Vserver  Interface       Node:Port             Policy       Group
-------- --------------- --------------------- ------------ ----------------
vs1
         clus1           node1:e0a             nextavail    system-defined
         clus2           node1:e0b             nextavail    system-defined
         mgmt1           node1:e1a             nextavail    system-defined
                         Failover Targets: node1:e1a
vs2
         clus1           node2:e0a             nextavail    system-defined
         clus2           node2:e0b             nextavail    system-defined
         mgmt1           node2:e1a             nextavail    system-defined
                         Failover Targets: node2:e1a
         mgmt2           node2:e1a             nextavail    system-defined
                         Failover Targets: node2:e1a
vs3
         clus1           node3:e0a             nextavail    system-defined
         clus2           node3:e0b             nextavail    system-defined
         mgmt1           node3:e1a             nextavail    system-defined
                         Failover Targets: node3:e1a
vs4
         clus1           node4:e0a             nextavail    system-defined
         clus2           node4:e0b             nextavail    system-defined
         mgmt1           node4:e1a             nextavail    system-defined
                         Failover Targets: ie3070-4:e1a
The following example shows the failover targets of each LIF in the cluster:
cluster1::> net int show -fields failover-targets
  (network interface show)
vserver lif   failover-targets
------- ----- ----------------
vs1     clus1
vs1     clus2
vs1     mgmt1 node1:e1a
vs2     clus1
vs2     clus2
vs2     mgmt1 node2:e1a
vs2     mgmt2 node2:e1a
vs3     clus1
vs3     clus2
vs3     mgmt1 node3:e1a
vs4     clus1
vs4     clus2
vs4     mgmt1 node4:e1a
1. Depending on the LIFs and details that you want to view, perform the appropriate action:
Example
The following example shows the details of all the LIFs in the load balancing zone
storage.company.com:
cluster1::> net int show -vserver vs0 -dns-zone storage.company.com
         Logical    Status     Network            Current   Current Is
Vserver  Interface  Admin/Oper Address/Mask       Node      Port    Home
-------- ---------- ---------- ------------------ --------- ------- ----
vs0
         lif3       up/up                         ndeux-11  e0c     true
         lif4       up/up      10.98.224.23/20    ndeux-21  e0c     true
         lif5       up/up      10.98.239.65/20    ndeux-11  e0c     true
         lif6       up/up      10.98.239.66/20    ndeux-11  e0c     true
         lif7       up/up      10.98.239.63/20    ndeux-21  e0c     true
         lif8       up/up      10.98.239.64/20    ndeux-21  e0c     true
6 entries were displayed.
The following example shows the DNS zone details of the LIF data3:
cluster1::> network interface show -lif data3 -fields dns-zone
vserver lif   dns-zone
------- ----- -------------------
vs0     data3 storage.company.com
The following example shows the list of all LIFs in the cluster and their corresponding DNS
zones:
cluster1::> network interface show -fields dns-zone
vserver lif   dns-zone
The network connections active show-clients command is useful in the following scenarios:
Finding a busy or overloaded node because you can view the number of clients that are being
serviced by each node.
Determining why a particular client's access to a volume is slow.
You can view details about the node that the client is accessing and then compare it with the node
on which the volume resides. If accessing the volume requires traversing the cluster network,
clients might experience decreased performance because of the remote access to the volume on an
oversubscribed remote node.
Verifying that all nodes are being used equally for data access.
Finding clients that have an unexpectedly high number of connections.
Verifying if certain expected clients do not have connections to a node.
Step
1. Use the network connections active show-clients command to display a count of the
active connections by client on a node.
For more information about this command, see the man pages.
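For example, to list the client connection counts for a single node, the command can be scoped with the -node parameter (assumed to behave like the -node filter shown later for network connections active show):

cluster1::> network connections active show-clients -node node0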
Step
1. Use the network connections active show-protocols command to display a count of the
active connections by protocol (UDP and TCP) on a node.
For more information about this command, see the man pages.
Example
cluster1::> network connections active show-protocols
Node    Protocol  Count
------- --------- ------
node1
        UDP       17
        TCP       8
node2
        UDP       14
        TCP       10
node3
        UDP       18
        TCP       4
The network connections active show-services command is useful in the following scenarios:
Verifying that all nodes are being used for the appropriate services and that the load balancing for
that service is working.
Verifying that no other services are being used.
Step
1. Use the network connections active show-services command to display a count of the
active connections by service on a node.
For more information about this command, see the man pages.
Example
cluster1::> network connections active show-services
Node      Service    Count
--------- ---------- ------
node0
          mount          3
          nfs           14
          nlm_v4         4
          cifs_srv       3
          port_map      18
          rclopcp       27
node1
          cifs_srv       3
          rclopcp       16
node2
          rclopcp       13
node3
          cifs_srv       1
          rclopcp       17
The network connections active show-lifs command is useful for checking the number of active connections on each LIF and verifying that the connection load is distributed across LIFs as expected.
Step
1. Use the network connections active show-lifs command to display a count of active
connections for each LIF by SVM and node.
For more information about this command, see the man pages.
Example
cluster1::> network connections active show-lifs
Node     Vserver Name Interface Name  Count
-------- ------------ --------------- ------
node0
         vs0          datalif1        3
         vs0          cluslif1        6
         vs0          cluslif2        5
node1
         vs0          datalif2        3
         vs0          cluslif1        3
         vs0          cluslif2        5
node2
         vs1          datalif2        1
         vs1          cluslif1        5
         vs1          cluslif2        3
node3
         vs1          datalif1        1
         vs1          cluslif1        2
The network connections active show command is useful in the following scenarios:
Verifying that individual clients are using the correct protocol and service on the correct node.
If a client is having trouble accessing data using a certain combination of node, protocol, and
service, you can use this command to find a similar client for configuration or packet trace
comparison.
Step
1. Use the network connections active show command to display the active connections in a
cluster.
For more information about this command, see the man pages.
Example
cluster1::> network connections active show -node node0
Vserver Interface               Remote
Name    Name:Local Port         IP Address:Port     Protocol/Service
------- ----------------------- ------------------- ----------------
node0   cluslif1:7070           192.0.2.253:48621   UDP/rclopcp
node0   cluslif1:7070           192.0.2.253:48622   UDP/rclopcp
node0   cluslif2:7070           192.0.2.252:48644   UDP/rclopcp
node0   cluslif2:7070           192.0.2.250:48646   UDP/rclopcp
node0   cluslif1:7070           192.0.2.245:48621   UDP/rclopcp
node0   cluslif1:7070           192.0.2.245:48622   UDP/rclopcp
node0   cluslif2:7070           192.0.2.251:48644   UDP/rclopcp
node0   cluslif2:7070           192.0.2.251:48646   UDP/rclopcp
node0   cluslif1:7070           192.0.2.248:48621   UDP/rclopcp
node0   cluslif1:7070           192.0.2.246:48622   UDP/rclopcp
node0   cluslif2:7070           192.0.2.252:48644   UDP/rclopcp
node0   cluslif2:7070           192.0.2.250:48646   UDP/rclopcp
node0   cluslif1:7070           192.0.2.254:48621   UDP/rclopcp
node0   cluslif1:7070           192.0.2.253:48622   UDP/rclopcp
The network connections listening show command is useful in the following scenarios:
Verifying that the desired protocol or service is listening on a LIF if client connections to that LIF
fail consistently.
Verifying that a UDP/rclopcp listener is opened at each cluster LIF if remote data access to a
volume on one node through a LIF on another node fails.
Verifying that a UDP/rclopcp listener is opened at each cluster LIF if SnapMirror transfers
between two nodes in the same cluster fail.
Verifying that a TCP/ctlopcp listener is opened at each intercluster LIF if SnapMirror transfers
between two nodes in different clusters fail.
Step
1. Use the network connections listening show command to display the listening
connections.
Example
cluster1::> network connections listening show
Server Name Interface Name:Local Port Protocol/Service
----------- ------------------------- ----------------
node0       cluslif1:7700             UDP/rclopcp
node0       cluslif2:7700             UDP/rclopcp
node1       cluslif1:7700             UDP/rclopcp
node1       cluslif2:7700             UDP/rclopcp
node2       cluslif1:7700             UDP/rclopcp
node2       cluslif2:7700             UDP/rclopcp
node3       cluslif1:7700             UDP/rclopcp
node3       cluslif2:7700             UDP/rclopcp
8 entries were displayed.
If you want to...                                    Use this command...
Test whether your node can reach other hosts on
your network                                         network ping
Trace the route that the IPv4 packets take to a
network node                                         network traceroute
For more information about these commands, see the appropriate man pages.
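For example, a reachability test from a specific node might look like the following sketch; the -node and -destination parameter names are assumptions, so verify the exact syntax in the network ping man page:

cluster1::> network ping -node node1 -destination 192.0.2.1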
You can execute the CDP commands only from the nodeshell.
CDP is supported for all port roles.
CDP advertisements are sent and received by ports that are configured with LIFs and are in the
up state.
CDP must be enabled on both the transmitting and receiving devices for sending and receiving
CDP advertisements.
CDP advertisements are sent at regular intervals, and you can configure the time interval
involved.
When IP addresses are changed at the storage system side, the storage system sends the updated
information in the next CDP advertisement.
Note: Sometimes when LIFs are changed on the node, the CDP information is not updated at
the receiving device side (for example, a switch). If you encounter such a problem, you should
configure the network interface of the node to the down status and then to the up status.
For an interface group that hosts VLANs, all the LIFs configured on the interface group and the
VLANs are advertised on each of the network ports.
Because a CDP advertisement is limited to a 1,500-byte packet, on ports configured with a large
number of LIFs only as many LIF IP addresses as fit into the packet are advertised.
Some Cisco switches always send CDP packets that are tagged on VLAN 1 if the native (default)
VLAN of a trunk is anything other than 1. Data ONTAP supports only untagged CDP packets,
both for sending and receiving. As a result, storage platforms running Data ONTAP are visible to
Cisco devices (using the "show cdp neighbors" command), but only the Cisco devices that send
untagged CDP packets are visible to Data ONTAP.
When the cdpd.enable option is set to on, CDPv1 is enabled on all physical ports of the node from
which the command is run. Starting from Data ONTAP 8.2, CDP is enabled by default. If you change
the value of the cdpd.enable option to off, the cluster network traffic might not be optimized.
Step
1. To enable or disable CDP, enter the following command from the nodeshell:
options cdpd.enable {on|off}
on - Enables CDP
off - Disables CDP
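For example, the following nodeshell command enables CDP on the node:

system1> options cdpd.enable on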
Step
1. To configure the hold time, enter the following command from the nodeshell:
options cdpd.holdtime holdtime
holdtime is the time period for which CDP advertisements are cached by neighboring
CDP-compliant devices. You can enter values ranging from 10 seconds to 255 seconds.
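For example, the following nodeshell command sets a hold time of 120 seconds (an illustrative value within the supported range):

system1> options cdpd.holdtime 120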
The value of the cdpd.interval option applies to both nodes of an HA pair.
Step
1. To configure the interval for sending CDP advertisements, enter the following command from the
nodeshell:
options cdpd.interval interval
interval is the time interval after which CDP advertisements are sent. The default
interval is 60 seconds. The interval can be set in the range of 5 seconds to 900 seconds.
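For example, the following nodeshell command causes CDP advertisements to be sent every 30 seconds (an illustrative value within the supported range):

system1> options cdpd.interval 30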
1. Depending on whether you want to view or clear the CDP statistics, complete the following step:

If you want to...            Enter the following command from the nodeshell...
View the CDP statistics      cdpd show-stats
Clear the CDP statistics     cdpd zero-stats
The following output shows the CDP statistics that have accumulated since the statistics were last
cleared:

system1> cdpd show-stats

RECEIVE
 Packets:            9116 | Csum Errors:       0 | Unsupported Vers:  0
 Invalid length:        0 | Malformed:         0 | Mem alloc fails:   0
 Missing TLVs:          0 | Cache overflow:    0 | Other errors:      0

TRANSMIT
 Packets:            4557 | Xmit fails:        0 | No hostname:       0
 Packet truncated:      0 | Other errors:      0

This output displays the total packets that have been received and transmitted since the last time
the statistics were cleared.
The following command clears the CDP statistics:
system1> cdpd zero-stats
The following output shows the statistics after they are cleared:
system1> cdpd show-stats
RECEIVE
 Packets:               0 | Csum Errors:       0 | Unsupported Vers:  0
 Invalid length:        0 | Malformed:         0 | Mem alloc fails:   0
 Missing TLVs:          0 | Cache overflow:    0 | Other errors:      0

TRANSMIT
 Packets:               0 | Xmit fails:        0 | No hostname:       0
 Packet truncated:      0 | Other errors:      0

OTHER
 Init failures:         0
Some Cisco switches always send CDP packets that are tagged on VLAN 1 if the native (default)
VLAN of a trunk is anything other than 1.
The CDP implementation in Data ONTAP only supports CDP packets that are untagged, both for
sending and receiving. The net result is that storage platforms running Data ONTAP are visible to
Cisco devices (using the "show cdp neighbors" command), but only the Cisco devices that send
untagged CDP packets are visible to Data ONTAP.
Step
1. To view information about all CDP-compliant devices connected to your storage system, enter
the following command from the nodeshell:
cdpd show-neighbors
Example
The following example shows the output of the cdpd show-neighbors command:
system1> cdpd show-neighbors
Local  Remote           Remote                 Remote       Hold  Remote
Port   Device           Interface              Platform     Time  Capability
------ ---------------- ---------------------- ------------ ----- ----------
e0a    sw-215-cr(4C2)   GigabitEthernet1/17                 125   RSI
e0b    sw-215-11(4C5)   GigabitEthernet1/15                 145   SI
e0c    sw-215-11(4C5)   GigabitEthernet1/16                 145   SI
The output lists the Cisco devices that are connected to each port of the storage system. The
"Remote Capability" column specifies the capabilities of the remote device that is connected
to the network interface. The following capabilities are available:
R - Router
T - Transparent bridge
B - Source-route bridge
S - Switch
H - Host
I - IGMP
r - Repeater
P - Phone
Copyright information
Copyright © 1994-2014 NetApp, Inc. All rights reserved. Printed in the U.S.
No part of this document covered by copyright may be reproduced in any form or by any means
(graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an
electronic retrieval system) without prior written permission of the copyright owner.
Software derived from copyrighted NetApp material is subject to the following license and
disclaimer:
THIS SOFTWARE IS PROVIDED BY NETAPP "AS IS" AND WITHOUT ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE,
WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL NETAPP BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
NetApp reserves the right to change any products described herein at any time, and without notice.
NetApp assumes no responsibility or liability arising from the use of products described herein,
except as expressly agreed to in writing by NetApp. The use or purchase of this product does not
convey a license under any patent rights, trademark rights, or any other intellectual property rights of
NetApp.
The product described in this manual may be protected by one or more U.S. patents, foreign patents,
or pending applications.
RESTRICTED RIGHTS LEGEND: Use, duplication, or disclosure by the government is subject to
restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer
Software clause at DFARS 252.277-7103 (October 1988) and FAR 52-227-19 (June 1987).
Trademark information
NetApp, the NetApp logo, Network Appliance, the Network Appliance logo, Akorri,
ApplianceWatch, ASUP, AutoSupport, BalancePoint, BalancePoint Predictor, Bycast, Campaign
Express, ComplianceClock, Customer Fitness, Cryptainer, CryptoShred, CyberSnap, Data Center
Fitness, Data ONTAP, DataFabric, DataFort, Decru, Decru DataFort, DenseStak, Engenio, Engenio
logo, E-Stack, ExpressPod, FAServer, FastStak, FilerView, Fitness, Flash Accel, Flash Cache, Flash
Pool, FlashRay, FlexCache, FlexClone, FlexPod, FlexScale, FlexShare, FlexSuite, FlexVol, FPolicy,
GetSuccessful, gFiler, Go further, faster, Imagine Virtually Anything, Lifetime Key Management,
LockVault, Manage ONTAP, Mars, MetroCluster, MultiStore, NearStore, NetCache, NOW (NetApp
on the Web), Onaro, OnCommand, ONTAPI, OpenKey, PerformanceStak, RAID-DP, ReplicatorX,
SANscreen, SANshare, SANtricity, SecureAdmin, SecureShare, Select, Service Builder, Shadow
Tape, Simplicity, Simulate ONTAP, SnapCopy, Snap Creator, SnapDirector, SnapDrive, SnapFilter,
SnapIntegrator, SnapLock, SnapManager, SnapMigrator, SnapMirror, SnapMover, SnapProtect,
SnapRestore, Snapshot, SnapSuite, SnapValidator, SnapVault, StorageGRID, StoreVault, the
StoreVault logo, SyncMirror, Tech OnTap, The evolution of storage, Topio, VelocityStak, vFiler,
VFM, Virtual File Manager, VPolicy, WAFL, Web Filer, and XBB are trademarks or registered
trademarks of NetApp, Inc. in the United States, other countries, or both.
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business
Machines Corporation in the United States, other countries, or both. A complete and current list of
other IBM trademarks is available on the web at www.ibm.com/legal/copytrade.shtml.
Apple is a registered trademark and QuickTime is a trademark of Apple, Inc. in the United States
and/or other countries. Microsoft is a registered trademark and Windows Media is a trademark of
Microsoft Corporation in the United States and/or other countries. RealAudio, RealNetworks,
RealPlayer, RealSystem, RealText, and RealVideo are registered trademarks and RealMedia,
RealProxy, and SureStream are trademarks of RealNetworks, Inc. in the United States and/or other
countries.
All other brands or products are trademarks or registered trademarks of their respective holders and
should be treated as such.
NetApp, Inc. is a licensee of the CompactFlash and CF Logo trademarks.
NetApp, Inc. NetCache is certified RealSystem compatible.
Index
A
active connections
displaying 92
active connections by client
displaying 92
admin SVMs
host-name resolution 58
automatic LIF rebalancing
disabling 66
enabling 66
automatic load balancing
assigning weights 61
C
CDP
configuring hold time 100
configuring periodicity 101
considerations for using 99
Data ONTAP support 99
disabling 100
enabling 100
viewing neighbor information 103
viewing statistics 101
CDP (Cisco Discovery Protocol) 99
CIFS 68
Cisco Discovery Protocol
See CDP
Cisco Discovery Protocol (CDP) 99
cluster
interconnect cabling guidelines 8
Cluster
default port assignments 15
cluster connections
displaying 92
commands
managing DNS domain configuration 60
snmp traps 76
system snmp 77
vserver services dns hosts show 86
configuring
DNS 58
host-name resolution 58
connections
active, displaying count by client on node 92
D
Data ONTAP-v 15
Data ports
default assignments 15
displaying
DNS domain configurations 87
failover groups 87
host name entries 86
interface groups 82
load balancing zones 90
network information 80
network ports 80
routing groups 85
static routes 85
VLANs 82
DNS
configuration 58
DNS domain configurations
displaying 87
DNS domains
managing 59
DNS load balancing 61, 68
DNS load balancing zone
about 63
creating 63
DNS zones
description 7
E
enabling, on the cluster 31
Ethernet ports
default assignments 15
F
failover
disabling, of a LIF 52
enabling, of a LIF 52
failover groups
clusterwide 49
configuring 48
creating or adding entries 50
deleting 51
displaying information 87
LIFs, relation 49
removing ports from 51
renaming 51
system-defined 49
types 49
user-defined 49
failover targets
viewing 88
G
guidelines
cluster interconnect cabling 8
creating LIFs 39
H
host name entries
displaying 86
viewing 86
host-name resolution
admin SVMs 58
configuring 58
hosts table 59
hosts table
managing 59
I
interface group
dynamic multimode 16, 18
load balancing 19, 20
load balancing, IP address based 20
load balancing, MAC address based 20
single-mode 16
static multimode 16, 17
types 16
interface groups
creating 21
deleting 23
ports 16
ports, adding 22
ports, displaying information 82
ports, removing 22
ports, restrictions on 20
interfaces
logical, deleting 47
logical, modifying 43
logical, reverting to home port 46
IPv6
supported features 30
unsupported features 30
IPv6 addresses
creating 40
guidelines 40
L
LACP (Link Aggregation Control Protocol) 18
LIF failover
disabling 52
enabling 48, 52
scenarios causing 48
LIFs
about 32
characteristics 35
cluster 33
cluster-management 33
configuring 32
creating 39, 41
data 33
deleting 47
DNS load balancing 63
failover 44
failover groups, relation 49
guidelines for creating 39
load balancing weight, assigning 62
maximum number of 39
migrating 44, 66
modifying 43
node-management 33
reverting to home port 46
roles 33
viewing information about 83
viewing failover targets 88
limits
LIFs 39
M
Management ports
default assignments 15
managing DNS host name entries 59
MIB
/etc/mib/iscsi.mib 70
/etc/mib/netapp.mib 70
custom mib 70
iSCSI MIB 70
migrating LIFs 44
monitoring
DNS domain configurations 87
failover groups 87
host name entries 86
interface groups 82
load balancing zones 90
network connectivity 99
network information 80
network ports 80
routing groups 85
static routes 85
VLANs 82
multimode interface groups
load balancing, IP address based 20
load balancing, MAC address based 20
load balancing, port-based 20
load balancing, round-robin 20
N
network cabling
guidelines 8
network configuration
cluster setup 9
network connectivity
discovering 99
network problems
commands for diagnosing 98
network traffic
optimizing, Cluster-Mode 61
networking components
cluster 7
networks
ports 13
NFS 68
NIC
removing 29
O
OID 70
P
port role
cluster 14
data 14
node-management 14
ports
concepts 13
description 7
displaying 80
failover groups 48
ifgrps 16
interface groups 16
interface groups, adding ports 22
interface groups, creating 21
interface groups, displaying information 82
interface groups, removing ports 22
interface groups, restrictions on 20
managing 13
modifying attributes 28
naming conventions 13
roles 14
R
route
creating 56
routes
static, deleting 57
static, displaying information about 85
routing
managing 54
routing groups 54
static routes 54
routing groups
creating 54
deleting 55
description 7
displaying information 85
S
setup
network configuration 9
Simple Network Management Protocol
See SNMP
SNMP
agent 70
authKey security 73
authNoPriv security 73
authProtocol security 73
commands 77
configuring traps 76
configuring v3 users 73
example 74
MIBs 70
noAuthNoPriv security 73
security parameters 73
storage system 70
traps 70
traps, types 76
SNMP community
creating 71
SNMP traps
built-in 76
snmpwalk 74
static routes
deleting 57
displaying information about 85
T
traps
configuring 76
V
virtual LANs
creating 26
deleting 27
displaying information 82
managing 23
VLANs
advantages of 25
creating 26
deleting 27
displaying information 82
managing 23
membership 24
MTU size 28
tagged traffic 26
tagging 23
untagged traffic 26