System Administration
Contents
Managing nodes............................................................................................................. 33
Displaying node attributes............................................................................................................................................ 33
Modifying node attributes.............................................................................................................................................33
Renaming a node.......................................................................................................................................................... 34
Adding nodes to the cluster.......................................................................................................................................... 34
Removing nodes from the cluster................................................................................................................................. 36
Accessing a node's log, core dump, and MIB files by using a web browser................................................................38
Accessing the system console of a node.......................................................................................................................39
Rules governing node root volumes and root aggregates............................................................................................. 41
Freeing up space on a node’s root volume....................................................................................................... 41
Relocating root volumes to new aggregates..................................................................................................... 42
Starting or stopping a node........................................................................................................................................... 43
Rebooting a node at the system prompt............................................................................................................43
Booting ONTAP at the boot environment prompt............................................................................................ 44
Shutting down a node....................................................................................................................................... 44
Managing a node by using the boot menu.................................................................................................................... 45
Managing a node remotely using the SP/BMC............................................................................................................ 47
Understanding the SP....................................................................................................................................... 47
What the Baseboard Management Controller does.......................................................................................... 49
Configuring the SP/BMC network................................................................................................................... 50
Methods of managing SP/BMC firmware updates........................................................................................... 55
Accessing the SP/BMC.....................................................................................................................................57
Using online help at the SP/BMC CLI............................................................................................................. 61
Commands for managing a node remotely ......................................................................................................62
Commands for managing the SP from ONTAP................................................................................................69
ONTAP commands for BMC management...................................................................................................... 72
BMC CLI commands........................................................................................................................................73
Types of SVMs
A cluster contains four types of SVMs, which help manage the cluster, its resources, and data
access for clients and applications.
A cluster contains the following types of SVMs:
• Admin SVM
The cluster setup process automatically creates the admin SVM for the cluster. The admin
SVM represents the cluster.
• Node SVM
System Administration Reference 8
Cluster and SVM administrators
A node SVM is created when the node joins the cluster, and the node SVM represents the
individual nodes of the cluster.
• System SVM (advanced)
A system SVM is automatically created for cluster-level communications in an IPspace.
• Data SVM
A data SVM represents the data serving SVMs. After the cluster setup, a cluster administrator
must create data SVMs and add volumes to these SVMs to facilitate data access from the
cluster.
A cluster must have at least one data SVM to serve data to its clients.
Note: Unless otherwise specified, the term SVM refers to a data (data-serving) SVM.
In the CLI, SVMs are displayed as Vservers.
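As an illustrative sketch only (the SVM names shown are hypothetical, and the exact columns vary by release), a vserver show listing distinguishes the SVM types like this:

cluster1::> vserver show
                               Admin      Operational Root
Vserver     Type    Subtype    State      State       Volume     Aggregate
----------- ------- ---------- ---------- ----------- ---------- ----------
cluster1    admin   -          -          -           -          -
node1       node    -          -          -           -          -
vs0         data    default    running    running     root_vs0   aggr1
3 entries were displayed.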
ONTAP management interface basics
If you are using an AD domain user account, you must specify username in the format of
domainname\\AD_accountname (with double backslashes after the domain name) or
"domainname\AD_accountname" (enclosed in double quotation marks and with a single
backslash after the domain name).
hostname_or_IP is the host name or the IP address of the cluster management LIF or a node
management LIF. Using the cluster management LIF is recommended. You can use an IPv4 or
IPv6 address.
command is not required for SSH-interactive sessions.
$ ssh [email protected]
Password:
cluster1::> cluster show
Node Health Eligibility
--------------------- ------- ------------
node1 true true
node2 true true
2 entries were displayed.
cluster1::>
The following examples show how the user account named "john" from the domain named
"DOMAIN1" can issue an SSH request to access a cluster whose cluster management LIF is
10.72.137.28:
$ ssh DOMAIN1\\[email protected]
Password:
cluster1::> cluster show
Node Health Eligibility
--------------------- ------- ------------
node1 true true
node2 true true
2 entries were displayed.
cluster1::>
The following example shows how the user account named "joe" can issue an SSH MFA request to
access a cluster whose cluster management LIF is 10.72.137.32:
$ ssh [email protected]
Authenticated with partial success.
Password:
cluster1::> cluster show
Node Health Eligibility
--------------------- ------- ------------
node1 true true
node2 true true
2 entries were displayed.
cluster1::>
Related information
Administrator authentication and RBAC
• The number of unsuccessful login attempts since the last successful login.
• Whether the role has changed since the last login (for example, if the admin account's role
changed from "admin" to "backup").
• Whether the add, modify, or delete capabilities of the role were modified since the last login.
Note: If any of the information displayed is suspicious, you should immediately contact your
security department.
To obtain this information when you log in, the following prerequisites must be met:
• Your SSH user account must be provisioned in ONTAP.
• Your SSH security login must be created.
• Your login attempt must be successful.
Restrictions and other considerations for SSH login security
The following restrictions and considerations apply to SSH login security information:
• The information is available only for SSH-based logins.
• For group-based admin accounts, such as LDAP/NIS and AD accounts, users can view the
SSH login information if the group of which they are a member is provisioned as an admin
account in ONTAP.
However, alerts about changes to the role of the user account cannot be displayed for these
users. Also, users belonging to an AD group that has been provisioned as an admin account in
ONTAP cannot view the count of unsuccessful login attempts that occurred since the last time
they logged in.
• The information maintained for a user is deleted when the user account is deleted from
ONTAP.
• The information is not displayed for connections to applications other than SSH.
Examples of SSH login security information
The following examples demonstrate the type of information displayed after you log in.
• This message is displayed after each successful login:
management firewall policy that has Telnet or RSH enabled, and then associate the new policy
with the cluster management LIF.
About this task
ONTAP prevents you from changing predefined firewall policies, but you can create a new policy
by cloning the predefined mgmt management firewall policy, and then enabling Telnet or RSH
under the new policy. However, Telnet and RSH are not secure protocols, so you should consider
using SSH to access the cluster. SSH provides a secure remote shell and interactive network
session.
Perform the following steps to enable Telnet or RSH access to the cluster:
Steps
1. Enter the advanced privilege mode:
set advanced
2. Enable a security protocol (RSH or Telnet):
security protocol modify -application security_protocol -enabled true
3. Create a new management firewall policy based on the mgmt management firewall policy:
system services firewall policy clone -policy mgmt -new-policy-name policy-name
4. Enable Telnet or RSH in the new management firewall policy:
system services firewall policy create -policy policy-name -service security_protocol
-action allow -ip-list ip_address/netmask
To allow all IP addresses, specify -ip-list 0.0.0.0/0.
5. Associate the new policy with the cluster management LIF:
network interface modify -vserver cluster_management_LIF -lif cluster_mgmt
-firewall-policy policy-name
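Taken together, the steps above might look like the following session sketch; Telnet is used as the example protocol, and the policy name "telnet-mgmt" and SVM name "cluster1" are hypothetical:

cluster1::> set -privilege advanced
cluster1::*> security protocol modify -application telnet -enabled true
cluster1::*> system services firewall policy clone -policy mgmt -new-policy-name telnet-mgmt
cluster1::*> system services firewall policy create -policy telnet-mgmt -service telnet -action allow -ip-list 0.0.0.0/0
cluster1::*> network interface modify -vserver cluster1 -lif cluster_mgmt -firewall-policy telnet-mgmt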
Related information
Network and LIF management
Understanding the different shells for CLI commands (cluster administrators only)
The cluster has three different shells for CLI commands: the clustershell, the nodeshell, and the
systemshell. Each shell serves a different purpose and has its own command set.
• The clustershell is the native shell that is started automatically when you log in to the cluster.
It provides all the commands you need to configure and manage the cluster. The clustershell
CLI help (triggered by ? at the clustershell prompt) displays available clustershell commands.
The man command_name command in the clustershell displays the man page for the specified
clustershell command.
• The nodeshell is a special shell for commands that take effect only at the node level.
The nodeshell is accessible through the system node run command.
The nodeshell CLI help (triggered by ? or help at the nodeshell prompt) displays available
nodeshell commands. The man command_name command in the nodeshell displays the man
page for the specified nodeshell command.
Many commonly used nodeshell commands and options are tunneled or aliased into the
clustershell and can be executed also from the clustershell.
• The systemshell is a low-level shell that is used only for diagnostic and troubleshooting
purposes.
The systemshell and the associated "diag" account are intended for low-level diagnostic
purposes. Their access requires the diagnostic privilege level and is reserved only for technical
support to perform troubleshooting tasks.
If you enter a nodeshell or legacy command or option in the clustershell, and the command or
option has an equivalent clustershell command, ONTAP informs you of the clustershell command
to use.
If you enter a nodeshell or legacy command or option that is not supported in the clustershell,
ONTAP informs you of the "not supported" status for the command or option.
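As a sketch of this behavior, a nodeshell command such as version can be run from within the nodeshell or invoked directly from the clustershell; the node name "node1" is hypothetical, and the output is abbreviated:

cluster1::> system node run -node node1
node1> version
...
node1> exit
cluster1::> system node run -node node1 -command version
...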
Related tasks
Displaying available nodeshell commands on page 16
You can obtain a list of available nodeshell commands by using the CLI help from the nodeshell.
Note: The system node run command has an alias command, run.
2. Enter the following command in the nodeshell to see the list of available nodeshell commands:
[commandname] help
commandname is the name of the command whose availability you want to display. If you do
not include commandname, the CLI displays all available nodeshell commands.
Enter exit or press Ctrl-D to return to the clustershell CLI.
Related concepts
Access of nodeshell commands and options in the clustershell on page 16
Nodeshell commands and options are accessible through the nodeshell (system node run
-node nodename). Many commonly used nodeshell commands and options are tunneled or aliased
into the clustershell and can be executed also from the clustershell.
cluster1::> storage
cluster1::storage> disk
cluster1::storage disk> show
You can abbreviate commands by entering only the minimum number of letters in a command that
makes the command unique to the current directory. For example, to abbreviate the command in
the previous example, you can enter st d sh. You can also use the Tab key to expand abbreviated
commands and to display a command's parameters, including default parameter values.
You can use the top command to go to the top level of the command hierarchy, and the up
command or .. command to go up one level in the command hierarchy.
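Continuing the previous example, a short sketch of moving back up the hierarchy:

cluster1::storage disk> up
cluster1::storage> top
cluster1::>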
Note: Commands and command options preceded by an asterisk (*) in the CLI can be executed
only at the advanced privilege level or higher.
or a query character (when not meant as a query or text starting with a less-than or greater-than
symbol), you must enclose the entity in quotation marks.
• The CLI interprets a question mark ("?") as the command to display help information for a
particular command.
• Some text that you enter in the CLI, such as command names, parameters, and certain values,
is not case-sensitive.
For example, when you enter parameter values for the vserver cifs commands,
capitalization is ignored. However, most parameter values, such as the names of nodes, storage
virtual machines (SVMs), aggregates, volumes, and logical interfaces, are case-sensitive.
• If you want to clear the value of a parameter that takes a string or a list, you specify an empty
set of quotation marks ("") or a dash ("-").
• The hash sign ("#"), also known as the pound sign, indicates a comment for a command-line
input; if used, it should appear after the last parameter in a command line.
The CLI ignores the text between "#" and the end of the line.
In the following example, an SVM is created with a text comment. The SVM is then modified to
delete the comment:
cluster1::> vserver create -vserver vs0 -subtype default -rootvolume root_vs0
-aggregate aggr1 -rootvolume-security-style unix -language C.UTF-8
-is-repository false -ipspace ipspaceA -comment "My SVM"
cluster1::> vserver modify -vserver vs0 -comment ""
In the following example, a command-line comment that uses the "#" sign indicates what the
command does.
cluster1::> security login create -vserver vs0 -user-or-group-name new-admin
-application ssh -authmethod password #This command creates a new user account
cluster1::> redo -3
Replace the current content of the command line with the next entry on the history list:
Ctrl-N, Esc-N, or the Down arrow key. With each repetition of the keyboard shortcut, the
history cursor moves to the next entry.
Related reference
Use of administrative privilege levels on page 20
ONTAP commands and parameters are defined at three privilege levels: admin, advanced, and
diagnostic. The privilege levels reflect the skill levels required in performing the tasks.
For more information, see the man pages for the set command and rows command.
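As a sketch, switching privilege levels changes the prompt (an asterisk indicates the advanced level), and the rows command controls how many rows are displayed per page; the exact warning text varies by release:

cluster1::> set -privilege advanced
Warning: These advanced commands are potentially dangerous; use them only when
directed to do so by NetApp personnel.
Do you want to continue? {y|n}: y
cluster1::*> rows 40
cluster1::*> set -privilege admin
cluster1::>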
Operator Description
* Wildcard that matches all entries.
For example, the command volume show -volume *tmp* displays a list of all volumes
whose names include the string tmp.
! NOT operator.
Indicates a value that is not to be matched; for example, !vs0 indicates not to match the
value vs0.
| OR operator.
Separates two values that are to be compared; for example, vs0 | vs2 matches either vs0
or vs2. You can specify multiple OR statements; for example, a | b* | *c* matches the
entry a, any entry that starts with b, and any entry that includes c.
.. Range operator.
For example, 5..10 matches any value from 5 to 10, inclusive.
< Less-than operator.
For example, <20 matches any value that is less than 20.
> Greater-than operator.
For example, >5 matches any value that is greater than 5.
<= Less-than-or-equal-to operator.
For example, <=5 matches any value that is less than or equal to 5.
>= Greater-than-or-equal-to operator.
For example, >=5 matches any value that is greater than or equal to 5.
{query} Extended query.
An extended query must be specified as the first argument after the command name, before
any other parameters.
For example, the command volume modify {-volume *tmp*} -state offline sets
offline all volumes whose names include the string tmp.
If you want to parse query characters as literals, you must enclose the characters in double quotes
(for example, "^", ".", "*", or "$") for the correct results to be returned.
You can use multiple query operators in one command line. For example, the command volume
show -size >1GB -percent-used <50 -vserver !vs1 displays all volumes that are
greater than 1 GB in size, less than 50% utilized, and not in the storage virtual machine (SVM)
named "vs1".
Extended queries are generally useful only with modify and delete commands. They have no
meaning in create or show commands.
The combination of queries and modify operations is a useful tool. However, it can potentially
cause confusion and errors if implemented incorrectly. For example, using the (advanced
privilege) system node image modify command to set a node's default software image
automatically sets the other software image not to be the default. The command in the following
example is effectively a null operation:
This command sets the current default image as the non-default image, then sets the new default
image (the previous non-default image) to the non-default image, resulting in the original default
settings being retained. To perform the operation correctly, you can use the command as given in
the following example:
However, when displayed as the following, the parameter is nonpositional for the command it
appears in:
• -lif <lif-name>
• [-lif <lif-name>]
In the following example, the volume create command is specified without taking advantage of
the positional parameter functionality:
cluster1::> volume create -vserver svm1 -volume vol1 -aggregate aggr1 -size 1g
-percent-snapshot-space 0
The following examples use the positional parameter functionality to increase the efficiency of the
command input. The positional parameters are interspersed with nonpositional parameters in the
volume create command, and the positional parameter values are specified without the
parameter names. The positional parameters are specified in the same sequence indicated by the
volume create ? output. That is, the value for -volume is specified before that of
-aggregate, which is in turn specified before that of -size.
cluster1::> volume create vol2 aggr1 1g -vserver svm1 -percent-snapshot-space 0
cluster1::> volume create -vserver svm1 vol3 -snapshot-policy default aggr1 -nvfail off 1g
-space-guarantee none
The Commands: Manual Page Reference is a compilation of man pages for the admin-level and
advanced-level ONTAP commands. It is available on the NetApp Support Site.
Related information
NetApp Support Site: mysupport.netapp.com
To modify the automatic timeout period for CLI sessions, use the system timeout modify command.
Related information
ONTAP 9 commands
Steps
1. Point the web browser to the IP address of the cluster management LIF:
• If you are using IPv4: https://cluster-mgmt-LIF
• If you are using IPv6: https://[cluster-mgmt-LIF]
Only HTTPS is supported for browser access of ONTAP System Manager.
If the cluster uses a self-signed digital certificate, the browser might display a warning
indicating that the certificate is not trusted. You can either acknowledge the risk to continue the
access or install a Certificate Authority (CA) signed digital certificate on the cluster for server
authentication.
2. Optional: If you have configured an access banner by using the CLI, then read the message that
is displayed in the Warning dialog box, and choose the required option to proceed.
This option is not supported on systems on which Security Assertion Markup Language
(SAML) authentication is enabled.
• If you do not want to continue, click Cancel, and close the browser.
• If you want to continue, click OK to navigate to the ONTAP System Manager login page.
3. Log in to ONTAP System Manager by using your cluster administrator credentials.
Related concepts
Managing access to web services on page 133
A web service is an application that users can access by using HTTP or HTTPS. The cluster
administrator can set up the web protocol engine, configure SSL, enable a web service, and enable
users of a role to access a web service.
Related tasks
Accessing a node's log, core dump, and MIB files by using a web browser on page 38
The Service Processor Infrastructure (spi) web service is enabled by default to enable a web
browser to access the log, core dump, and MIB files of a node in the cluster. The files remain
accessible even when the node is down, provided that the node is taken over by its partner.
Note: The name of System Manager has changed from previous versions. Versions 9.5 and
earlier were named OnCommand System Manager. Versions 9.6 and later are now called
ONTAP System Manager.
System Manager enables you to perform many common tasks such as the following:
• Create a cluster, configure a network, and set up support details for the cluster.
• Configure and manage storage objects such as disks, aggregates, volumes, qtrees, and quotas.
• Configure protocols such as CIFS and NFS, and provision file sharing.
• Configure protocols such as FC, FCoE, NVMe, and iSCSI for block access.
• Create and configure network components such as subnets, broadcast domains, data and
management interfaces, and interface groups.
• Set up and manage mirroring and vaulting relationships.
• Perform cluster management, storage node management, and storage virtual machine (SVM)
management operations.
• Create and configure SVMs, manage storage objects associated with SVMs, and manage SVM
services.
• Monitor and manage HA configurations in a cluster.
• Configure Service Processors to remotely log in, manage, monitor, and administer the node,
regardless of the state of the node.
For more information about System Manager, see the NetApp Support Site.
Related information
NetApp Support Site: mysupport.netapp.com
The following example displays detailed information about the node named "node1" at the
advanced privilege level:
Node: node1
Node UUID: a67f9f34-9d8f-11da-b484-000423b6f094
Epsilon: false
Eligibility: true
Health: true
Epsilon is automatically assigned to the first node when the cluster is created. If the node that
holds epsilon becomes unhealthy, takes over its high-availability partner, or is taken over by its
high-availability partner, then epsilon is automatically reassigned to a healthy node in a different
HA pair.
Taking a node offline can affect the ability of the cluster to remain in quorum. Therefore, ONTAP
issues a warning message if you attempt an operation that will either take the cluster out of
quorum or else put it one outage away from a loss of quorum. You can disable the quorum warning
messages by using the cluster quorum-service options modify command at the
advanced privilege level.
In general, assuming reliable connectivity among the nodes of the cluster, a larger cluster is more
stable than a smaller cluster. The quorum requirement of a simple majority of half the nodes plus
epsilon is easier to maintain in a cluster of 24 nodes than in a cluster of two nodes.
A two-node cluster presents some unique challenges for maintaining quorum. Two-node clusters
use cluster HA, in which neither node holds epsilon; instead, both nodes are continuously polled to
ensure that if one node fails, the other has full read-write access to data, as well as access to
logical interfaces and management functions.
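As a sketch, you can verify whether cluster HA is configured on a two-node cluster; the output shown is illustrative:

cluster1::> cluster ha show
High Availability Configured: true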
You can view system volumes by using the volume show command, but most other volume
operations are not permitted. For example, you cannot modify a system volume by using the
volume modify command.
This example shows four system volumes on the admin SVM, which were automatically created
when file services auditing was enabled for a data SVM in the cluster:
cluster1::> volume show -vserver cluster1
Vserver Volume Aggregate State Type Size Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
cluster1 MDV_aud_1d0131843d4811e296fc123478563412
aggr0 online RW 2GB 1.90GB 5%
cluster1 MDV_aud_8be27f813d7311e296fc123478563412
root_vs0 online RW 2GB 1.90GB 5%
cluster1 MDV_aud_9dc4ad503d7311e296fc123478563412
aggr1 online RW 2GB 1.90GB 5%
cluster1 MDV_aud_a4b887ac3d7311e296fc123478563412
aggr2 online RW 2GB 1.90GB 5%
4 entries were displayed.
Managing nodes
A node is a controller in a cluster. You can display information about a node, set node attributes,
rename a node, add or remove a node, or start or stop a node. You can also manage a node
remotely by using the Service Processor (SP).
A node is connected to other nodes in the cluster over a cluster network. It is also connected to the
disk shelves that provide physical storage for the ONTAP system or to storage arrays that provide
array LUNs for ONTAP use. Services and components that are controlled by the node, not by the
cluster, can be managed by using the system node commands.
A node SVM represents a node in the cluster. If you run the vserver show command, the output
includes node SVMs in the list.
Step
Use the system node modify command to modify a node's attributes.
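For example, a sketch of marking a node ineligible to participate in the cluster (the node name "node1" is hypothetical):

cluster1::> system node modify -node node1 -eligibility false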
Renaming a node
You can change a node's name as required.
Step
To rename a node, use the system node rename command.
The -newname parameter specifies the new name for the node. The system node rename man
page describes the rules for specifying the node name.
If you want to rename multiple nodes in the cluster, you must run the command for each node
individually.
Note: A node cannot be named "all" because "all" is a system-reserved name.
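A sketch of renaming a single node; both names are hypothetical:

cluster1::> system node rename -node node1 -newname node1a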
◦ IP address
◦ Netmask
◦ Default gateway
About this task
The cluster must contain an even number of nodes so that the nodes can form HA pairs. After
you start to add a node to the cluster, you must complete the process. The node must be part of
the cluster before you can start to add another node.
Steps
1. Power on the node that you want to add to the cluster.
The node boots, and the Node Setup wizard starts on the console.
You can return to cluster setup at any time by typing "cluster setup".
To accept a default or omit a question, do not enter a value....
Note: For more information on setting up a cluster using the setup GUI, see the ONTAP
System Manager online help.
5. Press Enter to use the CLI to complete this task. When prompted to create a new cluster or join
an existing one, enter join.
Do you want to create a new cluster or join an existing cluster? {create, join}:
join
6. Follow the prompts to set up the node and join it to the cluster:
Note: You must not perform operations such as cluster unjoin and node rename when the
automated upgrade is in progress. If you are planning to perform cluster unjoin and node
rename operations, then before initiating an upgrade, you must remove all of the ONTAP
images from the package repository by using the cluster image package delete
command.
Steps
1. Change the privilege level to advanced:
set -privilege advanced
2. If Storage Encryption is enabled on the cluster and the self-encrypting disks (SEDs) are
operating, you must set the data-id-key to 0x0.
a. Set the data-id-key to 0x0:
Accessing a node's log, core dump, and MIB files by using a web browser
The Service Processor Infrastructure (spi) web service is enabled by default to enable a web
browser to access the log, core dump, and MIB files of a node in the cluster. The files remain
accessible even when the node is down, provided that the node is taken over by its partner.
Before you begin
• The cluster management LIF must be up.
You can use the management LIF of the cluster or a node to access the spi web service.
However, using the cluster management LIF is recommended.
The network interface show command displays the status of all LIFs in the cluster.
• If your user account does not have the "admin" role (which has access to the spi web service
by default), your access-control role must be granted access to the spi web service.
The vserver services web access show command shows what roles are granted access
to which web services.
• If you are not using the "admin" user account (which includes the http access method by
default), your user account must be set up with the http access method.
The security login show command shows user accounts' access and login methods and
their access-control roles.
• If you want to use HTTPS for secure web access, SSL must be enabled and a digital certificate
must be installed.
The system services web show command displays the configuration of the web protocol
engine at the cluster level.
About this task
The spi web service is enabled by default, and the service can be disabled manually (vserver
services web modify -vserver * -name spi -enabled false).
The "admin" role is granted access to the spi web service by default, and the access can be
disabled manually (vserver services web access delete -vserver cluster_name
-name spi -role admin).
Steps
1. Point the web browser to the spi web service URL in one of the following formats:
• http://cluster-mgmt-LIF/spi/
• https://cluster-mgmt-LIF/spi/
Related concepts
How ONTAP implements audit logging on page 74
Management activities recorded in the audit log are included in standard AutoSupport reports, and
certain logging activities are included in EMS messages. You can also forward the audit log to
destinations that you specify, and you can display audit log files by using the CLI or a web
browser.
Managing the web protocol engine on page 134
You can configure the web protocol engine on the cluster to control whether web access is allowed
and what SSL versions can be used. You can also display the configuration settings for the web
protocol engine.
Managing web services on page 139
You can enable or disable a web service for the cluster or a storage virtual machine (SVM),
display the settings for web services, and control whether users of a role can access a web service.
Managing access to web services on page 133
A web service is an application that users can access by using HTTP or HTTPS. The cluster
administrator can set up the web protocol engine, configure SSL, enable a web service, and enable
users of a role to access a web service.
Managing SSL on page 141
The SSL protocol improves the security of web access by using a digital certificate to establish an
encrypted connection between a web server and a browser.
Managing core dumps (cluster administrators only) on page 101
When a node panics, a core dump occurs and the system creates a core dump file that technical
support can use to troubleshoot the problem. You can configure or display core dump attributes.
You can also save, display, segment, upload, or delete a core dump file.
Related information
Network and LIF management
The following example shows the result of entering the system node run-console command
from ONTAP to access the system console of node2, which is hanging at the boot environment
prompt. The boot_ontap command is entered at the console to boot node2 to ONTAP. Ctrl-D is
then pressed to exit the console and return to ONTAP.
LOADER>
LOADER> boot_ontap
...
*******************************
* *
* Press Ctrl-C for Boot Menu. *
* *
*******************************
...
Related concepts
Relationship among the SP CLI, SP console, and system console sessions on page 59
You can open an SP CLI session to manage a node remotely and open a separate SP console
session to access the console of the node. The SP console session mirrors output displayed in a
concurrent system console session. The SP and the system console have independent shell
environments with independent login authentication.
Related tasks
Accessing the cluster by using SSH on page 9
System Administration Reference 41
Managing nodes
You can issue SSH requests to the cluster to perform administrative tasks. SSH is enabled by
default.
Accessing the SP/BMC from an administration host on page 57
You can log in to the SP of a node from an administration host to perform node management tasks
remotely.
Booting ONTAP at the boot environment prompt on page 44
You can boot the current release or the backup release of ONTAP when you are at the boot
environment prompt of a node.
b. If any packet trace files (*.trc) are in the node's root volume, delete them individually:
rm /etc/log/packet_traces/file_name.trc
6. Identify and delete the node’s root volume Snapshot copies through the nodeshell:
a. Identify the root volume name:
vol status
The root volume is indicated by the word "root" in the "Options" column of the vol
status command output.
In the following example, the root volume is vol0:
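Representative nodeshell output might look like the following (illustrative values; the exact
columns and options vary by release). The word "root" in the Options column identifies the
root volume:

         Volume State      Status            Options
           vol0 online     raid_dp, flex     root, nosnap=on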
Result
If all of the pre-checks are successful, the command starts a root volume replacement job and
exits.
The node begins the reboot process. The ONTAP login prompt appears, indicating that the
reboot process is complete.
To boot...                               Enter...
The current release of ONTAP             boot_ontap
If you are unsure which image to use, start with boot_ontap.
Related tasks
Accessing the system console of a node on page 39
If a node is hanging at the boot menu or the boot environment prompt, you can access it only
through the system console (also called the serial console). You can access the system console of a
node from an SSH connection to the node's SP or to the cluster.
Note: Boot menu option (2) Boot without /etc/rc is obsolete and has no effect on the
system.
3. Select one of the following options by entering the corresponding number:
To...                                    Select...
Continue to boot the node in normal      1) Normal Boot
mode
Change the password of the node,         3) Change Password
which is also the "admin" account
password
Initialize the node's disks and          4) Clean configuration and initialize all disks
create a root volume for the node
Attention: This menu option erases all data on the disks of the node and resets
your node configuration to the factory default settings.
Select this menu item only after the node has been removed from a cluster
(unjoined) and is not joined to another cluster; see Removing nodes from the
cluster on page 36.
For a node with internal or external disk shelves, the root volume on the
internal disks is initialized. If there are no internal disk shelves, the root
volume on the external disks is initialized.
For a system running FlexArray Virtualization with internal or external disk
shelves, the array LUNs are not initialized. Any native disks on either
internal or external shelves are initialized.
For a system running FlexArray Virtualization with only array LUNs and no
internal or external disk shelves, the root volume on the storage array LUNs is
initialized; see the FlexArray® Virtualization Installation Requirements and
Reference guide.
If the node that you want to initialize has disks that are partitioned for
root-data partitioning, the disks must be unpartitioned before the node can be
initialized; see option 9) Configure Advanced Drive Partitioning and the Disks
and Aggregates Power Guide.
To...                                    Select...
Unpartition all disks and remove         9) Configure Advanced Drive Partitioning
their ownership information, or          Beginning with ONTAP 9.2, the Advanced Drive
clean the configuration and              Partitioning option provides additional
initialize the system with whole or      management features for disks that are
partitioned disks                        configured for root-data or root-data-data
                                         partitioning. The following options are
                                         available from Boot Option 9:
Related tasks
Accessing the system console of a node on page 39
If a node is hanging at the boot menu or the boot environment prompt, you can access it only
through the system console (also called the serial console). You can access the system console of a
node from an SSH connection to the node's SP or to the cluster.
Related information
Disk and aggregate management
Understanding the SP
The Service Processor (SP) is a remote management device that enables you to access, monitor,
and troubleshoot a node remotely.
The key capabilities of the SP include the following:
• The SP enables you to access a node remotely to diagnose, shut down, power-cycle, or reboot
the node, regardless of the state of the node controller.
The SP is powered by a standby voltage, which is available as long as the node has input power
to at least one of its power supplies.
You can log in to the SP by using a Secure Shell client application from an administration host.
You can then use the SP CLI to monitor and troubleshoot the node remotely. In addition, you
can use the SP to access the serial console and run ONTAP commands remotely.
You can access the SP from the serial console or access the serial console from the SP. The SP
enables you to open both an SP CLI session and a separate console session simultaneously.
For instance, when a temperature sensor becomes critically high or low, ONTAP triggers the
SP to shut down the motherboard gracefully. The serial console becomes unresponsive, but you
can still press Ctrl-G on the console to access the SP CLI. You can then use the system
power on or system power cycle command from the SP to power on or power-cycle the
node.
• The SP monitors environmental sensors and logs events to help you take timely and effective
service actions.
The SP monitors environmental sensors such as the node temperatures, voltages, currents, and
fan speeds. When an environmental sensor has reached an abnormal condition, the SP logs the
abnormal readings, notifies ONTAP of the issue, and sends alerts and "down system"
notifications as necessary through an AutoSupport message, regardless of whether the node
can send AutoSupport messages.
The SP also logs events such as boot progress, Field Replaceable Unit (FRU) changes, events
generated by ONTAP, and SP command history. You can manually invoke an AutoSupport
message to include the SP log files that are collected from a specified node.
Other than generating these messages on behalf of a node that is down and attaching additional
diagnostic information to AutoSupport messages, the SP has no effect on the AutoSupport
functionality. The AutoSupport configuration settings and message content behavior are
inherited from ONTAP.
Note: The SP does not rely on the -transport parameter setting of the system node
autosupport modify command to send notifications. The SP only uses the Simple Mail
Transport Protocol (SMTP) and requires its host’s AutoSupport configuration to include
mail host information.
If SNMP is enabled, the SP generates SNMP traps to configured trap hosts for all "down
system" events.
• The SP has a nonvolatile memory buffer that stores up to 4,000 events in a system event log
(SEL) to help you diagnose issues.
The SEL stores each audit log entry as an audit event. It is stored in onboard flash memory on
the SP. The event list from the SEL is automatically sent by the SP to specified recipients
through an AutoSupport message.
The SEL contains the following information:
◦ Hardware events detected by the SP—for example, sensor status about power supplies,
voltage, or other components
◦ Errors detected by the SP—for example, a communication error, a fan failure, or a memory
or CPU error
◦ Critical software events sent to the SP by the node—for example, a panic, a communication
failure, a boot failure, or a user-triggered "down system" as a result of issuing the SP
system reset or system power cycle command
• The SP monitors the serial console regardless of whether administrators are logged in or
connected to the console.
When messages are sent to the console, the SP stores them in the console log. The console log
persists as long as the SP has power from either of the node power supplies. Because the SP
operates with standby power, it remains available even when the node is power-cycled or
turned off.
• Hardware-assisted takeover is available if the SP is configured.
• The SP API service enables ONTAP to communicate with the SP over the network.
The service enhances ONTAP management of the SP by supporting network-based
functionality such as using the network interface for the SP firmware update, enabling a node
to access another node's SP functionality or system console, and uploading the SP log from
another node.
You can modify the configuration of the SP API service by changing the port the service uses,
renewing the SSL and SSH certificates that are used by the service for internal communication,
or disabling the service entirely.
The following diagram illustrates access to ONTAP and the SP of a node. The SP interface is
accessed through the Ethernet port (indicated by a wrench icon on the rear of the chassis):
[Diagram: A local administrator connects to the system console through the
serial (COM1) port of the controller and presses Ctrl-G to access the SP CLI,
or Ctrl-D and Enter to exit the SP and return to the serial console. A remote
administrator reaches the SP CLI over SSH, and reaches ONTAP through the
network interfaces supported by the controller, over the Ethernet network.]
Related information
High-availability configuration
Related reference
ONTAP commands for BMC management on page 72
These ONTAP commands are supported on the Baseboard Management Controller (BMC).
BMC CLI commands on page 73
You can log into the BMC using SSH. The following commands are supported from the BMC
command line.
[Diagram: Within the storage controller, the remote management device (SP/BMC)
attaches to the management LAN, and the e0M interface connects ONTAP to the
management LAN; data traffic flows over a separate data LAN.]
If you plan to use both the remote management device and e0M, you must configure them on the
same IP subnet. Since these are low-bandwidth interfaces, the best practice is to configure
SP/BMC and e0M on a subnet dedicated to management traffic.
If you cannot isolate management traffic, or if your dedicated management network is unusually
large, you should try to keep the volume of network traffic as low as possible. Excessive ingress
broadcast or multicast traffic may degrade SP/BMC performance.
Note: Some storage controllers, such as the AFF A800, have two external ports, one for BMC
and the other for e0M. For these controllers, there is no requirement to configure BMC and e0M
on the same IP subnet.
The SP automatic network configuration enables the SP to use address resources (including the IP
address, subnet mask, and gateway address) from the specified subnet to set up its network
automatically. With the SP automatic network configuration, you do not need to manually assign
IP addresses for the SP of each node. By default, the SP automatic network configuration is
disabled; this is because enabling the configuration requires that the subnet to be used for the
configuration be defined in the cluster first.
If you enable the SP automatic network configuration, the following scenarios and considerations
apply:
• If the SP has never been configured, the SP network is configured automatically based on the
subnet specified for the SP automatic network configuration.
• If the SP was previously configured manually, or if the existing SP network configuration is
based on a different subnet, the SP network of all nodes in the cluster is reconfigured based
on the subnet that you specify in the SP automatic network configuration.
The reconfiguration could result in the SP being assigned a different address, which might have
an impact on your DNS configuration and its ability to resolve SP host names. As a result, you
might need to update your DNS configuration.
• A node that joins the cluster uses the specified subnet to configure its SP network
automatically.
• The system service-processor network modify command does not enable you to
change the SP IP address.
When the SP automatic network configuration is enabled, the command only allows you to
enable or disable the SP network interface.
◦ If the SP automatic network configuration was previously enabled, disabling the SP
network interface results in the assigned address resource being released and returned to the
subnet.
◦ If you disable the SP network interface and then reenable it, the SP might be reconfigured
with a different address.
If the SP automatic network configuration is disabled (the default), the following scenarios and
considerations apply:
• If the SP has never been configured, SP IPv4 network configuration defaults to using IPv4
DHCP, and IPv6 is disabled.
A node that joins the cluster also uses IPv4 DHCP for its SP network configuration by default.
• The system service-processor network modify command enables you to configure a
node's SP IP address.
A warning message appears when you attempt to manually configure the SP network with
addresses that are allocated to a subnet. Ignoring the warning and proceeding with the manual
address assignment might result in a scenario with duplicate addresses.
If the SP automatic network configuration is disabled after having been enabled previously, the
following scenarios and considerations apply:
• If the SP automatic network configuration has the IPv4 address family disabled, the SP IPv4
network defaults to using DHCP, and the system service-processor network modify
command enables you to modify the SP IPv4 configuration for individual nodes.
• If the SP automatic network configuration has the IPv6 address family disabled, the SP IPv6
network is also disabled, and the system service-processor network modify
command enables you to enable and modify the SP IPv6 configuration for individual nodes.
Related tasks
Enabling the SP/BMC automatic network configuration on page 52
Enabling the SP to use automatic network configuration is preferred over manually configuring the
SP network. Because the SP automatic network configuration is cluster wide, you do not need to
manually manage the SP network for individual nodes.
Configuring the SP/BMC network manually on page 53
If you do not have automatic network configuration set up for the SP, you must manually
configure a node's SP network for the SP to be accessible by using an IP address.
• The subnet you want to use for the SP automatic network configuration must already be
defined in the cluster and must have no resource conflicts with the SP network interface.
The network subnet show command displays subnet information for the cluster.
The parameter that forces subnet association (the -force-update-lif-associations
parameter of the network subnet commands) is supported only on network LIFs and not on
the SP network interface.
• If you want to use IPv6 connections for the SP, IPv6 must already be configured and enabled
for ONTAP.
The network options ipv6 show command displays the current state of IPv6 settings for
ONTAP.
Steps
1. Specify the IPv4 or IPv6 address family and name for the subnet that you want the SP to use
by using the system service-processor network auto-configuration enable
command.
2. Display the SP automatic network configuration by using the system service-processor
network auto-configuration show command.
3. If you subsequently want to disable or reenable the SP IPv4 or IPv6 network interface for all
nodes that are in quorum, use the system service-processor network modify
command with the -address-family [IPv4|IPv6] and -enable [true|false] parameters.
When the SP automatic network configuration is enabled, you cannot modify the SP IP address
for a node that is in quorum. You can only enable or disable the SP IPv4 or IPv6 network
interface.
If a node is out of quorum, you can modify the node's SP network configuration, including the
SP IP address, by running system service-processor network modify from the node
and confirming that you want to override the SP automatic network configuration for the node.
However, when the node joins the quorum, the SP automatic reconfiguration takes place for the
node based on the specified subnet.
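The steps above might look like the following session. The subnet name sp_subnet is a
placeholder for a subnet already defined in the cluster, and the -subnet-name parameter name
is an assumption; verify the exact parameter with the CLI online help:

cluster1::> system service-processor network auto-configuration enable -address-family IPv4 -subnet-name sp_subnet
cluster1::> system service-processor network auto-configuration show
cluster1::> system service-processor network modify -address-family IPv4 -enable false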
Related concepts
Considerations for the SP/BMC network configuration on page 50
You can enable cluster-level, automatic network configuration for the SP (recommended). You can
also leave the SP automatic network configuration disabled (the default) and manage the SP
network configuration manually at the node level. A few considerations exist for each case.
Related information
Network and LIF management
You can configure the SP to use IPv4, IPv6, or both. The SP IPv4 configuration supports static and
DHCP addressing, and the SP IPv6 configuration supports static addressing only.
If the SP automatic network configuration has been set up, you do not need to manually configure
the SP network for individual nodes, and the system service-processor network modify
command allows you to only enable or disable the SP network interface.
Steps
1. Configure the SP network for a node by using the system service-processor
network modify command.
• The -address-family parameter specifies whether the IPv4 or IPv6 configuration of the
SP is to be modified.
• The -enable parameter enables the network interface of the specified IP address family.
• The -dhcp parameter specifies whether to use the network configuration from the DHCP
server or the network address that you provide.
You can enable DHCP (by setting -dhcp to v4) only if you are using IPv4. You cannot
enable DHCP for IPv6 configurations.
• The -ip-address parameter specifies the public IP address for the SP.
A warning message appears when you attempt to manually configure the SP network with
addresses that are allocated to a subnet. Ignoring the warning and proceeding with the
manual address assignment might result in a duplicate address assignment.
• The -netmask parameter specifies the netmask for the SP (if using IPv4).
• The -prefix-length parameter specifies the network prefix-length of the subnet mask
for the SP (if using IPv6).
• The -gateway parameter specifies the gateway IP address for the SP.
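For example, a static IPv4 configuration for one node's SP, using the parameters described
above, might be entered as follows. All address values are placeholders, and the -node
parameter is an assumption; confirm it with the CLI online help:

cluster1::> system service-processor network modify -node local -address-family IPv4 -enable true -dhcp none -ip-address 192.168.123.98 -netmask 255.255.255.0 -gateway 192.168.123.1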
2. Configure the SP network for the remaining nodes in the cluster by repeating step 1.
3. Display the SP network configuration and verify the SP setup status by using the system
service-processor network show command with the -instance or -fields
setup-status parameter.
The SP setup status for a node can be one of the following:
• not-setup – Not configured
• succeeded – Configuration succeeded
• in-progress – Configuration in progress
• failed – Configuration failed
Node: node1
Address Type: IPv4
Interface Enabled: true
Type of Device: SP
Status: online
Link Status: up
DHCP Status: none
IP Address: 192.168.123.98
MAC Address: ab:cd:ef:fe:ed:02
Netmask: 255.255.255.0
Prefix Length of Subnet Mask: -
Router Assigned IP Address: -
Link Local IP Address: -
Gateway IP Address: 192.168.123.1
Time Last Updated: Thu Apr 10 17:02:13 UTC 2014
Subnet Name: -
Enable IPv6 Router Assigned Address: -
SP Network Setup Status: succeeded
SP Network Setup Failure Reason: -
cluster1::>
Related concepts
Considerations for the SP/BMC network configuration on page 50
You can enable cluster-level, automatic network configuration for the SP (recommended). You can
also leave the SP automatic network configuration disabled (the default) and manage the SP
network configuration manually at the node level. A few considerations exist for each case.
Related tasks
Enabling the SP/BMC automatic network configuration on page 52
Enabling the SP to use automatic network configuration is preferred over manually configuring the
SP network. Because the SP automatic network configuration is cluster wide, you do not need to
manually manage the SP network for individual nodes.
Related information
Network and LIF management
If the SP API service is disabled, the API does not accept any incoming connections. In
addition, functionality such as network-based SP firmware updates and network-based SP
"down system" log collection becomes unavailable. The system switches to using the serial
interface.
Steps
1. Switch to the advanced privilege level by using the set -privilege advanced command.
2. Disable or reenable the SP API service by using the system service-processor
api-service modify command with the -is-enabled {true|false} parameter.
3. Display the SP API service configuration by using the system service-processor api-
service show command.
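As a sketch, the steps above combine into the following session, which disables and then
displays the service. The -is-enabled parameter name follows the step above; confirm it with
the CLI online help:

cluster1::> set -privilege advanced
cluster1::*> system service-processor api-service modify -is-enabled false
cluster1::*> system service-processor api-service show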
the functionality enabled. Disabling the functionality can result in suboptimal or nonqualified
combinations between the ONTAP image and the SP firmware image.
• ONTAP enables you to trigger an SP update manually and specify how the update should take
place by using the system service-processor image update command.
You can specify the following options:
◦ The SP firmware package to use (-package)
You can update the SP firmware to a downloaded package by specifying the package file
name. The system image package show command displays all package files
(including the files for the SP firmware package) that are available on a node.
◦ Whether to use the baseline SP firmware package for the SP update (-baseline)
You can update the SP firmware to the baseline version that is bundled with the currently
running version of ONTAP.
◦ Whether to update the entire firmware image or only the changed portions, using the
network interface or the serial interface (-update-type)
◦ If updating the entire firmware image, whether to also reset log settings to the factory
defaults and clear contents of all logs maintained by the SP, including the event logs, IPMI
logs, and forensics logs (-clear-logs)
Note: If you use some of the more advanced update options or parameters, the BMC's
configuration settings may be temporarily cleared. After reboot, it can take up to 10 minutes
for ONTAP to restore the BMC configuration.
• ONTAP enables you to display the status for the latest SP firmware update triggered from
ONTAP by using the system service-processor image update-progress show
command.
Any existing connection to the SP is terminated when the SP firmware is being updated. This is
the case whether the SP firmware update is automatically or manually triggered.
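For instance, a manually triggered update to the baseline firmware, followed by a progress
check, might be entered as follows. The node name and the true value for -baseline are
assumptions; the command and parameter names follow the list above:

cluster1::> system service-processor image update -node node1 -baseline true
cluster1::> system service-processor image update-progress show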
Related information
NetApp Downloads: System Firmware and Diagnostics
ONTAP 9 commands
When the SP/BMC uses the network interface for firmware updates
An SP firmware update that is triggered from ONTAP with the SP running version 1.5, 2.5, 3.1, or
later supports using an IP-based file transfer mechanism over the SP network interface.
Note: This topic applies to both the SP and the BMC.
An SP firmware update over the network interface is faster than an update over the serial interface.
It reduces the maintenance window during which the SP firmware is being updated, and it is also
nondisruptive to ONTAP operation. The SP versions that support this capability are included
with ONTAP. They are also available on the NetApp Support Site and can be installed on
controllers that are running a compatible version of ONTAP.
When you are running SP version 1.5, 2.5, 3.1, or later, the following firmware upgrade behaviors
apply:
• An SP firmware update that is automatically triggered by ONTAP defaults to using the
network interface for the update; however, the SP automatic update switches to using the serial
interface for the firmware update if one of the following conditions occurs:
◦ The SP network interface is not configured or not available.
◦ The IP-based file transfer fails.
◦ The SP API service is disabled.
The system timeout commands manage the settings for the automatic timeout period of CLI
sessions, including console and SSH connections to the SP.
Related concepts
Managing the automatic timeout period of CLI sessions on page 26
The timeout value specifies how long a CLI session remains idle before being automatically
terminated. The CLI timeout value is cluster-wide. That is, every node in a cluster uses the same
CLI timeout value.
To access the SP, your user account must have been created with the -application
parameter of the security login create command set to service-processor and the -
authmethod parameter set to password.
If the SP is configured to use an IPv4 or IPv6 address, and if five SSH login attempts from a host
fail consecutively within 10 minutes, the SP rejects SSH login requests and suspends the
communication with the IP address of the host for 15 minutes. The communication resumes after
15 minutes, and you can try to log in to the SP again.
ONTAP prevents you from creating or using system-reserved names (such as "root" and "naroot")
to access the cluster or the SP.
Steps
1. From the administration host, log in to the SP:
ssh username@SP_IP_address
2. When you are prompted, enter the password for username.
The SP prompt appears, indicating that you have access to the SP CLI.
The following examples show how to use the IPv6 global address or IPv6 router-advertised
address to log in to the SP on a node that has SSH set up for IPv6 and the SP configured for IPv6.
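As illustrative sketches, the account name and addresses below are placeholders; an IPv6
literal can be given directly to ssh:

ssh joe@192.168.123.98
ssh joe@fd22:8b1e:b255:204::1234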
Related concepts
Relationship among the SP CLI, SP console, and system console sessions on page 59
You can open an SP CLI session to manage a node remotely and open a separate SP console
session to access the console of the node. The SP console session mirrors output displayed in a
concurrent system console session. The SP and the system console have independent shell
environments with independent login authentication.
Related tasks
Accessing the system console of a node on page 39
If a node is hanging at the boot menu or the boot environment prompt, you can access it only
through the system console (also called the serial console). You can access the system console of a
node from an SSH connection to the node's SP or to the cluster.
existing SP CLI session is terminated, enabling you to return from the SP console to the SP
CLI. This action is recorded in the SP event log.
In an ONTAP CLI session that is connected through SSH, you can switch to the system
console of a node by running the ONTAP system node run-console command from
another node.
• For security reasons, the SP CLI session and the system console session have independent
login authentication.
When you initiate an SP console session from the SP CLI (by using the SP system console
command), you are prompted for the system console credential. When you access the SP CLI
from a system console session (by pressing Ctrl-G), you are prompted for the SP CLI
credential.
• The SP console session and the system console session have independent shell environments.
The SP console session mirrors output that is displayed in a concurrent system console session.
However, the concurrent system console session does not mirror the SP console session.
The SP console session does not mirror output of concurrent SSH sessions.
Warning: The default "allow all" setting (0.0.0.0/0, ::/0) will be replaced
with your changes. Do you want to continue? {y|n}: y
Warning: If all IP addresses are removed from the allowed address list, all IP
addresses will be denied access. To restore the "allow all" default,
use the "system service-processor ssh add-allowed-addresses
-allowed-addresses 0.0.0.0/0, ::/0" command. Do you want to continue?
{y|n}: y
SP> help
date - print date and time
exit - exit from the SP command line interface
events - print system events and event information
help - print command help
priv - show and set user mode
sp - commands to control the SP
system - commands to control the system
version - print SP version
BMC> system
Usage: system cmd [option]
The following example shows the BMC CLI online help for the BMC system power
command.
If you want to...                        Use this SP command...           Or this ONTAP command...
Display available SP commands or         help [command]
subcommands of a specified SP
command
Display the current privilege level      priv show
for the SP CLI
Set the privilege level to access the    priv set {admin | advanced |
specified mode for the SP CLI            diag}
Display the power status for the         system power status              system node power show
controller of a node
Display battery information              system battery show
List all system FRUs and their IDs       system fru list
Turn the node on or off, or perform      system power on                  system node power on
a power-cycle (turning the power off     system power off                 (advanced privilege level)
and then back on)
Create a core dump and reset the         system core [-f]                 system node coredump trigger
node                                     The -f option forces the         (advanced privilege level)
                                         creation of a core dump and
                                         the reset of the node.
These commands have the same effect as pressing the Non-maskable Interrupt
(NMI) button on a node, causing a dirty shutdown of the node and forcing a dump
of the core files when halting the node. These commands are helpful when ONTAP
on the node is hung or does not respond to commands such as system node
shutdown. The generated core dump files are displayed in the output of the
system node coredump show command. The SP stays operational as long as the
input power to the node is not interrupted.
Reboot the node with an optionally       system reset {primary |          system node reset with the
specified BIOS firmware image            backup | current}                -firmware {primary | backup |
(primary, backup, or current) to                                          current} parameter
recover from issues such as a                                             (advanced privilege level)
corrupted image of the node's boot
device
Attention: This operation causes a dirty shutdown of the node.
If no BIOS firmware image is specified, the current image is used for the
reboot. The SP stays operational as long as the input power to the node is not
interrupted.
Display the status of battery            system battery auto_update
firmware automatic update, or            [status | enable | disable]
enable or disable battery firmware       (advanced privilege level)
automatic update upon next SP boot
Compare the current battery              system battery verify
firmware image against a specified       [image_URL]
firmware image                           (advanced privilege level)
If image_URL is not specified, the default battery firmware image is used for
comparison.
Update the battery firmware from the system battery flash
image at the specified location image_URL
(advanced privilege level)
You use this command if the
automatic battery firmware upgrade
process has failed for some reason.
Update the SP firmware by using the sp update image_URL system service-processor
image at the specified location image_URL must not exceed 200 image update
characters.
Reboot the SP sp reboot system service-processor
reboot-sp
Understanding the threshold-based SP sensor readings and status values of the system sensors command output
Threshold-based sensors take periodic readings of a variety of system components. The SP
compares the reading of a threshold-based sensor against its preset threshold limits that define a
component’s acceptable operating conditions.
Based on the sensor reading, the SP displays the sensor state to help you monitor the condition of
the component.
Examples of threshold-based sensors include sensors for the system temperatures, voltages,
currents, and fan speeds. The specific list of threshold-based sensors depends on the platform.
Threshold-based sensors have the following thresholds, displayed in the output of the SP system sensors command: lower critical (LCR), lower noncritical (LNC), upper noncritical (UNC), and upper critical (UCR).
A sensor reading between LNC and LCR or between UNC and UCR means that the component is showing signs of a problem and a system failure might occur as a result. Therefore, you should plan for component service soon.
A sensor reading below LCR or above UCR means that the component is malfunctioning and a system failure is about to occur. Therefore, the component requires immediate attention.
The thresholds define the following severity ranges: a reading between LNC and UNC is normal, a reading between LCR and LNC or between UNC and UCR is noncritical, and a reading below LCR or above UCR is critical.
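The classification above can be sketched as a small Python helper. This is illustrative only; the SP performs this classification internally, and the sample threshold values are taken from the sensor output shown later in this section.

```python
def classify_reading(reading, lnc, lcr, unc, ucr):
    """Classify a threshold-based sensor reading as the SP does.

    Pass None for thresholds reported as 'na'; the SP does not monitor
    a sensor for a missing threshold. Illustrative helper, not ONTAP code.
    """
    # Below LCR or above UCR: the component is malfunctioning (critical).
    if (lcr is not None and reading < lcr) or (ucr is not None and reading > ucr):
        return "cr"
    # Between LNC and LCR, or between UNC and UCR: plan service soon (noncritical).
    if (lnc is not None and reading < lnc) or (unc is not None and reading > unc):
        return "nc"
    return "ok"

# Thresholds from the sample sensor output in this section
# (LNC=4.490, LCR=4.246, UNC=5.490, UCR=5.758):
print(classify_reading(5.002, 4.490, 4.246, 5.490, 5.758))  # -> ok
print(classify_reading(5.600, 4.490, 4.246, 5.490, 5.758))  # -> nc
```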
You can find the reading of a threshold-based sensor under the Current column in the system
sensors command output. The system sensors get sensor_name command displays
additional details for the specified sensor. As the reading of a threshold-based sensor crosses the
noncritical and critical threshold ranges, the sensor reports a problem of increasing severity. When
the reading exceeds a threshold limit, the sensor's status in the system sensors command
output changes from ok to nc (noncritical) or cr (critical) depending on the exceeded threshold,
and an event message is logged in the SEL event log.
Some threshold-based sensors do not have all four threshold levels. For those sensors, the missing
thresholds show na as their limits in the system sensors command output, indicating that the
particular sensor has no limit or severity concern for the given threshold and the SP does not
monitor the sensor for that threshold.
For example, the system sensors get command output for a voltage sensor includes threshold details similar to the following:
Status : ok
Lower Non-Recoverable : na
Lower Critical : 4.246
Lower Non-Critical : 4.490
Upper Non-Critical : 5.490
Upper Critical : 5.758
Upper Non-Recoverable : na
Assertion Events :
Assertions Enabled : lnc- lcr- ucr+
Deassertions Enabled : lnc- lcr- ucr+
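Output in this "Key : Value" form is straightforward to post-process. The following minimal parser is a sketch based on the field names in the sample above; real output may contain additional fields.

```python
def parse_sensor_detail(text):
    """Parse 'Key : Value' lines from `system sensors get` output into a dict.

    Limits reported as 'na' become None. Illustrative parser only.
    """
    fields = {}
    for line in text.splitlines():
        key, sep, value = line.partition(":")
        if not sep:
            continue  # skip lines without a 'Key : Value' separator
        fields[key.strip()] = None if value.strip() == "na" else value.strip()
    return fields

sample = """Status : ok
Lower Critical : 4.246
Upper Non-Recoverable : na"""
info = parse_sensor_detail(sample)
print(info["Status"])  # -> ok
```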
Understanding the discrete SP sensor status values of the system sensors command output
Discrete sensors do not have thresholds. Their readings, displayed under the Current column in
the SP CLI system sensors command output, do not carry actual meanings and thus are
ignored by the SP. The Status column in the system sensors command output displays the
status values of discrete sensors in hexadecimal format.
Examples of discrete sensors include sensors for the fan, power supply unit (PSU) fault, and
system fault. The specific list of discrete sensors depends on the platform.
You can use the SP CLI system sensors get sensor_name command for help with
interpreting the status values for most discrete sensors. The following examples show the results of
entering system sensors get sensor_name for the discrete sensors CPU0_Error and
IO_Slot1_Present:
Although the system sensors get sensor_name command displays the status information for
most discrete sensors, it does not provide status information for the System_FW_Status,
System_Watchdog, PSU1_Input_Type, and PSU2_Input_Type discrete sensors. You can use the
following information to interpret these sensors' status values.
System_FW_Status
The System_FW_Status sensor's condition appears in the form of 0xAABB. You can combine the
information of AA and BB to determine the condition of the sensor.
AA can have one of the following values:
Values Condition of the sensor
01 System firmware error
02 System firmware hang
04 System firmware progress
For instance, the System_FW_Status sensor status 0x042F means "system firmware progress (04),
ONTAP is running (2F)."
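The 0xAABB split can be expressed directly in code. The AA mapping below comes from the table above; BB codes are left as raw values, since only 0x2F ("ONTAP is running") is given in this section.

```python
def decode_system_fw_status(status):
    """Split a System_FW_Status value of the form 0xAABB into its two bytes.

    Only the AA meanings documented in this section are mapped; BB codes
    are returned unmapped. Illustrative helper, not ONTAP code.
    """
    aa, bb = (status >> 8) & 0xFF, status & 0xFF
    aa_meanings = {
        0x01: "System firmware error",
        0x02: "System firmware hang",
        0x04: "System firmware progress",
    }
    return aa_meanings.get(aa, "unknown"), bb

meaning, bb = decode_system_fw_status(0x042F)
print(meaning)   # -> System firmware progress
print(hex(bb))   # -> 0x2f
```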
System_Watchdog
The System_Watchdog sensor can have one of the following conditions:
0x0080: The state of this sensor has not changed.
For instance, the System_Watchdog sensor status 0x0880 means that a watchdog timeout occurred and caused a system power cycle.
PSU1_Input_Type and PSU2_Input_Type
For direct current (DC) power supplies, the PSU1_Input_Type and PSU2_Input_Type sensors do
not apply. For alternating current (AC) power supplies, the sensors' status can have one of the
following values:
For instance, the PSU1_Input_Type sensor status 0x0280 means that the sensor reports that the
PSU type is 110V.
Manually configure the SP network for a node, including the following:
• The IP address family (IPv4 or IPv6)
• Whether the network interface of the specified IP address family should be enabled
• If you are using IPv4, whether to use the network configuration from the DHCP server or the network address that you specify
• The public IP address for the SP
• The netmask for the SP (if using IPv4)
• The network prefix-length of the subnet mask for the SP (if using IPv6)
• The gateway IP address for the SP
Command: system service-processor network modify
Modify the SP API service configuration, including the following:
• Changing the port used by the SP API service
• Enabling or disabling the SP API service
Command: system service-processor api-service modify (advanced privilege level)

Enable or disable the SP automatic firmware update
Command: system service-processor image modify
By default, the SP firmware is automatically updated with the update of ONTAP or when a new version of the SP firmware is manually downloaded. Disabling the automatic update is not recommended because doing so can result in suboptimal or nonqualified combinations between the ONTAP image and the SP firmware image.
Display the status for the latest SP firmware update triggered from ONTAP, including the following information:
• The start and end time for the latest SP firmware update
• Whether an update is in progress and the percentage that is complete
Command: system service-processor image update-progress show

Block the specified IP addresses from accessing the SP
Command: system service-processor ssh remove-allowed-addresses

Display the IP addresses that can access the SP
Command: system service-processor ssh show
Display SP information for a node, including the following:
• The remote management device type
• The current SP status
• Whether the SP network is configured
• Network information, such as the public IP address and the MAC address
• The SP firmware version and Intelligent Platform Management Interface (IPMI) version
• Whether the SP firmware automatic update is enabled
Command: system service-processor show
Displaying complete SP information requires the -instance parameter.
Reboot the SP on a node and optionally specify the SP firmware image (primary or backup) to use
Command: system service-processor reboot-sp
Attention: You should avoid booting the SP from the backup image. Booting from the backup image is reserved for troubleshooting and recovery purposes only. It might require that the SP automatic firmware update be disabled, which is not a recommended setting. You should contact Technical Support before attempting to boot the SP from the backup image.

Generate and send an AutoSupport message that includes the SP log files collected from a specified node
Command: system node autosupport invoke-splog

Display the allocation map of the collected SP log files in the cluster, including the sequence numbers for the SP log files that reside in each collecting node
Command: system service-processor log show-allocations
Related information
ONTAP 9 commands
Display or modify the details of the currently installed BMC firmware image
Command: system service-processor image show/modify

Update the BMC firmware
Command: system service-processor image update
Enable the automatic network configuration for the BMC to use an IPv4 or IPv6 address on the specified subnet
Command: system service-processor network auto-configuration enable

Disable the automatic network configuration for an IPv4 or IPv6 address on the subnet specified for the BMC
Command: system service-processor network auto-configuration disable

Display the BMC automatic network configuration
Command: system service-processor network auto-configuration show
For commands that are not supported by the BMC firmware, the following error message is
returned.
Command Function
system Display a list of all commands.
system console Connect to the system's console. Use ESC + t to exit the session.
system core Dump the system core and reset.
system power cycle Power the system off, then on.
system power off Power the system off.
system power on Power the system on.
system power status Print system power status.
system reset Reset the system.
system fw ver View all firmware versions that the BMC can read.
system fw upgrade Update the BMC firmware.
system log console Dump host console logs.
system log sel Dump system event (SEL) logs.
system fru show [id] Dump all/selected field replaceable unit (FRU) info.
Managing audit logging for management activities
The audit.log file is sent by the AutoSupport tool to the specified recipients. You can also
forward the content securely to external destinations that you specify; for example, a Splunk or a
syslog server.
The audit.log file is rotated daily. Rotation also occurs when the file reaches 100 MB in size, and the previous 48 copies are preserved (for a maximum total of 49 files). When the audit.log file rotates on its daily schedule, no EMS message is generated. If it rotates because its size limit is exceeded, an EMS message is generated.
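The retention arithmetic (the current file plus its 48 previous copies, 49 files in total) can be sketched as follows. This is an illustration of the policy, not ONTAP's implementation, and the file names are hypothetical.

```python
def prune_rotated(files, max_total=49):
    """Return the newest `max_total` audit files, mirroring the policy of
    keeping the current audit.log plus its previous 48 copies.

    Assumes file names sort lexicographically by age (illustrative only).
    """
    newest_first = sorted(files, reverse=True)
    return sorted(newest_first[:max_total])

names = ["audit.log.%03d" % i for i in range(60)]  # hypothetical rotated names
print(len(prune_rotated(names)))  # -> 49
```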
You can use the security audit log show command to display audit entries for individual nodes or merged from multiple nodes in the cluster. You can also display the content of the /mroot/etc/log/mlog directory on a single node by using a web browser.
Related tasks
Accessing a node's log, core dump, and MIB files by using a web browser on page 38
The Service Processor Infrastructure (spi) web service is enabled by default to enable a web
browser to access the log, core dump, and MIB files of a node in the cluster. The files remain
accessible even when the node is down, provided that the node is taken over by its partner.
Verify Syslog
Display audit entries merged from multiple nodes in the cluster
Command: security audit log show

Specify a forwarding destination for the audit log and security measures for its transmission
Command: cluster log-forwarding create

Modify a destination for the audit log
Command: cluster log-forwarding modify

Show the configured destinations for the audit log
Command: cluster log-forwarding show
Related information
ONTAP 9 commands
Managing the cluster time (cluster administrators only)
◦ For redundancy and quality of time service, you should associate at least three external
NTP servers with the cluster.
◦ You can specify an NTP server by using its IPv4 or IPv6 address or fully qualified host
name.
◦ You can manually specify the NTP version (v3 or v4) to use.
By default, ONTAP automatically selects the NTP version that is supported for a given
external NTP server.
If the NTP version you specify is not supported for the NTP server, time exchange cannot
take place.
◦ At the advanced privilege level, you can specify an external NTP server that is associated
with the cluster to be the primary time source for correcting and adjusting the cluster time.
• You can display the NTP servers that are associated with the cluster (cluster time-
service ntp server show).
• You can modify the cluster's NTP configuration (cluster time-service ntp server
modify).
• You can disassociate the cluster from an external NTP server (cluster time-service ntp
server delete).
• At the advanced privilege level, you can reset the configuration by clearing all external NTP
servers' association with the cluster (cluster time-service ntp server reset).
A node that joins a cluster automatically adopts the NTP configuration of the cluster.
In addition to using NTP, ONTAP also enables you to manually manage the cluster time. This
capability is helpful when you need to correct erroneous time (for example, a node's time has
become significantly incorrect after a reboot). In that case, you can specify an approximate time
for the cluster until NTP can synchronize with an external time server. The time you manually set
takes effect across all nodes in the cluster.
You can manually manage the cluster time in the following ways:
• You can set or modify the time zone, date, and time on the cluster (cluster date modify).
• You can display the current time zone, date, and time settings of the cluster (cluster date
show).
Note: Job schedules do not adjust to manual cluster date and time changes. These jobs are
scheduled to run based on the current cluster time when the job was created or when the job
most recently ran. Therefore, if you manually change the cluster date or time, you must use the
job show and job history show commands to verify that all scheduled jobs are queued
and completed according to your requirements.
Network Time Protocol (NTP) Support
Associate the cluster with an external NTP server with symmetric authentication (available in ONTAP 9.5 or later)
Command: cluster time-service ntp server create -server server_ip_address -key-id key_id
Note: The key_id must refer to an existing shared key configured with 'cluster time-service ntp key'.

Enable symmetric authentication for an existing NTP server (available in ONTAP 9.5 or later)
Command: cluster time-service ntp server modify -server server_name -key-id key_id
An existing NTP server can be modified to enable authentication by adding the required key-id.

Disable symmetric authentication
Command: cluster time-service ntp server modify -server server_name -authentication disabled

Configure a shared NTP key
Command: cluster time-service ntp key create -id shared_key_id -type shared_key_type -value shared_key_value

Display information about the NTP servers that are associated with the cluster
Command: cluster time-service ntp server show

Modify the configuration of an external NTP server that is associated with the cluster
Command: cluster time-service ntp server modify

Dissociate an NTP server from the cluster
Command: cluster time-service ntp server delete

Reset the configuration by clearing all external NTP servers' association with the cluster
Command: cluster time-service ntp server reset
Note: This command requires the advanced privilege level.
The following commands enable you to manage the cluster time manually:
Set or modify the time zone, date, and time
Command: cluster date modify

Display the time zone, date, and time settings for the cluster
Command: cluster date show
Related information
ONTAP 9 commands
Managing the banner and MOTD
This system is for authorized users only. Your IP Address has been logged.
Password:
cluster1::> _
An MOTD is displayed in a console session (for cluster access only) or an SSH session (for cluster
or SVM access) after a user is authenticated but before the clustershell prompt appears. For
example, you can use the MOTD to display a welcome or informational message such as the
following that only authenticated users will see:
$ ssh admin@cluster1-01
Password:
cluster1::> _
You can create or modify the content of the banner or MOTD by using the security login
banner modify or security login motd modify command, respectively, in the following
ways:
• You can use the CLI interactively or noninteractively to specify the text to use for the banner or
MOTD.
The interactive mode, launched when the command is used without the -message or -uri
parameter, enables you to use newlines (also known as end of lines) in the message.
The noninteractive mode, which uses the -message parameter to specify the message string,
does not support newlines.
• You can upload content from an FTP or HTTP location to use for the banner or MOTD.
• You can configure the MOTD to display dynamic content.
Examples of what you can configure the MOTD to display dynamically include the following:
◦ Cluster name, node name, or SVM name
◦ Cluster date and time
◦ Name of the user logging in
◦ Last login for the user on any node in the cluster
◦ Login device name or IP address
◦ Operating system name
◦ Software release version
◦ Effective cluster version string
The security login motd modify man page describes the escape sequences that you can
use to enable the MOTD to display dynamically generated content.
The banner does not support dynamic content.
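As a rough picture of how such escape sequences expand, the sketch below substitutes two of them: \N for the name of the user logging in and \L for the last login, as used in the example later in this section. The mapping here is assumed from that example; the security login motd modify man page documents the full list.

```python
def render_motd(template, values):
    """Expand MOTD-style escape sequences by plain substitution.

    Illustrative only: ONTAP performs this expansion itself, and the
    escape-to-value mapping here is assumed from the example in this section.
    """
    for escape, value in values.items():
        template = template.replace(escape, value)
    return template

motd = render_motd("Your user ID is '\\N'. Your last successful login was \\L.",
                   {"\\N": "admin", "\\L": "7/1 10:00:00"})
print(motd)  # -> Your user ID is 'admin'. Your last successful login was 7/1 10:00:00.
```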
You can manage the banner and MOTD at the cluster or SVM level:
Creating a banner
You can create a banner to display a message to someone who attempts to access the cluster or
SVM. The banner is displayed in a console session (for cluster access only) or an SSH session (for
cluster or SVM access) before a user is prompted for authentication.
Steps
1. Use the security login banner modify command to create a banner for the cluster or
SVM:
cluster1::>
The following example uses the interactive mode to create a banner for the "svm1" SVM:
cluster1::> security login banner modify -vserver svm1
cluster1::>
The following example displays the banners that have been created:
cluster1::> security login banner show
Vserver: cluster1
Message
-----------------------------------------------------------------------------
Authorized users only!
Vserver: svm1
Message
-----------------------------------------------------------------------------
The svm1 SVM is reserved for authorized users only!
cluster1::>
Related tasks
Managing the banner on page 81
You can manage the banner at the cluster or SVM level. The banner configured for the cluster is
also used for all SVMs that do not have a banner message defined. A subsequently created banner
for an SVM overrides the cluster banner for that SVM.
Remove the banner for all (cluster and SVM) logins
Set the banner to an empty string (""):
security login banner modify -vserver * -message ""

Suppress the banner supplied by the cluster administrator so that no banner is displayed for the SVM
Set the SVM banner to an empty string for the SVM:
security login banner modify -vserver svm_name -message ""

Use the cluster-level banner when the SVM currently uses an SVM-level banner
Set the SVM banner to "-":
security login banner modify -vserver svm_name -message "-"
Related tasks
Creating a banner on page 80
You can create a banner to display a message to someone who attempts to access the cluster or
SVM. The banner is displayed in a console session (for cluster access only) or an SSH session (for
cluster or SVM access) before a user is prompted for authentication.
Creating an MOTD
You can create a message of the day (MOTD) to communicate information to authenticated CLI
users. The MOTD is displayed in a console session (for cluster access only) or an SSH session (for
cluster or SVM access) after a user is authenticated but before the clustershell prompt appears.
Steps
1. Use the security login motd modify command to create an MOTD for the cluster or
SVM:
Specifying the -message parameter with an empty string ("") displays MOTDs that are not
configured or have no content.
cluster1::>
The following example uses the interactive mode to create an MOTD for the "svm1" SVM that
uses escape sequences to display dynamically generated content:
cluster1::> security login motd modify -vserver svm1
cluster1::>
The following example displays the MOTDs that have been created:
cluster1::> security login motd show
Vserver: cluster1
Is the Cluster MOTD Displayed?: true
Message
-----------------------------------------------------------------------------
Greetings!
Vserver: svm1
Is the Cluster MOTD Displayed?: true
Message
-----------------------------------------------------------------------------
Welcome to the \n SVM. Your user ID is '\N'. Your last successful login was \L.
cluster1::>
Related tasks
Managing the MOTD on page 83
You can manage the message of the day (MOTD) at the cluster or SVM level. By default, the
MOTD configured for the cluster is also enabled for all SVMs. Additionally, an SVM-level
MOTD can be configured for each SVM. The cluster-level MOTD can be enabled or disabled for
each SVM by the cluster administrator.
Remove the MOTD for all logins when no SVM-level MOTDs are configured
Set the cluster-level MOTD to an empty string (""):
security login motd modify -vserver cluster_name -message ""

Have every SVM display the cluster-level MOTD instead of using the SVM-level MOTD
Set a cluster-level MOTD, then set all SVM-level MOTDs to an empty string with the cluster-level MOTD enabled:
1. security login motd modify -vserver cluster_name { [-message "text"] | [-uri ftp_or_http_addr] }
2. security login motd modify { -vserver !"cluster_name" } -message "" -is-cluster-message-enabled true

Have an MOTD displayed for only selected SVMs, and use no cluster-level MOTD
Set the cluster-level MOTD to an empty string, then set SVM-level MOTDs for selected SVMs:
1. security login motd modify -vserver cluster_name -message ""
2. security login motd modify -vserver svm_name { [-message "text"] | [-uri ftp_or_http_addr] }
You can repeat this step for each SVM as needed.

Use the same SVM-level MOTD for all (data and admin) SVMs
Set the cluster and all SVMs to use the same MOTD:
security login motd modify -vserver * { [-message "text"] | [-uri ftp_or_http_addr] }
Note: If you use the interactive mode, the CLI prompts you to enter the MOTD individually for the cluster and each SVM. You can paste the same MOTD into each instance when you are prompted to.

Have a cluster-level MOTD optionally available to all SVMs, but do not want the MOTD displayed for cluster logins
Set a cluster-level MOTD, but disable its display for the cluster:
security login motd modify -vserver cluster_name { [-message "text"] | [-uri ftp_or_http_addr] } -is-cluster-message-enabled false

Remove all MOTDs at the cluster and SVM levels when only some SVMs have both cluster-level and SVM-level MOTDs
Set the cluster and all SVMs to use an empty string for the MOTD:
security login motd modify -vserver * -message ""

Modify the MOTD only for the SVMs that have a non-empty string, when other SVMs use an empty string, and when a different MOTD is used at the cluster level
Use extended queries to modify the MOTD selectively:
security login motd modify { -vserver !"cluster_name" -message !"" } { [-message "text"] | [-uri ftp_or_http_addr] }
Not have the SVM display any MOTD, when both the cluster-level and SVM-level MOTDs are currently displayed for the SVM
Set the SVM-level MOTD to an empty string, then have the cluster administrator disable the cluster-level MOTD for the SVM:
1. security login motd modify -vserver svm_name -message ""
2. (For the cluster administrator) security login motd modify -vserver svm_name -is-cluster-message-enabled false
Related tasks
Creating an MOTD on page 82
You can create a message of the day (MOTD) to communicate information to authenticated CLI
users. The MOTD is displayed in a console session (for cluster access only) or an SSH session (for
cluster or SVM access) after a user is authenticated but before the clustershell prompt appears.
Managing licenses (cluster administrators only)
If the evaluation process determines that the cluster has a license entitlement risk, the
command output also suggests a corrective action.
Related information
What are Data ONTAP 8.2 and 8.3 licensing overview and references?
How to verify Data ONTAP Software Entitlements and related License Keys using the Support
Site
FAQ: Licensing updates in Data ONTAP 9.2
NetApp: Data ONTAP Entitlement Risk Status
• If a package has only one license type installed in the cluster, the installed license type is the
licensed method.
• If a package does not have any licenses installed in the cluster, the licensed method is none.
• If a package has multiple license types installed in the cluster, the licensed method is determined in the following priority order of the license type: site, license, and demo.
For example:
◦ If you have a site license, a standard license, and an evaluation license for a package, the
licensed method for the package in the cluster is site.
◦ If you have a standard license and an evaluation license for a package, the licensed method
for the package in the cluster is license.
◦ If you have only an evaluation license for a package, the licensed method for the package in
the cluster is demo.
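The priority order can be stated compactly in code. This is an illustration of the rule described above, not how ONTAP implements it:

```python
def licensed_method(installed_types):
    """Return the licensed method for a package given the set of installed
    license types, using the priority site > license > demo."""
    for method in ("site", "license", "demo"):
        if method in installed_types:
            return method
    return "none"  # no license of any type is installed

print(licensed_method({"site", "license", "demo"}))  # -> site
print(licensed_method({"license", "demo"}))          # -> license
print(licensed_method(set()))                        # -> none
```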
Display all packages that require licenses and their current license status, including the following:
• The package name
• The licensed method
• The expiration date, if applicable
Command: system license status show

Delete the license of a package from the cluster or a node whose serial number you specify
Command: system license delete

Display or remove expired or unused licenses
Command: system license clean-up

Display summary of feature usage in the cluster on a per-node basis
Command: system feature-usage show-summary

Display feature usage status in the cluster on a per-node and per-week basis
Command: system feature-usage show-history

Display the status of license entitlement risk for each license package
Command: system license entitlement-risk show
Note: Some information is displayed only when you use the -detail and -instance parameters.
Related information
ONTAP 9 commands
Managing jobs and schedules
Job categories
There are three categories of jobs that you can manage: server-affiliated, cluster-affiliated, and
private.
A job can be in any of the following categories:
Server-Affiliated jobs
These jobs are queued by the management framework to a specific node to be run.
Cluster-Affiliated jobs
These jobs are queued by the management framework to any node in the cluster to be run.
Private jobs
These jobs are specific to a node and do not use the replicated database (RDB) or any other
cluster mechanism. The commands that manage private jobs require the advanced privilege
level or higher.
Note: You can use the event log show command to determine the outcome of a completed
job.
Related information
ONTAP 9 commands
Job schedules do not adjust to manual changes to the cluster date and time. These jobs are
scheduled to run based on the current cluster time when the job was created or when the job most
recently ran. Therefore, if you manually change the cluster date or time, you should use the job
show and job history show commands to verify that all scheduled jobs are queued and
completed according to your requirements.
If the cluster is part of a MetroCluster configuration, then the job schedules on both clusters must
be identical. Therefore, if you create, modify, or delete a job schedule, you must perform the same
operation on the remote cluster.
For single-node clusters (including Data ONTAP Edge systems), you can specify the configuration
backup destination during software setup. After setup, those settings can be modified using
ONTAP commands.
Set the password to be used to log in to the remote URL
Command: system configuration backup settings set-password

View the settings for the configuration backup schedule
Command: system configuration backup settings show

Copy a configuration backup file from a node to another node in the cluster
Command: system configuration backup copy
Download a configuration backup file from a remote URL to a node in the cluster, and, if specified, validate the digital certificate
Command: system configuration backup download
When you use HTTPS in the remote URL, use the -validate-certification option to enable or disable digital certificate validation. Certificate validation is disabled by default.

Rename a configuration backup file on a node in the cluster
Command: system configuration backup rename

View the node and cluster configuration backup files for one or more nodes in the cluster
Command: system configuration backup show

Delete a configuration backup file on a node
Command: system configuration backup delete
If you previously re-created the cluster, you should choose a configuration backup file that was
created after the cluster recreation. If you must use a configuration backup file that was created
prior to the cluster recreation, then after recovering the node, you must re-create the cluster again.
This example restores the node's configuration using one of the configuration backup files
stored on the node:
4. If you are operating in a SAN environment, use the system node reboot command to
reboot the node and reestablish SAN quorum.
After you finish
If you previously re-created the cluster, and if you are restoring the node configuration by using a
configuration backup file that was created prior to that cluster re-creation, then you must re-create
the cluster again.
Related tasks
Synchronizing a node with the cluster on page 100
If cluster-wide quorum exists, but one or more nodes are out of sync with the cluster, then you
must synchronize the node to restore the replicated database (RDB) on the node and bring it into
quorum.
Steps
1. Disable storage failover for each HA pair:
storage failover modify -node node_name -enabled false
You only need to disable storage failover once for each HA pair. When you disable storage
failover for a node, storage failover is also disabled on the node's partner.
2. Halt each node except for the recovering node:
system node halt -node node_name -reason "text"
Warning: Are you sure you want to halt the node? {y|n}: y
3. Set the privilege level to advanced:
set -privilege advanced
4. On the recovering node, use the system configuration recovery cluster
recreate command to re-create the cluster.
This example re-creates the cluster using the configuration information stored on the
recovering node:
5. If you are re-creating the cluster from a configuration backup file, verify that the cluster
recovery is still in progress:
system configuration recovery cluster show
You do not need to verify the cluster recovery state if you are re-creating the cluster from a
healthy node.
6. Rejoin each remaining node to the re-created cluster:
system configuration recovery cluster rejoin -node node_name
This example shows the warning displayed when rejoining "node2":
Warning: This command will rejoin node "node2" into the local
cluster, potentially overwriting critical cluster
configuration files. This command should only be used
to recover from a disaster. Do not perform any other
recovery operations while this operation is in progress.
This command will cause node "node2" to reboot.
Do you want to continue? {y|n}: y
7. If a node is out of sync with the cluster configuration, synchronize it:
system configuration recovery cluster sync -node node_name
Warning: This command will synchronize node "node2" with the cluster
configuration, potentially overwriting critical cluster
configuration files on the node. This feature should only be
used to recover from a disaster. Do not perform any other
recovery operations while this operation is in progress. This
command will cause all the cluster applications on node
"node2" to restart, interrupting administrative CLI and Web
interface on that node.
Do you want to continue? {y|n}: y
All cluster applications on node "node2" will be restarted. Verify that the cluster
applications go online.
Result
The RDB is replicated to the node, and the node becomes eligible to participate in the cluster.
System Administration Reference 101
Managing core dumps (cluster administrators only)
Display the configuration settings for core dumps: system node coredump config show
Display basic information about core dumps: system node coredump show
Manually trigger a core dump when you reboot a node: system node reboot with both the
-dump and -skip-lif-migration parameters
Manually trigger a core dump when you shut down a node: system node halt with both the
-dump and -skip-lif-migration parameters
Save all unsaved core dumps that are on a specified node: system node coredump save-all
Display status information about core dumps: system node coredump status
Delete all unsaved core dumps or all saved core files on a node: system node coredump
delete-all
Display application core dump reports: system node coredump reports show
Delete an application core dump report: system node coredump reports delete
Related information
ONTAP 9 commands
Monitoring a storage system
Managing AutoSupport
AutoSupport is a mechanism that proactively monitors the health of your system and automatically
sends messages to NetApp technical support, your internal support organization, and a support
partner. Although AutoSupport messages to technical support are enabled by default, you must set
the correct options and have a valid mail host to have messages sent to your internal support
organization.
Only the cluster administrator can perform AutoSupport management. The storage virtual machine
(SVM) administrator has no access to AutoSupport.
AutoSupport is enabled by default when you configure your storage system for the first time.
AutoSupport begins sending messages to technical support 24 hours after AutoSupport is enabled.
You can shorten the 24-hour period by upgrading or reverting the system, modifying the
AutoSupport configuration, or changing the system time.
Note: You can disable AutoSupport at any time, but you should leave it enabled. Enabling
AutoSupport can significantly help speed problem determination and resolution should a
problem occur on your storage system. By default, the system collects AutoSupport information
and stores it locally, even if you disable AutoSupport.
For more information about AutoSupport, see the NetApp Support Site.
Related information
The NetApp Support Site: mysupport.netapp.com
Event-triggered messages
When events occur on the system that require corrective action, AutoSupport automatically sends
an event-triggered message.
Scheduled messages
AutoSupport automatically sends several messages on a regular schedule.
When you manually initiate a message using the system node autosupport
invoke-performance-archive command:
If a URI is specified using the -uri parameter, the message is sent to that URI, and the
performance archive file is uploaded to the URI. If -uri is omitted, the message is sent
to technical support, and the performance archive file is uploaded to the technical
support site. Both scenarios require that -support is set to enable and -transport is set
to https or http. Due to the large size of performance archive files, the message is not
sent to the addresses specified in the -to and -partner-addresses parameters.
When you manually resend a past message using the system node autosupport history
retransmit command:
The message is sent only to the URI that you specify in the -uri parameter of the command.
When AutoSupport obtains delivery instructions to resend past AutoSupport messages:
The message is sent to technical support, if -support is set to enable and -transport is
set to https.
When AutoSupport obtains delivery instructions to generate new AutoSupport messages that
upload core dump or performance archive files:
The message is sent to technical support, if -support is set to enable and -transport is
set to https. The core dump or performance archive file is uploaded to the technical
support site.
Related concepts
How AutoSupport OnDemand obtains delivery instructions from technical support on page 109
AutoSupport OnDemand periodically communicates with technical support to obtain delivery
instructions for sending, resending, and declining AutoSupport messages as well as uploading
large files to the NetApp support site. AutoSupport OnDemand enables AutoSupport messages to
be sent on-demand instead of waiting for the weekly AutoSupport job to run.
Note that the callhome. prefix is dropped from the callhome.shlf.ps.fault event when
you use the system node autosupport trigger commands, or when referenced by
AutoSupport and EMS events in the CLI.
Triggered by AutoSupport OnDemand:
AutoSupport OnDemand can request new messages or past messages:
• New messages, depending on the type of AutoSupport collection, can be test, all, or
performance.
• Past messages depend on the type of message that is resent.
AutoSupport OnDemand can request new messages that upload the following files to the
NetApp Support Site at mysupport.netapp.com:
• Core dump
• Performance archive
In addition to the subsystems that are included by default for each event, you can add
other subsystems at either a basic or a troubleshooting level by using the system node
autosupport trigger modify command.
• Log files from the /mroot/etc/log/mlog/ directory and the MESSAGES log file:
Only new lines added to the logs since the last AutoSupport message, up to a specified
maximum. This ensures that AutoSupport messages have unique, relevant—not overlapping—
data. (Log files from partners are the exception; for partners, the maximum allowed data
is included.)
• Log files from the /mroot/etc/log/shelflog/ directory, log files from the
/mroot/etc/log/acp/ directory, and Event Management System (EMS) log data:
The most recent lines of data, up to a specified maximum.
If you want to have AutoSupport OnDemand disabled, but keep AutoSupport enabled and
configured to send messages to technical support by using the HTTPS transport protocol, contact
technical support.
The following illustration shows how AutoSupport OnDemand sends HTTPS requests to technical
support to obtain delivery instructions.
The delivery instructions can include requests for AutoSupport to do the following:
• Generate new AutoSupport messages.
Technical support might request new AutoSupport messages to help triage issues.
• Generate new AutoSupport messages that upload core dump files or performance archive files
to the NetApp support site.
Technical support might request core dump or performance archive files to help triage issues.
• Retransmit previously generated AutoSupport messages.
This request automatically happens if a message was not received due to a delivery failure.
• Disable delivery of AutoSupport messages for specific trigger events.
Technical support might disable delivery of data that is not used.
Subject
The subject line of messages sent by the AutoSupport mechanism contains a text string that
identifies the reason for the notification. The format of the subject line is as follows:
HA Group Notification from System_Name (Message) Severity
• System_Name is either the hostname or the system ID, depending on the AutoSupport
configuration
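The subject line format above can be parsed mechanically, for example when filtering AutoSupport mail in a script. A minimal sketch in Python; the sample subject values are hypothetical:

```python
import re

# Parse an AutoSupport subject of the form:
# "HA Group Notification from System_Name (Message) Severity"
SUBJECT_RE = re.compile(
    r"^HA Group Notification from (?P<system>\S+) "
    r"\((?P<message>.*)\) (?P<severity>\S+)$"
)

def parse_subject(subject):
    """Return (system_name, message, severity), or None if the line does not match."""
    m = SUBJECT_RE.match(subject)
    if m is None:
        return None
    return m.group("system"), m.group("message"), m.group("severity")
```

Depending on the AutoSupport configuration, the first field is either the hostname or the system ID, so treat it as an opaque string.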
Body
The body of the AutoSupport message contains the following information:
• Date and timestamp of the message
• Version of ONTAP on the node that generated the message
• System ID, serial number, and hostname of the node that generated the message
• AutoSupport sequence number
• SNMP contact name and location, if specified
• System ID and hostname of the HA partner node
Attached files
The key information in an AutoSupport message is contained in files that are compressed into a 7z
file called body.7z and attached to the message.
The files contained in the attachment are specific to the type of AutoSupport message.
If you configure AutoSupport with specific email addresses for your internal support organization,
or a support partner organization, those messages are always sent by SMTP.
For example, if you use the recommended protocol to send messages to technical support and you
also want to send messages to your internal support organization, your messages will be
transported using both HTTPS and SMTP, respectively.
AutoSupport limits the maximum file size for each protocol. The default setting for HTTP and
HTTPS transfers is 25 MB. The default setting for SMTP transfers is 5 MB. If the size of the
AutoSupport message exceeds the configured limit, AutoSupport delivers as much of the message
as possible. You can edit the maximum size by modifying AutoSupport configuration. See the
system node autosupport modify man page for more information.
Note: AutoSupport automatically overrides the maximum file size limit for the HTTPS and
HTTP protocols when you generate and send AutoSupport messages that upload core dump or
performance archive files to the NetApp support site or a specified URI. The automatic
override applies only when you upload files by using the system node autosupport
invoke-core-upload or the system node autosupport invoke-performance-archive commands.
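As a rough illustration of the per-protocol limits described above, here is a sketch in Python. The default limit values come from the text; truncating to the limit is a simplification of "delivers as much of the message as possible", not ONTAP's actual algorithm:

```python
# Default AutoSupport size limits from the text: 25 MB for HTTP/HTTPS, 5 MB for SMTP.
DEFAULT_LIMITS = {"https": 25 * 1024**2, "http": 25 * 1024**2, "smtp": 5 * 1024**2}

def fit_to_limit(payload, transport, limits=DEFAULT_LIMITS):
    """Return the payload, truncated to the configured limit for the transport.

    Simplified model only: real AutoSupport trims whole sections, not raw bytes.
    """
    limit = limits[transport.lower()]
    return payload if len(payload) <= limit else payload[:limit]
```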
Configuration requirements
Depending on your network configuration, use of HTTP or HTTPS protocols may require
additional configuration of a proxy URL. If you use HTTP or HTTPS to send AutoSupport
messages to technical support and you have a proxy, you must identify the URL for that proxy. If
the proxy uses a port other than the default port, which is 3128, you can specify the port for that
proxy. You can also specify a user name and password for proxy authentication.
If you use SMTP to send AutoSupport messages either to your internal support organization or to
technical support, you must configure an external mail server. The storage system does not
function as a mail server; it requires an external mail server at your site to send mail. The mail
server must be a host that listens on the SMTP port (25) or another port, and it must be configured
to send and receive 8-bit Multipurpose Internet Mail Extensions (MIME) encoding. Example mail
hosts include a UNIX host running an SMTP server such as the sendmail program and a Windows
server running the Microsoft Exchange server. You can have one or more mail hosts.
Setting up AutoSupport
You can control whether and how AutoSupport information is sent to technical support and your
internal support organization, and then test that the configuration is correct.
About this task
In ONTAP 9.5 and later releases, you can enable AutoSupport and modify its configuration on all
nodes of the cluster simultaneously. When a new node joins the cluster, the node inherits the
AutoSupport cluster configuration automatically. You do not have to update the configuration on
each node separately.
Note: Starting with ONTAP 9.5, the scope of the system node autosupport modify
command is cluster-wide. The AutoSupport configuration is modified on all nodes in the cluster,
even when the -node option is specified. The option is ignored, but it has been retained for CLI
backward compatibility.
In ONTAP 9.4 and earlier releases, the scope of the "system node autosupport modify"
command is specific to the node. The AutoSupport configuration should be modified on each
node in your cluster.
By default, AutoSupport is enabled on each node to send messages to technical support by using
the HTTPS transport protocol.
Steps
1. Ensure that AutoSupport is enabled:
system node autosupport modify -state enable
2. If you want technical support to receive AutoSupport messages, set the -support
parameter to enable.
You must enable this option if you want to enable AutoSupport to work with AutoSupport
OnDemand or if you want to upload large files, such as core dump and performance archive
files, to technical support or a specified URL.
3. If technical support is enabled to receive AutoSupport messages, specify which transport
protocol to use for the messages.
You can choose from the following options:
If you want to... Then set the following parameters of the system node autosupport
modify command...
4. If you want your internal support organization or a support partner to receive AutoSupport
messages, perform the following actions:
a. Identify the recipients in your organization by setting the following parameters of the
system node autosupport modify command:
If you want to do this... Then set the following parameters of the system node autosupport
modify command...
The status of the latest outgoing AutoSupport message should eventually change to
sent-successful for all appropriate protocol destinations.
c. Optional: Confirm that the AutoSupport message is being sent to your internal support
organization or to your support partner by checking the email of any address that you
configured for the -to, -noteto, or -partner-address parameters of the system
node autosupport modify command.
Related tasks
Troubleshooting AutoSupport when messages are not received on page 119
If the system does not send the AutoSupport message, you can determine whether that is because
AutoSupport cannot generate the message or cannot deliver the message.
Related information
ONTAP 9 commands
Network and LIF management
In the following example, an AutoSupport message is generated and sent to the default
location, which is technical support, and the core dump file is uploaded to the default location,
which is the NetApp support site:
cluster1::> system node autosupport invoke-core-upload -core-filename
core.4073000068.2013-09-11.15_05_01.nz -node local
In the following example, an AutoSupport message is generated and sent to the location
specified in the URI, and the core dump file is uploaded to the URI:
Related tasks
Setting up AutoSupport on page 112
You can control whether and how AutoSupport information is sent to technical support and your
internal support organization, and then test that the configuration is correct.
In the following example, 4 hours of performance archive files from January 12, 2015 are added to
an AutoSupport message and uploaded to the default location, which is the NetApp support site:
In the following example, 4 hours of performance archive files from January 12, 2015 are added to
an AutoSupport message and uploaded to the location specified by the URI:
Related tasks
Setting up AutoSupport on page 112
You can control whether and how AutoSupport information is sent to technical support and your
internal support organization, and then test that the configuration is correct.
Control whether AutoSupport messages are sent to technical support: system node
autosupport modify with the -support parameter
The AutoSupport manifest is included with every AutoSupport message and presented in XML
format, which means that you can either use a generic XML viewer to read it or view it using the
Active IQ (formerly known as My AutoSupport) portal.
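Because the manifest is plain XML, it can also be processed with any XML library. A hedged sketch in Python; the element and attribute names in the sample are hypothetical stand-ins, not the actual manifest schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical manifest fragment for illustration only; inspect a real
# AutoSupport manifest (or Active IQ) for the actual element names.
SAMPLE = """<manifest>
  <file name="SYSCONFIG-A.txt" subsystem="platform" status="completed"/>
  <file name="EMS-LOG.gz" subsystem="log_files" status="skipped"/>
</manifest>"""

def collected_files(xml_text, status="completed"):
    """List the file names in the manifest that have the given collection status."""
    root = ET.fromstring(xml_text)
    return [f.get("name") for f in root.findall("file") if f.get("status") == status]
```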
Troubleshooting AutoSupport
If you do not receive AutoSupport messages, you can check a number of settings to resolve the
problem.
Troubleshooting AutoSupport when messages are not received
If the system does not send the AutoSupport message, you can determine whether that is because
AutoSupport cannot generate the message or cannot deliver the message.
Steps
1. Check delivery status of the messages by using the system node autosupport history
show command.
2. Read the status.
collection-in-progress: AutoSupport is collecting AutoSupport content. You can view what
AutoSupport is collecting by entering the system node autosupport manifest show command.
queued: AutoSupport messages are queued for delivery, but not yet delivered.
transmitting: AutoSupport is currently delivering messages.
sent-successful: AutoSupport successfully delivered the message. You can find out where
AutoSupport delivered the message by entering the system node autosupport history show
-delivery command.
ignore: AutoSupport has no destinations for the message. You can view the delivery details
by entering the system node autosupport history show -delivery command.
re-queued: AutoSupport tried to deliver messages, but the attempt failed. As a result,
AutoSupport placed the messages back in the delivery queue for another attempt. You can
view the error by entering the system node autosupport history show command.
transmission-failed: AutoSupport failed to deliver the message the specified number of
times and stopped trying to deliver the message. You can view the error by entering the
system node autosupport history show command.
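For scripting around these statuses, the list above can be reduced to a lookup from delivery status to the suggested follow-up command. This mapping is just a convenience derived from the text, not an ONTAP feature:

```python
# Map each AutoSupport delivery status to the follow-up command the text
# suggests; None means no action is needed yet (delivery is pending).
NEXT_STEP = {
    "collection-in-progress": "system node autosupport manifest show",
    "queued": None,
    "transmitting": None,
    "sent-successful": "system node autosupport history show -delivery",
    "ignore": "system node autosupport history show -delivery",
    "re-queued": "system node autosupport history show",
    "transmission-failed": "system node autosupport history show",
}

def follow_up(status):
    """Return the suggested next command for a delivery status (or None)."""
    return NEXT_STEP[status]
```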
Related tasks
Troubleshooting AutoSupport message delivery over SMTP on page 121
If the system cannot deliver AutoSupport messages over SMTP, you can check a number of
settings to resolve the problem.
Troubleshooting AutoSupport message delivery over HTTP or HTTPS on page 120
If the system does not send the expected AutoSupport message and you are using HTTP or
HTTPS, you can check a number of settings to resolve the problem.
220 filer.yourco.com Sendmail 4.1/SMI-4.1 ready at Thu, 30 Nov 2014 10:49:04 PST
8. At the telnet prompt, ensure that a message can be relayed from your mail host:
HELO domain_name
MAIL FROM: your_email_address
RCPT TO: [email protected]
domain_name is the domain name of your network.
If an error is returned saying that relaying is denied, relaying is not enabled on the mail host.
Contact your system administrator.
9. At the telnet prompt, send a test message:
DATA
SUBJECT: TESTING
THIS IS A TEST
.
Note: Ensure that you enter the last period (.) on a line by itself. The period indicates to
the mail host that the message is complete.
If an error is returned, your mail host is not configured correctly. Contact your system
administrator.
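In raw SMTP, the lone period that ends the DATA section also means that any body line beginning with a period must be escaped by doubling it ("dot-stuffing", per RFC 5321). A small Python helper that formats a body accordingly; this illustrates the protocol rule rather than anything AutoSupport-specific:

```python
def smtp_data_payload(lines):
    """Join body lines into an SMTP DATA payload.

    Lines starting with "." are dot-stuffed, and the payload is
    terminated by a line containing only "." (CRLF line endings).
    """
    stuffed = [("." + line) if line.startswith(".") else line for line in lines]
    return "\r\n".join(stuffed) + "\r\n.\r\n"
```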
10. From the ONTAP command-line interface, send an AutoSupport test message to a trusted
email address that you have access to:
system node autosupport invoke -node local -type test
11. Find the sequence number of the attempt:
system node autosupport history show -node local -destination smtp
Find the sequence number for your attempt based on the timestamp. It is probably the most
recent attempt.
12. Display the error for your test message attempt:
system node autosupport history show -node local -seq-num seq_num -fields error
If the error displayed is Login denied, your SMTP server is not accepting send requests
from the cluster management LIF. If you do not want to change to using HTTPS as your
transport protocol, contact your site network administrator to configure the SMTP gateways
to address this issue.
If this test succeeds but the same message sent to [email protected] does not,
ensure that SMTP relay is enabled on all of your SMTP mail hosts, or use HTTPS as a
transport protocol.
If even the message to the locally administered email account does not succeed, confirm that
your SMTP servers are configured to forward attachments with both of these characteristics:
• The "7z" suffix
• The "application/x-7z-compressed" MIME type.
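You can generate a test message with exactly these two characteristics by using Python's email package; the subject and payload here are arbitrary:

```python
from email.message import EmailMessage

def make_test_message(payload):
    """Build a message whose attachment has a .7z filename and the
    application/x-7z-compressed MIME type, for relay testing."""
    msg = EmailMessage()
    msg["Subject"] = "AutoSupport attachment relay test"
    msg.set_content("Body with a 7z attachment.")
    msg.add_attachment(payload, maintype="application",
                       subtype="x-7z-compressed", filename="body.7z")
    return msg
```

Sending the result through your mail hosts (for example with smtplib) confirms that they forward attachments of this type.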
Related tasks
Troubleshooting AutoSupport when messages are not received on page 119
If the system does not send the AutoSupport message, you can determine whether that is because
AutoSupport cannot generate the message or cannot deliver the message.
Related information
Network and LIF management
message when the problem occurs. You can configure policies to not trigger AutoSupport
messages by using the system health policy definition modify command.
You can view a list of all of the alert-triggered AutoSupport messages sent in the previous week
using the system health autosupport trigger history show command.
Alerts also trigger the generation of events to the EMS. An event is generated each time an alert is
created and each time an alert is cleared.
MetroCluster Fabric (subsystem: Switch): Monitors the MetroCluster configuration back-end
fabric topology and detects misconfigurations such as incorrect cabling and zoning, and
ISL failures.
MetroCluster Health (subsystems: Interconnect, RAID, and storage): Monitors FC-VI
adapters, FC initiator adapters, left-behind aggregates and disks, and inter-cluster
ports.
Node connectivity (node-connect), subsystem CIFS nondisruptive operations (CIFS-NDO):
Monitors SMB connections for nondisruptive operations to Hyper-V applications.
Node connectivity (node-connect), subsystem Storage (SAS-connect): Monitors shelves,
disks, and adapters at the node level for appropriate paths and connections.
System (subsystem: not applicable): Aggregates information from other health monitors.
System connectivity (system-connect), subsystem Storage (SAS-connect): Monitors shelves
at the cluster level for appropriate paths to two HA clustered nodes.
All hm.alert.raised messages and all hm.alert.cleared messages include an SNMP trap. The names
of the SNMP traps are HealthMonitorAlertRaised and HealthMonitorAlertCleared. For
information about SNMP traps, see the Network Management Guide.
Steps
1. Use the event destination create command to define the destination to which you want
to send the EMS messages.
You show alerts to find out where the problem is, and see that shelf 2 does not have two paths to
node1:
cluster1::>system health alert show
Node: node1
Resource: Shelf ID 2
Severity: Major
Indication Time: Mon Nov 10 16:48:12 2013
Probable Cause: Disk shelf 2 does not have two paths to controller
node1.
Possible Effect: Access to disk shelf 2 via controller node1 will be
lost with a single hardware component failure (e.g.
cable, HBA, or IOM failure).
Corrective Actions: 1. Halt controller node1 and all controllers attached to disk shelf 2.
2. Connect disk shelf 2 to controller node1 via two paths following the rules in the
Universal SAS and ACP Cabling Guide.
3. Reboot the halted controllers.
4. Contact support personnel if the alert persists.
You display details about the alert to get more information, including the alert ID:
cluster1::>system health alert show -monitor node-connect -alert-id DualPathToDiskShelf_Alert -instance
Node: node1
Monitor: node-connect
Alert ID: DualPathToDiskShelf_Alert
Alerting Resource: 50:05:0c:c1:02:00:0f:02
Subsystem: SAS-connect
Indication Time: Mon Mar 21 10:26:38 2011
Perceived Severity: Major
Probable Cause: Connection_establishment_error
Description: Disk shelf 2 does not have two paths to controller node1.
Corrective Actions: 1. Halt controller node1 and all controllers attached to disk shelf 2.
2. Connect disk shelf 2 to controller node1 via two paths following the rules in the
Universal SAS and ACP Cabling Guide.
3. Reboot the halted controllers.
4. Contact support personnel if the alert persists.
Possible Effect: Access to disk shelf 2 via controller node1 will be lost with a single
hardware component failure (e.g. cable, HBA, or IOM failure).
Acknowledge: false
Suppress: false
Policy: DualPathToDiskShelf_Policy
Acknowledger: -
Suppressor: -
Additional Information: Shelf uuid: 50:05:0c:c1:02:00:0f:02
Shelf id: 2
Shelf Name: 4d.shelf2
Number of Paths: 1
Number of Disks: 6
Adapter connected to IOMA:
Adapter connected to IOMB: 4d
Alerting Resource Name: Shelf ID 2
You acknowledge the alert to indicate that you are working on it.
cluster1::>system health alert modify -node node1 -alert-id DualPathToDiskShelf_Alert -acknowledge true
You fix the cabling between shelf 2 and node1, and then reboot the system. Then you check
system health again, and see that the status is OK:
cluster1::>system health status show
Status
---------------
OK
monitor if it cannot automatically discover a switch or if you do not want to use CDP for
automatic discovery.
About this task
The system cluster-switch show command lists the switches that the health monitor
discovered. If you do not see a switch that you expected to see in that list, then the health monitor
cannot automatically discover it.
Steps
1. If you want to use CDP for automatic discovery, do the following; otherwise, go to step 2:
a. Ensure that the Cisco Discovery Protocol (CDP) is enabled on your switches.
Refer to your switch documentation for instructions.
b. Run the following command on each node in the cluster to verify whether CDP is enabled
or disabled:
run -node node_name -command options cdpd.enable
If CDP is enabled, go to step d. If CDP is disabled, go to step c.
c. Run the following command to enable CDP:
run -node node_name -command options cdpd.enable on
Wait five minutes before you go to the next step.
d. Use the system cluster-switch show command to verify whether ONTAP can now
automatically discover the switches.
2. If the health monitor cannot automatically discover a switch, use the system
cluster-switch create command to configure discovery of the switch:
The community string in the switch's RCF must match the community string that the health
monitor is configured to use. By default, the health monitor uses the community string
cshm1!.
If you need to change information about a switch that the cluster monitors, you can
modify the community string that the health monitor uses by using the system health
cluster-switch modify command.
3. Verify that the switch's management port is connected to the management network.
This connection is required to perform SNMP queries.
Related tasks
Configuring discovery of cluster and management network switches on page 128
The cluster switch health monitor automatically attempts to discover your cluster and management
network switches using the Cisco Discovery Protocol (CDP). You must configure the health
monitor if it cannot automatically discover a switch or if you do not want to use CDP for
automatic discovery.
Modify information about a switch that the cluster monitors (for example, device name,
IP address, SNMP version, and community string): system cluster-switch modify
Disable monitoring of a switch: system cluster-switch modify -disable-monitoring
Display the interval in which the health monitor polls switches to gather information:
system cluster-switch polling-interval show
Modify the interval in which the health monitor polls switches to gather information:
system cluster-switch polling-interval modify
This command is available at the advanced privilege level.
Disable discovery and monitoring of a switch and delete switch configuration information:
system cluster-switch delete
Permanently remove the switch configuration information which is stored in the database
(doing so reenables automatic discovery of the switch): system cluster-switch delete
-force
Enable automatic logging to send with AutoSupport messages: system cluster-switch log
This command is available at the advanced privilege level.
Suppress a subsequent alert so that it does not affect the health status of a subsystem:
system health alert modify -suppress
Delete an alert that was not automatically cleared: system health alert delete
Display information about the alerts that a health monitor can potentially generate:
system health alert definition show
Note: Use the -instance parameter to display detailed information about each alert
definition.
Display information about health monitor policies, which determine when alerts are
raised: system health policy definition show
Note: Use the -instance parameter to display detailed information about each policy. Use
other parameters to filter the list of alerts—for example, by policy status (enabled or
not), health monitor, alert, and so on.
The Service Processor Infrastructure (spi) web service is enabled by default to enable a web
browser to access the log, core dump, and MIB files of a node in the cluster. The files remain
accessible even when the node is down, provided that the node is taken over by its partner.
If the firewall is enabled, the firewall policy for the logical interface (LIF) to be used for web
services must be set up to allow HTTP or HTTPS access.
If you use HTTPS for web service access, SSL for the cluster or storage virtual machine (SVM)
that offers the web service must also be enabled, and you must provide a digital certificate for the
cluster or SVM.
In MetroCluster configurations, the setting changes you make for the web protocol engine on a
cluster are not replicated on the partner cluster.
Related concepts
Managing SSL on page 141
The SSL protocol improves the security of web access by using a digital certificate to establish an
encrypted connection between a web server and a browser.
Managing web services on page 139
You can enable or disable a web service for the cluster or a storage virtual machine (SVM),
display the settings for web services, and control whether users of a role can access a web service.
Related tasks
Configuring access to web services on page 142
Managing access to web services
Configuring access to web services allows authorized users to use HTTP or HTTPS to access the
service content on the cluster or a storage virtual machine (SVM).
Related information
Network and LIF management
Display the configuration of the web protocol engine at the cluster level, determine
whether the web protocols are functional throughout the cluster, and display whether FIPS
140-2 compliance is enabled and online: system services web show
Display the configuration of the web protocol engine at the node level and the activity
of web service handling for the nodes in the cluster: system services web node show
Create a firewall policy or add HTTP or HTTPS protocol service to an existing firewall
policy to allow web access requests to go through the firewall: system services firewall
policy create
Setting the -service parameter to http or https enables web access requests to go through
the firewall.
Associate a firewall policy with a LIF: network interface modify
You can use the -firewall-policy parameter to modify the firewall policy of a LIF.
Related reference
Commands for managing SSL on page 141
You use the security ssl commands to manage the SSL protocol for the cluster or a storage
virtual machine (SVM).
Related information
ONTAP 9 commands
Related information
Administrator authentication and RBAC
idp_uri is the FTP or HTTP address of the IdP host from where the IdP metadata can be
downloaded.
ontap_host_name is the host name or IP address of the SAML service provider host, which
in this case is the ONTAP system. By default, the IP address of the cluster-management LIF is
used.
You can optionally provide the ONTAP server certificate information. By default, the ONTAP
web server certificate information is used.
Warning: This restarts the web server. Any HTTP/S connections that are active
will be disrupted.
Do you want to continue? {y|n}: y
[Job 179] Job succeeded: Access the SAML SP metadata using the URL:
https://10.63.56.150/saml-sp/Metadata
Configure the IdP and Data ONTAP users for the same directory server domain to ensure
that users are the same for different authentication methods. See the "security login
show" command for the Data ONTAP user configuration.
Any existing user that accesses the http or ontapi application is automatically configured for
SAML authentication.
4. If you want to create users for the http or ontapi application after SAML is configured,
specify SAML as the authentication method for the new users.
Vserver: cluster_12
Second
User/Group Authentication Acct Authentication
Name Application Method Role Name Locked Method
-------------- ----------- ------------- ---------------- ------ --------------
admin console password admin no none
admin http password admin no none
admin http saml admin - none
admin ontapi password admin no none
admin ontapi saml admin - none
admin service-processor
password admin no none
admin ssh password admin no none
admin1 http password backup no none
admin1 http saml backup - none
Related information
ONTAP 9 commands
authentication. You can identify the node on which SAML configuration has failed and then
manually repair that node.
Steps
1. Log in to the advanced privilege level:
set -privilege advanced
2. Identify the node on which SAML configuration failed:
security saml-sp status show -instance
Node: node1
Update Status: config-success
Database Epoch: 9
Database Transaction Count: 997
Error Text:
SAML Service Provider Enabled: false
ID of SAML Config Job: 179
Node: node2
Update Status: config-failed
Database Epoch: 9
Database Transaction Count: 997
Error Text: SAML job failed, Reason: Internal error. Failed to
receive the SAML IDP Metadata file.
SAML Service Provider Enabled: false
ID of SAML Config Job: 180
2 entries were displayed.
3. Repair the SAML configuration on the failed node:
security saml-sp repair -node node_name
Warning: This restarts the web server. Any HTTP/S connections that are active
will be disrupted.
Do you want to continue? {y|n}: y
[Job 181] Job is running.
[Job 181] Job success.
The web server is restarted and any active HTTP connections or HTTPS connections are
disrupted.
4. Verify that SAML is successfully configured on all of the nodes:
security saml-sp status show -instance
Node: node1
Update Status: config-success
Database Epoch: 9
Database Transaction Count: 997
Error Text:
SAML Service Provider Enabled: false
ID of SAML Config Job: 179
Node: node2
Update Status: config-success
Database Epoch: 9
Database Transaction Count: 997
Error Text:
SAML Service Provider Enabled: false
ID of SAML Config Job: 180
2 entries were displayed.
If a firewall is enabled, the firewall policy for the LIF to be used for web services must be set up to
allow HTTP or HTTPS.
If you use HTTPS for web service access, SSL for the cluster or SVM that offers the web service
must also be enabled, and you must provide a digital certificate for the cluster or SVM.
Related concepts
Managing the web protocol engine on page 134
You can configure the web protocol engine on the cluster to control whether web access is allowed
and what SSL versions can be used. You can also display the configuration settings for the web
protocol engine.
Managing SSL on page 141
The SSL protocol improves the security of web access by using a digital certificate to establish an
encrypted connection between a web server and a browser.
Related tasks
Configuring access to web services on page 142
Configuring access to web services allows authorized users to use HTTP or HTTPS to access the
service content on the cluster or a storage virtual machine (SVM).
Display the configuration and availability of web services for the cluster or an SVM
    vserver services web show

Authorize a role to access a web service on the cluster or an SVM
    vserver services web access create

Display the roles that are authorized to access web services on the cluster or an SVM
    vserver services web access show

Prevent a role from accessing a web service on the cluster or an SVM
    vserver services web access delete
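For example, to authorize the admin role for a web service and then confirm the authorization (the service name spi is illustrative):

cluster1::> vserver services web access create -vserver cluster1 -name spi -role admin
cluster1::> vserver services web access show -vserver cluster1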
Related information
ONTAP 9 commands
Managing SSL
The SSL protocol improves the security of web access by using a digital certificate to establish an
encrypted connection between a web server and a browser.
You can manage SSL for the cluster or a storage virtual machine (SVM) in the following ways:
• Enabling SSL
• Generating and installing a digital certificate and associating it with the cluster or SVM
• Displaying the SSL configuration to see whether SSL has been enabled, and, if available, the
SSL certificate name
• Setting up firewall policies for the cluster or SVM, so that web access requests can go through
• Defining which SSL versions can be used
• Restricting access to only HTTPS requests for a web service
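As a minimal sketch, SSL could be enabled for a cluster and the resulting configuration displayed as follows (the cluster name is illustrative):

cluster1::> security ssl modify -vserver cluster1 -server-enabled true
cluster1::> security ssl show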
Related concepts
Managing the web protocol engine on page 134
You can configure the web protocol engine on the cluster to control whether web access is allowed
and what SSL versions can be used. You can also display the configuration settings for the web
protocol engine.
Managing web services on page 139
You can enable or disable a web service for the cluster or a storage virtual machine (SVM),
display the settings for web services, and control whether users of a role can access a web service.
Related tasks
Configuring access to web services on page 142
Configuring access to web services allows authorized users to use HTTP or HTTPS to access the
service content on the cluster or a storage virtual machine (SVM).
Related information
Network and LIF management
ONTAP 9 commands
a. To verify that HTTP or HTTPS is set up in the firewall policy, use the system services
firewall policy show command.
You set the -service parameter of the system services firewall policy create
command to http or https to enable the policy to support web access.
b. To verify that the firewall policy supporting HTTP or HTTPS is associated with the LIF that provides web services, use the network interface show command with the -firewall-policy parameter.
You use the network interface modify command with the -firewall-policy
parameter to put the firewall policy into effect for a LIF.
2. To configure the cluster-level web protocol engine and make web service content accessible,
use the system services web modify command.
3. If you plan to use secure web services (HTTPS), enable SSL and provide digital certificate
information for the cluster or SVM by using the security ssl modify command.
4. To enable a web service for the cluster or SVM, use the vserver services web modify
command.
You must repeat this step for each service that you want to enable for the cluster or SVM.
5. To authorize a role to access web services on the cluster or SVM, use the vserver services
web access create command.
The role that you grant access must already exist. You can display existing roles by using the
security login role show command or create new roles by using the security login
role create command.
6. For a role that has been authorized to access a web service, ensure that its users are also
configured with the correct access method by checking the output of the security login
show command.
To access the ONTAP API web service (ontapi), a user must be configured with the ontapi
access method. To access all other web services, a user must be configured with the http
access method.
Note: You use the security login create command to add an access method for a
user.
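Putting the steps above together, a compressed sketch of the procedure for a hypothetical cluster follows; the SVM, service, user, and role names are illustrative:

cluster1::> system services web modify -external true -sslv3-enabled false
cluster1::> security ssl modify -vserver cluster1 -server-enabled true
cluster1::> vserver services web modify -vserver cluster1 -name spi -enabled true
cluster1::> vserver services web access create -vserver cluster1 -name spi -role admin
cluster1::> security login create -user-or-group-name web_user -application http -authentication-method password -role admin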
Related concepts
Managing SSL on page 141
The SSL protocol improves the security of web access by using a digital certificate to establish an
encrypted connection between a web server and a browser.
Managing the web protocol engine on page 134
You can configure the web protocol engine on the cluster to control whether web access is allowed
and what SSL versions can be used. You can also display the configuration settings for the web
protocol engine.
Managing web services on page 139
You can enable or disable a web service for the cluster or a storage virtual machine (SVM),
display the settings for web services, and control whether users of a role can access a web service.
Related information
Network and LIF management
Possible cause: Your firewall might be configured incorrectly.
Action: Ensure that a firewall policy is set up to support HTTP or HTTPS and that the policy is assigned to the LIF that provides the web service.
    Note: You use the system services firewall policy commands to manage firewall policies. You use the network interface modify command with the -firewall-policy parameter to associate a policy with a LIF.

Possible cause: Your web protocol engine might be disabled.
Action: Ensure that the web protocol engine is enabled so that web services are accessible.
    Note: You use the system services web commands to manage the web protocol engine for the cluster.

Problem: Your web browser returns a "not found" error when you try to access a web service.
Possible cause: The web service might be disabled.
Action: Ensure that each web service that you want to allow access to is enabled individually.
    Note: You use the vserver services web modify command to enable a web service for access.
Problem: You connect to your web service with HTTPS, and your web browser indicates that your connection is interrupted.
Possible cause: You might not have SSL enabled on the cluster or storage virtual machine (SVM) that provides the web service.
Action: Ensure that the cluster or SVM has SSL enabled and that the digital certificate is valid.
    Note: You use the security ssl commands to manage SSL configuration for HTTP servers and the security certificate show command to display digital certificate information.

Problem: You connect to your web service with HTTPS, and your web browser indicates that the connection is untrusted.
Possible cause: You might be using a self-signed digital certificate.
Action: Ensure that the digital certificate associated with the cluster or SVM is signed by a trusted CA.
    Note: You use the security certificate generate-csr command to generate a digital certificate signing request and the security certificate install command to install a CA-signed digital certificate. You use the security ssl commands to manage the SSL configuration for the cluster or SVM that provides the web service.
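When diagnosing the HTTPS problems above, the SSL and certificate state can be checked quickly (the SVM name is illustrative):

cluster1::> security ssl show -vserver vs1
cluster1::> security certificate show -vserver vs1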
Related concepts
Managing the web protocol engine on page 134
You can configure the web protocol engine on the cluster to control whether web access is allowed
and what SSL versions can be used. You can also display the configuration settings for the web
protocol engine.
Managing web services on page 139
You can enable or disable a web service for the cluster or a storage virtual machine (SVM),
display the settings for web services, and control whether users of a role can access a web service.
Managing SSL on page 141
The SSL protocol improves the security of web access by using a digital certificate to establish an
encrypted connection between a web server and a browser.
Related information
Network and LIF management
Verifying the identity of remote servers using certificates
The following command enables OCSP support for AutoSupport and EMS.
When OCSP is enabled, the application receives one of the following responses:
• Good - the certificate is valid and communication proceeds.
• Revoked - the certificate is permanently deemed as not trustworthy by its issuing Certificate
Authority and communication fails to proceed.
• Unknown - the server does not have any status information about the certificate and
communication fails to proceed.
• OCSP server information is missing in the certificate - the server acts as if OCSP is
disabled and continues with TLS communication, but no status check occurs.
• No response from OCSP server - the application fails to proceed.
3. To enable or disable OCSP certificate status checks for all applications using TLS
communications, use the appropriate command.
If you want OCSP certificate status checks for all applications to be enabled, use the command:
    security config ocsp enable -app all
When enabled, all applications receive a signed response signifying that the specified
certificate is good, revoked, or unknown. In the case of a revoked certificate, the application
will fail to proceed. If the application fails to receive a response from the OCSP server or if the
server is unreachable, the application will fail to proceed.
4. Use the security config ocsp show command to display all the applications that support
OCSP and their support status.
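For example, to enable OCSP checks for all applications and then review the per-application support status:

cluster1::> security config ocsp enable -app all
cluster1::> security config ocsp show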
Step
You can view the default certificates that are installed on the admin SVM by using the security certificate show command: security certificate show -vserver vserver_name -type server-ca.
fas2552-2n-abc-3
01 AAACertificateServices server-ca
Certificate Authority: AAA Certificate Services
Expiration Date: Sun Dec 31 18:59:59 2028
2. Copy the certificate request from the CSR output, and then send it in electronic form (such as
email) to a trusted third-party CA for signing.
After processing your request, the CA sends you the signed digital certificate. You should keep
a copy of the private key and the CA-signed digital certificate.
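As a sketch, a certificate signing request might be generated as follows; the common name and location values are illustrative:

cluster1::> security certificate generate-csr -common-name server1.example.com -size 2048 -country US -hash-function SHA256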
Steps
1. Use the security certificate install command with the -type server-ca and -
subtype kmip-cert parameters to install a KMIP certificate for the KMIP server.
2. When you are prompted, enter the certificate, and then press Enter.
ONTAP reminds you to keep a copy of the certificate for future reference.
cluster1::> security certificate install -type server-ca -subtype kmip-cert
-vserver cluster1
You should keep a copy of the CA-signed digital certificate for future reference.
cluster1::>
Copyright
Copyright © 2020 NetApp, Inc. All rights reserved. Printed in the U.S.
No part of this document covered by copyright may be reproduced in any form or by any means—
graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an
electronic retrieval system—without prior written permission of the copyright owner.
Software derived from copyrighted NetApp material is subject to the following license and
disclaimer:
THIS SOFTWARE IS PROVIDED BY NETAPP "AS IS" AND WITHOUT ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE,
WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL NETAPP BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
NetApp reserves the right to change any products described herein at any time, and without notice.
NetApp assumes no responsibility or liability arising from the use of products described herein,
except as expressly agreed to in writing by NetApp. The use or purchase of this product does not
convey a license under any patent rights, trademark rights, or any other intellectual property rights
of NetApp.
The product described in this manual may be protected by one or more U.S. patents, foreign
patents, or pending applications.
Data contained herein pertains to a commercial item (as defined in FAR 2.101) and is proprietary
to NetApp, Inc. The U.S. Government has a non-exclusive, non-transferrable, non-sublicensable,
worldwide, limited irrevocable license to use the Data only in connection with and in support of
the U.S. Government contract under which the Data was delivered. Except as provided herein, the
Data may not be used, disclosed, reproduced, modified, performed, or displayed without the prior
written approval of NetApp, Inc. United States Government license rights for the Department of
Defense are limited to those rights identified in DFARS clause 252.227-7015(b).
Trademark
NETAPP, the NETAPP logo, and the marks listed on the NetApp Trademarks page are trademarks
of NetApp, Inc. Other company and product names may be trademarks of their respective owners.
http://www.netapp.com/us/legal/netapptmlist.aspx