NetApp Clustered ONTAP CLI Pocket Guide
On this page I will be constantly adding NetApp Clustered Data ONTAP CLI commands as an easy-reference pocket guide.
Most clustered ONTAP commands work in 8.x; some require 9.x (I have added a note where a 9.x version is required).
(Updated 26-July-2018)
MISC
system node run -node local sysconfig -a (Run sysconfig on the local node)
In clustered ONTAP the symbol ! means "not", e.g. storage aggregate show -state !online
(show all aggregates that are not online)
node run -node <node_name> -command sysstat -c 10 -x 3 (Running the sysstat performance
tool from cluster mode)
system node image show (Show the running Data ONTAP versions and which is the default boot image)
node run * environment shelf (Shows information about the Shelves Connected including Model
Number)
network options switchless-cluster show (Displays whether the nodes are set up for a switchless or
switched cluster – need to be in advanced mode)
network options switchless-cluster modify true (Sets the nodes to use cluster switchless; setting
to false sets the nodes to use cluster switches – need to be in advanced mode)
security login banner modify -message "Only Authorized users allowed!" (Set the login banner
to Only Authorized users allowed)
security login banner modify -message "" (Clears the login banner)
security login motd show (Shows the current Message of the day)
security login motd modify -vserver vserver1 (Modify the Message of the Day, using the escape
variables below)
Operating System = \s
Software Version = \r
Node Name = \n
Username = \N
Time = \t
Date = \d
security login motd modify -vserver vserver1 -message "" (Clears the current Message of the
Day)
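As a worked example (vserver name and message text assumed), a MOTD that combines several of the escape variables:

```
security login motd modify -vserver vserver1 -message "Welcome \N to \n – \d \t" (At login this expands to the username, node name, date and time)
```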
security login password -username diag (Set a password for the diag user)
system configuration backup create -backup-name node1-backup -node node1 (Create a cluster
backup from node1)
LOGS
To look at the logs within clustered ONTAP you must log in to a specific node as the diag user
username: diag
password: <your diag password>
cd /mroot/etc/mlog
cat command-history.log | grep volume (searches the command-history.log file for the keyword
volume)
COREDUMP
system coredump status (Shows unsaved cores, saved cores and partial cores)
SERVICE PROCESSOR
system node service-processor show (Show the service processor firmware levels of each node in
the cluster)
system node service-processor image update-progress show (Shows the progress of a firmware
update on the Service Processor)
DISK SHELVES
storage shelf show (an 8.3 command that displays the loops and shelf information)
AUTOSUPPORT
system node autosupport budget show -node local (In diag mode – displays current time and size
budgets)
system node autosupport budget modify -node local -subsystem wafl -size-limit 0 -time-limit
10m (In diag mode – modification as per NetApp KB 1014211)
system node autosupport show -node local -fields max-http-size,max-smtp-size (Displays max
http and smtp sizes)
CLUSTER
set -privilege advanced (required to be in advanced mode for the below commands)
cluster statistics show (shows statistics of the cluster – CPU, NFS, CIFS, FCP, Cluster
Interconnect Traffic)
cluster ring show -unitname vldb (check if volume location database is in quorum)
cluster ring show -unitname vifmgr (check if virtual interface manager is in quorum)
cluster ring show -unitname bcomd (check if san management daemon is in quorum)
cluster unjoin (Must be run at admin privilege; unjoins a node from the cluster. You must also
remove its cluster HA partner)
debug vreport show (Must be run at diag privilege; shows WAFL and VLDB consistency)
cluster kernel-service show -list (in diag mode, displays in quorum information)
debug smdb table bcomd_info show (displays database master / secondary for bcomd)
NODES
system node reboot -node NODENAME -reason ENTER REASON (Reboot node with a given
reason. NOTE: check ha policy)
FLASH CACHE
system node run -node * options flexscale.enable on (Enabling Flash Cache on each node)
system node run -node * options flexscale.lopri_blocks on (Enabling Flash Cache on each node)
system node run -node * options flexscale.normal_data_blocks on (Enabling Flash Cache on
each node)
node run NODENAME stats show -p flexscale-access (display flash cache statistics)
FLASH POOL
storage aggregate add-disks -disktype SSD (Add SSD disks to AGGR to begin creating a flash
pool)
priority hybrid-cache set volume1 read-cache=none write-cache=none (Within node shell and
diag mode disable read and write cache on volume1)
FAILOVER
storage failover modify -node <node_name> -enabled true (Enabling failover on one of the
nodes enables it on the other)
storage failover modify -node <node_name> -auto-giveback false (Disables auto giveback on
this ha node)
storage failover modify -node <node_name> -auto-giveback true (Enables auto giveback on
this HA node)
aggregate show -node NODENAME -fields ha-policy (show SFO HA Policy for aggregate)
AGGREGATES
aggr show -space (Show used and used% for volume footprints and aggregate metadata)
aggregate show (Show all aggregates size, used% and state)
aggregate add-disks -aggregate <aggregate_name> -diskcount <number_of_disks> (Adds a
number of disks to the aggregate)
storage disk assign -disk 0a.00.1 -owner <node_name> (Assign a specific disk to a node) OR
storage disk assign -count <number_of_disks> -owner <node_name> (Assign unallocated disks
to a node)
storage disk show -state broken | copy | maintenance | partner | present | reconstructing | removed
| spare | unfail | zeroing (Show disks in the given state)
storage disk modify -disk NODE1:4c.10.0 -owner NODE1 -force-owner true (Force the change
of ownership of a disk)
storage disk removeowner -disk NODE1:4c.10.0 -force true (Remove ownership of a drive)
storage disk set-led -disk Node1:4c.10.0 -action blink -time 5 (Blink the led of disk 4c.10.0 for 5
minutes. Use the blinkoff action to turn it off)
VSERVER
VOLUMES
volume move show (shows all volume moves currently active or waiting. NOTE: You can only
do 8 volume moves at one time, more than 8 and they get queued)
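A minimal sketch of starting and tracking a move (SVM, volume and aggregate names assumed):

```
volume move start -vserver SVM1 -volume volume1 -destination-aggregate aggr2 (Begins a non-disruptive move of volume1 to aggr2)
volume move show -vserver SVM1 -volume volume1 (Tracks the state and percent-complete of the move)
```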
system node run -node <node_name> vol size <volume_name> 400g (Resize volume_name to
400GB)
volume recovery-queue purge-all (An 8.3 command that purges the volume undelete cache)
volume show -vserver SVM1 -volume * -autosize true (Shows which volumes have autosize
enabled)
volume show -vserver SVM1 -volume * -atime-update true (Shows which volumes have update
access time enabled)
volume modify -vserver SVM1 -volume volume1 -atime-update false (Turns update access time
off on the volume)
LUNS
lun show -vserver <vserver_name> (Shows all luns belonging to this specific vserver)
lun geometry -vserver <vserver_name> -path /vol/vol1/lun1 (Displays the lun geometry)
lun mapping add-reporting-nodes -vserver <vserver_name> -volume <vol name> -lun <lun
path> -igroup <igroup name> -destination-aggregate <aggregate name> (Adds the igroup as
reporting nodes for the lun)
lun mapping show -vserver <vserver name> -volume <volume name> -fields reporting-nodes
(Show reporting nodes for a specific volume)
NFS
vserver nfs modify -vserver <vserver_name> -v4.1-pnfs enabled (Enable pNFS. NOTE: Cannot coexist with
NFSv4)
FCP
fcp adapter modify -node NODENAME -adapter 0e -state down (Take port 0e offline)
node run <nodename> fcpadmin config (Shows the config of the adapters – initiator or target)
node run <nodename> fcpadmin config -t target 0a (Changes port 0a from initiator to target – you
must reboot the node)
CIFS
vserver cifs modify -vserver <vserver_name> -default-site AD-DC-Site (ONTAP 9.4 – Specify an
Active Directory Site)
vserver cifs options modify -vserver <vserver_name> -is-large-mtu-enabled false (ONTAP 9.x – set
to false due to NetApp Bug ID 1139257)
cifs domain discovered-servers discovery-mode modify -vserver <vserver name> -mode site
(ONTAP 9.3 – Set Domain Controller discovery to a single site)
vserver cifs share create -share-name root -path / (Create a CIFS share called root)
SMB
vserver cifs options modify -vserver <vserver_name> -smb2-enabled true (Enable SMB 2.0 and
2.1)
SNAPSHOTS
volume snapshot create -vserver vserver1 -volume vol1 -snapshot snapshot1 (Create a snapshot
on vserver1, vol1 called snapshot1)
volume snapshot restore -vserver vserver1 -volume vol1 -snapshot snapshot1 (Restore a snapshot
on vserver1, vol1 called snapshot1)
volume snapshot show -vserver vserver1 -volume vol1 (Show snapshots on vserver1 vol1)
snap autodelete show -vserver SVM1 -enabled true (Shows which volumes have autodelete
enabled)
NOTE: You can create snapmirror relationships between 2 different clusters by creating a peer
relationship
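A hedged sketch of that peering workflow (cluster, SVM and volume names and the intercluster LIF address are all assumed; intercluster LIFs must already exist on both clusters):

```
cluster peer create -peer-addrs 10.10.20.1 (Run on each cluster, pointing at an intercluster LIF on the other cluster)
vserver peer create -vserver SVM1 -peer-vserver SVM2 -peer-cluster cluster2 -applications snapmirror (Peers the source and destination SVMs for snapmirror)
snapmirror create -source-path SVM1:vol1 -destination-path SVM2:vol1_dst -type DP (Creates the mirror relationship across the peered clusters)
snapmirror initialize -destination-path SVM2:vol1_dst (Runs the baseline transfer)
```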
SNAPVAULT
NOTE: Relationship types are DP (asynchronous mirror), LS (load-sharing mirror), XDP (backup vault,
snapvault), TDP (transition), RST (transient restore)
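For example, a vault (snapvault) relationship is simply a snapmirror relationship of type XDP (paths and policy name assumed):

```
snapmirror create -source-path SVM1:vol1 -destination-path SVM2:vol1_vault -type XDP -policy XDPDefault (Creates a backup-vault relationship using the default vault policy)
```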
DEDUPE
volume efficiency on -vserver SVM1 -volume volume1 (Turns Dedupe on for this volume)
volume efficiency start -vserver SVM1 -volume volume1 -dedupe true -scan-old-data true (Starts
a volume efficiency dedupe job on volume1, scanning old data)
volume efficiency start -vserver SVM1 -volume volume1 -dedupe true (Starts a volume
efficiency dedupe job on volume1, not scanning old data)
volume efficiency show -op-status !idle (This will display the running volume efficiency tasks)
NETWORK INTERFACE
network interface modify -vserver vserver1 -lif cifs1 -address 192.168.1.10 -netmask
255.255.255.0 -force-subnet-association (Data Ontap 8.3 – forces the lif to use an IP address
from the subnet range that has been setup)
network port show (Shows the status and information on current network ports)
network port modify -node * -port <vif_name> -mtu 9000 (Enable Jumbo Frames on interface
vif_name)
network port modify -node * -port <data_port_name> -flowcontrol-admin none (Disables Flow
Control on port data_port_name)
network interface revert * (revert all network interfaces to their home port)
ifgrp create -node <node_name> -ifgrp <vif_name> -distr-func ip -mode multimode (Create an
interface group called vif_name on node_name)
network port ifgrp add-port -node <node_name> -ifgrp <vif_name> -port <port_name> (Add a
port to vif_name)
ifgrp show (Shows the status and information on current interface groups)
net int failover-groups show (Show Failover Group Status and information)
node run node1 ifstat -a (shows interface statistics such as crc errors)
node run node1 ifstat -z (clears interface statistics, optionally specify the interface name to clear
for that specific interface)
ROUTING GROUPS
network routing-groups show -vserver vserver1 (show routing groups for vserver1)
ping -lif-owner vserver1 -lif data1 -destination www.google.com (ping www.google.com via
vserver1 using the data1 port)
DNS
UNIX
vserver name-mapping create -vserver vserver1 -direction win-unix -position 1 -pattern (.+) -
replacement root (Create a name mapping from windows to unix)
vserver name-mapping create -vserver vserver1 -direction unix-win -position 1 -pattern (.+) -
replacement sysadmin011 (Create a name mapping from unix to windows)
NIS
vserver services nis-domain create -vserver vserver1 -domain vmlab.local -active true -servers
10.10.10.1 (Create nis-domain called vmlab.local pointing to 10.10.10.1)
vserver modify -vserver vserver1 -ns-switch nis,file (Sets the name service switch order to NIS then file)
NTP
system services ntp server create -node <node_name> -server <ntp_server> (Adds an NTP server
to node_name)
system node date modify -timezone <Area/Location Timezone> (Sets timezone for
Area/Location Timezone. i.e. Australia/Sydney)
timezone -timezone Australia/Sydney (Sets the timezone for Sydney. Type ? after -timezone for
a list)
date -node <node_name> (Displays the date and time for the node)
PERFORMANCE
statistics show-periodic -object volume -instance volumename -node node1 -vserver vserver1 -counter
total_ops|avg_latency|read_ops|read_latency (Show the specific counters for a volume)
GOTCHAS
Removing a port from an ifgrp – To remove a port from an ifgrp, you must first shut down any
sub-interfaces of that ifgrp. For example, if your ifgrp is named a0a and it carries a VLAN
called a0a-100, you must first shut down a0a-100; you will then be able to remove the port from
the ifgrp.
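As a sketch of that sequence (node, ifgrp, VLAN and port names assumed; here the VLAN is deleted outright rather than just downed):

```
network port vlan delete -node node1 -vlan-name a0a-100 (Removes the VLAN sub-interface riding on ifgrp a0a)
network port ifgrp remove-port -node node1 -ifgrp a0a -port e0c (The member port can now be removed from the ifgrp)
```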
FCoE – If you run multiple 10Gb Converged Network Adapters connecting to Nexus 5k switches,
you are not allowed (and it is not supported) to run more than 1 link per port-channel per switch
and use that port-channel or interface for a virtual fibre channel interface (VFC). For example, if
I have 2 x CNAs in my NetApp Clustered ONTAP FAS, connect e1a and e2a to Nexus Switch 1,
connect e1b and e2b to Nexus Switch 2, and create 1 port-channel, you will get an error when you
try to bind the interface to the VFC: "VFC cannot be bound to Port Channel as it has more than
one member".
FCoE Lif Moves – To move an FCoE lif from its current home-port in clustered ONTAP, you must
first offline the FCoE lif, perform the lif move and then online the lif.
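A minimal sketch of that sequence (SVM, lif, node and port names assumed):

```
network interface modify -vserver SVM1 -lif fcoe_lif1 -status-admin down (Offline the FCoE lif)
network interface modify -vserver SVM1 -lif fcoe_lif1 -home-node node2 -home-port 0e (Set the new home-node and home-port while the lif is down)
network interface modify -vserver SVM1 -lif fcoe_lif1 -status-admin up (Online the lif again)
```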