
CLI Commands Used to Troubleshoot ACI Fabric
Contents
Introduction
Commands
Display Outputs
General
Check Active Links
Show Link Status
Show Commands

Introduction
This document describes CLI commands that can be entered on the APIC controller by the
"admin" user in order to troubleshoot the ACI fabric. Sample command output is also provided.

If you have used "root" access on the APIC controllers and have upgraded to a later release, you
might have noticed that "root" access has been removed or disallowed. One impact for Technical
Support engineers is the loss of av.bin/fnv.bin, internal debugging tools used by developers to
troubleshoot fabric issues on the APIC controllers.

This information can still be obtained as the "admin" user. The "admin" user runs in a restricted
container that has no access to those binaries or to internal storage; the root file system,
which contains the binaries, is deliberately kept inaccessible. There is currently no plan to
open this access to the "admin" user.

Note: Some of the commands in this document might be deprecated in later releases.

Commands
This list contains some helpful commands that can be entered in the CLI of the APIC controller
and can be executed by the "admin" user.

● acidiag avread
● acidiag fnvread
● acidiag rvread
● acidiag start/stop/restart
# start/stop/restart the DMEs (Data Management Engines)
● acidiag verifyapic
● controller
● eraseconfig setup or acidiag touch setup
● show controller [1,2,3]
# executing command: ls /aci/system/controllers/
● show fabric membership
# executing command: cat /aci/fabric/inventory/fabric-membership/clients/summary
● techsupport controller [controller]
controller = controller ID (for example, 1), range (for example, 1-2), name (for example, apic1),
or name list (for example, apic1,apic2,..). In the command output, the controllers are referred
to as IFCs (Insieme Fabric Controllers).
● techsupport switch [switch]
switch = switch node ID (for example, 101), range (for example, 101-103), name (for example,
leaf1, spine1), or name list (for example, leaf1,leaf2,spine1...)
● version
● whoami
● cat /proc/net/bonding/bond0
● ip link
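The read-only commands in the list above can be batched into a single capture before opening a support case. The sketch below is illustrative only (the `collect` helper and log path are hypothetical, not APIC features); it simply runs each documented command and appends the output to one file.

```shell
#!/bin/sh
# Hypothetical collection sketch: run each read-only diagnostic command in turn
# and append its output to a single log file. Assumes an "admin" shell on the
# APIC; the helper name "collect" and the log path are illustrative only.
LOG=/tmp/aci-diag.log
: > "$LOG"

collect() {
    # Record a header for the command, then its output (errors included).
    printf '===== %s =====\n' "$1" >> "$LOG"
    eval "$1" >> "$LOG" 2>&1 || true
}

collect "acidiag avread"
collect "acidiag fnvread"
collect "acidiag rvread"
collect "acidiag verifyapic"
collect "version"
collect "cat /proc/net/bonding/bond0"
```

Destructive commands (acidiag restart, eraseconfig setup) are deliberately left out of such a capture.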

Display Outputs
General

acidiag avread

admin@rtp_apic1:~> acidiag avread


Local appliance ID=1 ADDRESS=10.0.0.1 TEP ADDRESS=10.0.0.0/16 CHASSIS_ID=f9269682-dcaf-11e3-
ad0a-5bdcd2d9fd69
Cluster of 3 lm(t):1(2014-05-16T05:54:52.713+00:00) appliances (out of targeted 3 lm(t):3(2014-
05-16T05:56:32.773+00:00)) with FABRIC_DOMAIN name=rtp_fabric set to version= lm(t):3(2014-05-
16T05:56:32.651+00:00)
appliance id=1 last mutated at 2014-05-16T04:10:26.983+00:00 address=10.0.0.1 tep
address=10.0.0.0/16 oob address=172.18.217.211/24 version=1.0(0.160i) lm(t):1(2014-05-
16T05:56:09.698+00:00) chassisId=f9269682-dcaf-11e3-ad0a-5bdcd2d9fd69 lm(t):1(2014-05-
16T05:56:09.698+00:00) commissioned=1 registered=1 active=yes(zeroTime) health=(applnc:255
lm(t):1(2014-05-16T05:57:49.232+00:00) svc's)
appliance id=2 last mutated at 2014-05-16T05:52:00.907+00:00 address=10.0.0.2 tep
address=10.0.0.0/16 oob address=172.18.217.215/24 version=1.0(0.160i) lm(t):2(2014-05-
16T05:56:09.675+00:00) chassisId=0c09da4e-dcbe-11e3-b521-83983b363aa3 lm(t):2(2014-05-
16T05:56:09.675+00:00) commissioned=1 registered=1 active=yes(2014-05-16T05:52:00.907+00:00)
health=(applnc:255 lm(t):2(2014-05-16T05:57:49.221+00:00) svc's)
appliance id=3 last mutated at 2014-05-16T05:56:09.565+00:00 address=10.0.0.3 tep
address=10.0.0.0/16 oob address=172.18.217.49/24 version=1.0(0.160i) lm(t):3(2014-05-
16T05:56:09.813+00:00) chassisId=ae236bc4-dcbe-11e3-abbf-c1ff139eb074 lm(t):3(2014-05-
16T05:56:09.813+00:00) commissioned=1 registered=1 active=yes(2014-05-16T05:56:09.565+00:00)
health=(applnc:255 lm(t):3(2014-05-16T05:57:49.098+00:00) svc's)
clusterTime=<diff=76746 common=2014-05-16T07:27:56.384+00:00 local=2014-05-16T07:26:39.638+00:00
pF=<displForm=0 offsSt=0 offsVlu=0 lm(t):3(2014-05-16T05:56:32.935+00:00)>>
---------------------------------------------

acidiag fnvread

admin@rtp_apic1:~> acidiag fnvread


ID Name Serial Number IP Address Role State LastUpdMsgId
-------------------------------------------------------------------------------------------------
101 rtp_leaf1 SAL1732B53W 10.0.20.95/32 leaf active 0
102 rtp_leaf2 SAL172682S0 10.0.20.91/32 leaf active 0
103 rtp_leaf3 SAL1802KLJF 10.0.20.92/32 leaf active 0
201 rtp-spine1 FGE173400H2 10.0.20.94/32 spine active 0
202 rtp_spine2 FGE173400H7 10.0.20.93/32 spine active 0
Total 5 nodes
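When the fabric is healthy, every node in the acidiag fnvread output shows State "active". A quick awk filter can surface any node in another state. This is a parsing sketch, not an APIC command; the sample rows below are embedded for illustration (the "inactive" entry is fabricated to show a hit), and on an APIC you would pipe the live output instead.

```shell
# A minimal parsing sketch (not an APIC command): list any node whose State
# column in "acidiag fnvread" output is not "active". Sample data is embedded;
# on an APIC, pipe the live command output instead.
acidiag_fnvread_sample() {
cat <<'EOF'
 ID   Name        Serial Number  IP Address     Role   State     LastUpdMsgId
--------------------------------------------------------------------------------
 101  rtp_leaf1   SAL1732B53W    10.0.20.95/32  leaf   active    0
 102  rtp_leaf2   SAL172682S0    10.0.20.91/32  leaf   inactive  0
EOF
}

# Columns: ID Name Serial IP Role State LastUpdMsgId -> State is field 6.
acidiag_fnvread_sample | awk 'NR > 2 && $6 != "active" { print $1, $2, $6 }'
```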

acidiag rvread

admin@apic1:~> acidiag rvread


Replicas are in expected states
---------------------------------------------
clusterTime=<diff=4081694 common=2014-06-09T03:01:40.314-07:00 local=2014-06-09T01:53:38.620-
07:00 pF=<displForm=0 offsSt=0 offsVlu=-25200 lm(t):3(2014-06-05T01:00:20.067-07:00)>>

acidiag verifyapic

admin@apic1:~> acidiag verifyapic


openssl_check: passed
ssh_check: passed
all_checks: passed
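Because every individual check rolls up into the "all_checks" line, a wrapper only needs to grep for that one line. The sketch below embeds the sample output shown above; on an APIC you would run acidiag verifyapic itself in place of the variable.

```shell
# A quick pass/fail wrapper sketch: grep "acidiag verifyapic" output for the
# summary line. The sample output is embedded here for illustration; on an
# APIC, substitute the live command output.
verify_output='openssl_check: passed
ssh_check: passed
all_checks: passed'

if printf '%s\n' "$verify_output" | grep -q '^all_checks: passed$'; then
    echo "APIC verification OK"
else
    echo "APIC verification FAILED"
fi
```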

controller

admin@apic1:~> controller
operational-cluster-size : 3
differences-between-local-time-and-unified-cluster-time : 0
administrative-cluster-size : 3

controllers:
id name  ip       cluster-admin-state cluster-operational-state health-state up-time         system-current-time
-- ----- -------- ------------------- ------------------------- ------------ --------------- -----------------------------
1  apic1 10.0.0.1 in-service          available                 fully-fit    02:18:35:52.000 2014-06-09T02:10:14.668-07:00
2  apic2 10.0.0.2 in-service          available                 fully-fit    05:00:24:46.000 2014-06-09T03:18:08.740-07:00
3  apic3 10.0.0.3 in-service          available                 fully-fit    05:00:24:30.000 2014-06-09T02:18:51.843-07:00

eraseconfig setup or acidiag touch setup

admin@rtp_apic3:~> eraseconfig setup or acidiag touch setup

Do you want to cleanup the initial setup data? The system will be REBOOTED. (Y/n):

<< A way to reset the initial setup without a complete reload of the APIC software. >>

show controller 1

admin@rtp_apic1:~> show controller 1


# Executing command: cat /aci/system/controllers/1/summary

# fabric-node
id : 1
admin-state : on
controller-uuid : f9269682-dcaf-11e3-ad0a-5bdcd2d9fd69
operational-state : available
cluster-state : in-service
health-state : fully-fit
infra-ip : 10.0.0.1
in-band-management-ip : 0.0.0.0
out-of-band-management-ip : 172.18.217.211
up-time : 00:03:41:30.000
system-current-time : 2014-05-16T07:40:44.782+00:00
firmware : 1.0(0.160i)
allocated-memory : 10553140
cpu-architecture : x86_64
cores : 6
model : 45
speed-mhz : 2500.000000
vendor : GenuineIntel
loc-led-administrative-state : disabled
loc-led-operation-state : on

tags:
name
----

show controller 2

admin@rtp_apic1:/> show controller 2


# Executing command: cat /aci/system/controllers/2/summary

# fabric-node
id : 2
admin-state : on
controller-uuid : 0c09da4e-dcbe-11e3-b521-83983b363aa3
operational-state : available
cluster-state : in-service
health-state : fully-fit
infra-ip : 10.0.0.2
in-band-management-ip : 0.0.0.0
out-of-band-management-ip : 172.18.217.215
up-time : 00:02:12:42.000
system-current-time : 2014-05-16T08:02:06.891+00:00
firmware : 1.0(0.160i)
allocated-memory : 8836508
cpu-architecture : x86_64
cores : 4
model : 62
speed-mhz : 2499.000000
vendor : GenuineIntel
loc-led-administrative-state : disabled
loc-led-operation-state : on

tags:
name
----

show controller 3

admin@rtp_apic1:/> show controller 3


# Executing command: cat /aci/system/controllers/3/summary

# fabric-node
id : 3
admin-state : on
controller-uuid : ae236bc4-dcbe-11e3-abbf-c1ff139eb074
operational-state : available
cluster-state : in-service
health-state : fully-fit
infra-ip : 10.0.0.3
in-band-management-ip : 0.0.0.0
out-of-band-management-ip : 172.18.217.49
up-time : 00:02:16:11.000
system-current-time : 2014-05-16T08:04:02.646+00:00
firmware : 1.0(0.160i)
allocated-memory : 8607340
cpu-architecture : x86_64
cores : 6
model : 45
speed-mhz : 2499.000000
vendor : GenuineIntel
loc-led-administrative-state : disabled
loc-led-operation-state : on

tags:
name

show fabric membership

admin@rtp_apic1:~> show fabric membership


# Executing command: cat /aci/fabric/inventory/fabric-membership/clients/summary

clients:
serial-number node-id node-name model role ip decomissioned supported-model
------------- ------- ---------- ------------ ----- ------------- ------------- ---------------
SAL1732B53W 101 rtp_leaf1 N9K-C9396PX leaf 10.0.20.95/32 no yes
SAL172682S0 102 rtp_leaf2 N9K-C93128TX leaf 10.0.20.91/32 no yes
SAL1802KLJF 103 rtp_leaf3 N9K-C9396PX leaf 10.0.20.92/32 no yes
FGE173400H2 201 rtp-spine1 N9K-C9508 spine 10.0.20.94/32 no yes
FGE173400H7 202 rtp_spine2 N9K-C9508 spine 10.0.20.93/32 no yes
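The last two columns of the membership table are the ones to scan during troubleshooting: a node that is decommissioned or running an unsupported model will not participate normally in the fabric. The awk sketch below flags such rows; the sample data is embedded for illustration (the decommissioned spine is fabricated to show a hit), and on an APIC you would pipe the live command output instead.

```shell
# Sketch: flag any fabric member that is decommissioned or on an unsupported
# model, by parsing the summary table columns. Sample rows are embedded; pipe
# the live "show fabric membership" output on an APIC instead.
membership_sample() {
cat <<'EOF'
serial-number node-id node-name  model        role  ip            decomissioned supported-model
------------- ------- ---------- ------------ ----- ------------- ------------- ---------------
SAL1732B53W   101     rtp_leaf1  N9K-C9396PX  leaf  10.0.20.95/32 no            yes
FGE173400H2   201     rtp-spine1 N9K-C9508    spine 10.0.20.94/32 yes           yes
EOF
}

# Columns 7 and 8 are decomissioned [sic] and supported-model.
membership_sample | awk 'NR > 2 && ($7 == "yes" || $8 == "no") { print $2, $3, "decommissioned=" $7, "supported=" $8 }'
```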

techsupport controller 1

admin@rtp_apic1:~> techsupport controller 1


Triggered on demand tech support successfully for IFCs, available at: /data/techsupport on the
IFCs.

techsupport controller 1-2

admin@rtp_apic1:~> techsupport controller 1-2


Triggered on demand tech support successfully for IFCs, available at: /data/techsupport on the
IFCs.
Triggered on demand tech support successfully for IFCs, available at: /data/techsupport on the
IFCs.

techsupport controller rtp_apic1

admin@rtp_apic1:~> techsupport controller rtp_apic1


Triggered on demand tech support successfully for IFCs, available at: /data/techsupport on the
IFCs.

techsupport controller rtp_apic1,rtp_apic2,rtp_apic3

admin@rtp_apic1:~> techsupport controller rtp_apic1,rtp_apic2,rtp_apic3


Triggered on demand tech support successfully for IFCs, available at: /data/techsupport on the
IFCs.
Triggered on demand tech support successfully for IFCs, available at: /data/techsupport on the
IFCs.
Triggered on demand tech support successfully for IFCs, available at: /data/techsupport on the
IFCs.

techsupport switch 101

admin@rtp_apic1:/> techsupport switch 101


Triggered on demand tech support successfully for node 101, available at: /data/techsupport on
an IFC.

techsupport switch 101-103

admin@rtp_apic1:/> techsupport switch 101-103


Triggered on demand tech support successfully for node 101, available at: /data/techsupport on
an IFC.
Triggered on demand tech support successfully for node 102, available at: /data/techsupport on
an IFC.
Triggered on demand tech support successfully for node 103, available at: /data/techsupport on
an IFC.

techsupport switch rtp_leaf1

admin@rtp_apic1:/> techsupport switch rtp_leaf1


Triggered on demand tech support successfully for node 101, available at: /data/techsupport on
an IFC.

techsupport switch rtp_leaf1,rtp_leaf2,rtp-spine1,rtp_spine2

admin@rtp_apic1:/> techsupport switch rtp_leaf1,rtp_leaf2,rtp-spine1,rtp_spine2


Triggered on demand tech support successfully for node 101, available at: /data/techsupport on
an IFC.
Triggered on demand tech support successfully for node 102, available at: /data/techsupport on
an IFC.
Triggered on demand tech support successfully for node 201, available at: /data/techsupport on
an IFC.
Triggered on demand tech support successfully for node 202, available at: /data/techsupport on
an IFC.

version

admin@rtp_apic1:~> version
node type node id node name version
---------- ------- ---------- ------------------
controller 1 rtp_apic1 1.0(0.160i)
controller 2 rtp_apic2 1.0(0.160i)
controller 3 rtp_apic3 1.0(0.160i)
leaf 101 rtp_leaf1 n9000-11.0(0.791a)
leaf 102 rtp_leaf2 n9000-11.0(0.791a)
leaf 103 rtp_leaf3 n9000-11.0(0.791a)
spine 202 rtp_spine2 n9000-11.0(0.791a)
spine 201 rtp-spine1 n9000-11.0(0.791a)

whoami

admin@rtp_apic1:~> whoami
admin

Check Active Links

cat /proc/net/bonding/bond0

admin@apic1:~> cat /proc/net/bonding/bond0


Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)


Primary Slave: None
Currently Active Slave: eth3-2
MII Status: up
MII Polling Interval (ms): 60
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth3-1


MII Status: down
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: 90:e2:ba:4b:fa:d4
Slave queue ID: 0

Slave Interface: eth3-2


MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 90:e2:ba:4b:fa:d5
Slave queue ID: 0
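Two lines in the bonding file answer the usual question, which uplink is carrying traffic: "Currently Active Slave" and the bond-level "MII Status". The sketch below extracts both with awk; the sample data mirrors the output above and is embedded for illustration, whereas on an APIC you would read /proc/net/bonding/bond0 directly.

```shell
# Sketch: pull the currently active slave and the bond-level MII status out of
# the bonding driver file. The sample below mirrors /proc/net/bonding/bond0;
# on an APIC, read the real file instead.
bond0_sample() {
cat <<'EOF'
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth3-2
MII Status: up
EOF
}

# Split on ": " so the value is field 2; "exit" keeps only the bond-level
# MII Status line (per-slave sections repeat the same key).
active=$(bond0_sample | awk -F': ' '/^Currently Active Slave/ { print $2 }')
status=$(bond0_sample | awk -F': ' '/^MII Status/ { print $2; exit }')
echo "active slave: $active (MII $status)"
```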

Show Link Status

ip link

admin@apic1:~> ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth1-1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP qlen
1000
link/ether 24:e9:b3:15:8e:5a brd ff:ff:ff:ff:ff:ff
3: eth1-2: <NO-CARRIER,BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc mq master bond1 state DOWN
qlen 1000
link/ether 24:e9:b3:15:8e:5a brd ff:ff:ff:ff:ff:ff
4: eth3-1: <NO-CARRIER,BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc mq master bond0 state DOWN
qlen 1000
link/ether 90:e2:ba:4b:fa:d4 brd ff:ff:ff:ff:ff:ff
5: eth3-2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen
1000
link/ether 90:e2:ba:4b:fa:d4 brd ff:ff:ff:ff:ff:ff
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether 90:e2:ba:4b:fa:d4 brd ff:ff:ff:ff:ff:ff
7: bond1: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master oobmgmt state
UP
link/ether 24:e9:b3:15:8e:5a brd ff:ff:ff:ff:ff:ff
8: oobmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether 24:e9:b3:15:8e:5a brd ff:ff:ff:ff:ff:ff
9: bond0.4093@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1496 qdisc noqueue state UP
link/ether 90:e2:ba:4b:fa:d4 brd ff:ff:ff:ff:ff:ff
10: lxcbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether 52:93:59:73:d3:d7 brd ff:ff:ff:ff:ff:ff
11: vxlan0: <BROADCAST,MULTICAST,100000> mtu 1500 qdisc noop state DOWN
link/ether 46:e9:7b:c9:3c:f7 brd ff:ff:ff:ff:ff:ff
12: tep0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether fe:b0:5b:8d:28:e4 brd ff:ff:ff:ff:ff:ff
13: tep1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether 1a:62:8c:72:f3:5a brd ff:ff:ff:ff:ff:ff
14: tep2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether aa:3f:ef:fb:b4:1a brd ff:ff:ff:ff:ff:ff
15: tep3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether 1e:d7:52:b4:34:c7 brd ff:ff:ff:ff:ff:ff
16: tep4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether 4e:69:6e:2f:8a:a7 brd ff:ff:ff:ff:ff:ff
17: tep5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether 3e:0c:10:9d:8a:79 brd ff:ff:ff:ff:ff:ff
18: tep6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether de:3f:b2:51:58:8c brd ff:ff:ff:ff:ff:ff
19: tep7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether 12:a9:3f:c0:26:dc brd ff:ff:ff:ff:ff:ff
20: teplo-1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether ca:9d:69:a3:61:4f brd ff:ff:ff:ff:ff:ff
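In the ip link output, each bond slave's carrier state is visible twice: LOWER_UP in the flag list and "state UP"/"state DOWN" in the attributes. The awk sketch below summarizes just the slave interfaces; the sample lines are embedded for illustration, and on an APIC you would pipe ip link directly.

```shell
# Sketch: summarize the link state of the bond slave interfaces from "ip link"
# output. Sample lines are embedded; on an APIC, pipe "ip link" directly.
ip_link_sample() {
cat <<'EOF'
2: eth1-1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP qlen 1000
3: eth1-2: <NO-CARRIER,BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc mq master bond1 state DOWN qlen 1000
4: eth3-1: <NO-CARRIER,BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc mq master bond0 state DOWN qlen 1000
5: eth3-2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
EOF
}

# Field 2 is the interface name (trailing colon stripped); the token after
# the literal "state" is the operational state.
ip_link_sample | awk '/SLAVE/ { gsub(":", "", $2); for (i = 1; i <= NF; i++) if ($i == "state") print $2, $(i+1) }'
```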

Show Commands

Note: Some of these show commands might not execute at this time. This is a list of items
for your reference.

● show aaa - show AAA
● show access - show fabric access policies
● show auditlog - show auditlog on current path
● show bgp - show BGP information
● show cdp - show Cisco Discovery Protocol information
● show controller - show controller node
● show eventlog - show event log on current path
● show fabric - show fabric details
● show fex - show FEX information
● show external-data-collectors - show external data collectors
● show faults - show faults current path
● show firmware - show firmware information
● show health - show health on current path
● show historical-record-policy - show historic record policies
● show import-export - show import/export
● show interface - show interface status and information
● show interface-policies - show interface policies
● show ip - show IP information
● show lldp - show information about LLDP
● show isis - show IS-IS status and configuration
● show l4-l7 - show L4-L7 services details
● show module - show module information
● show schedulers - show schedulers
● show switch - show switch node
● show tenant - show tenant
● show trafficmap - show trafficmap
● show version - show version
● show vmware - show VMware vCenter/vShield Controllers
● show vpc - show VPC information
