The HPE FlexNetwork 5520 HI Switch Series Configuration Guide provides detailed instructions for network management and monitoring, including the use of utilities like ping and tracert for connectivity testing. It covers the configuration of Network Quality Assurance (NQA) operations, templates, and various monitoring features. The document is intended for users of software version 6525 and later, and includes guidelines for troubleshooting and system debugging.

HPE FlexNetwork 5520 HI Switch Series
Network Management and Monitoring Configuration Guide

Part number: 5200-8310
Software version: Release 6525 and later
Document version: 6W100-20210810
© Copyright 2021 Hewlett Packard Enterprise Development LP
The information contained herein is subject to change without notice. The only warranties for Hewlett Packard
Enterprise products and services are set forth in the express warranty statements accompanying such
products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett
Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
Confidential computer software. Valid license from Hewlett Packard Enterprise required for possession, use, or
copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software
Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor’s
standard commercial license.
Links to third-party websites take you outside the Hewlett Packard Enterprise website. Hewlett Packard
Enterprise has no control over and is not responsible for information outside the Hewlett Packard Enterprise
website.
Acknowledgments
Intel®, Itanium®, Pentium®, Intel Inside®, and the Intel Inside logo are trademarks of Intel Corporation in the
United States and other countries.
Microsoft® and Windows® are either registered trademarks or trademarks of Microsoft Corporation in the
United States and/or other countries.
Adobe® and Acrobat® are trademarks of Adobe Systems Incorporated.
Java and Oracle are registered trademarks of Oracle and/or its affiliates.
UNIX® is a registered trademark of The Open Group.
Contents
Using ping, tracert, and system debugging ···················································· 1
Ping ···································································································································································· 1
About ping ·················································································································································· 1
Using a ping command to test network connectivity ·················································································· 1
Example: Using the ping utility ··················································································································· 2
Tracert ································································································································································ 2
About tracert··············································································································································· 2
Prerequisites ·············································································································································· 3
Using a tracert command to identify failed or all nodes in a path······························································· 4
Example: Using the tracert utility················································································································ 4
System debugging ············································································································································· 5
About system debugging···························································································································· 5
Debugging a feature module ······················································································································ 6
Configuring NQA ··························································································· 7
About NQA ························································································································································· 7
NQA operating mechanism ························································································································ 7
Collaboration with Track····························································································································· 7
Threshold monitoring ································································································································· 8
NQA tasks at a glance ······································································································································· 8
Configuring the NQA server ······························································································································· 9
Enabling the NQA client ····································································································································· 9
Configuring NQA operations on the NQA client ································································································· 9
NQA operations tasks at a glance·············································································································· 9
Configuring the ICMP echo operation ······································································································ 10
Configuring the ICMP jitter operation ······································································································· 11
Configuring the DHCP operation·············································································································· 12
Configuring the DNS operation ················································································································ 13
Configuring the FTP operation ················································································································· 14
Configuring the HTTP operation ·············································································································· 15
Configuring the UDP jitter operation ········································································································ 16
Configuring the SNMP operation ············································································································· 18
Configuring the TCP operation················································································································· 18
Configuring the UDP echo operation ······································································································· 19
Configuring the UDP tracert operation ····································································································· 20
Configuring the voice operation ··············································································································· 22
Configuring the DLSw operation ·············································································································· 24
Configuring the path jitter operation ········································································································· 24
Configuring optional parameters for the NQA operation ·········································································· 26
Configuring the collaboration feature ······································································································· 27
Configuring threshold monitoring ············································································································· 27
Configuring the NQA statistics collection feature ····················································································· 29
Configuring the saving of NQA history records ························································································ 30
Scheduling the NQA operation on the NQA client ··················································································· 31
Configuring NQA templates on the NQA client ································································································ 31
Restrictions and guidelines ······················································································································ 31
NQA template tasks at a glance··············································································································· 31
Configuring the ICMP template ················································································································ 32
Configuring the DNS template ················································································································· 33
Configuring the TCP template ·················································································································· 34
Configuring the TCP half open template ·································································································· 35
Configuring the UDP template ················································································································· 36
Configuring the HTTP template················································································································ 38
Configuring the HTTPS template ············································································································· 39
Configuring the FTP template ·················································································································· 41
Configuring the RADIUS template ··········································································································· 42
Configuring the SSL template ·················································································································· 43
Configuring optional parameters for the NQA template ··········································································· 44
Display and maintenance commands for NQA ································································································ 45
NQA configuration examples ··························································································································· 46
Example: Configuring the ICMP echo operation ······················································································ 46
Example: Configuring the ICMP jitter operation ······················································································· 47
Example: Configuring the DHCP operation······························································································ 50
Example: Configuring the DNS operation ································································································ 51
Example: Configuring the FTP operation ································································································· 52
Example: Configuring the HTTP operation ······························································································ 53
Example: Configuring the UDP jitter operation ························································································ 54
Example: Configuring the SNMP operation ····························································································· 57
Example: Configuring the TCP operation································································································· 58
Example: Configuring the UDP echo operation ······················································································· 59
Example: Configuring the UDP tracert operation ····················································································· 61
Example: Configuring the voice operation ······························································································· 62
Example: Configuring the DLSw operation ······························································································ 65
Example: Configuring the path jitter operation ························································································· 66
Example: Configuring NQA collaboration································································································· 68
Example: Configuring the ICMP template ································································································ 70
Example: Configuring the DNS template ································································································· 71
Example: Configuring the TCP template ·································································································· 72
Example: Configuring the TCP half open template ·················································································· 72
Example: Configuring the UDP template ································································································· 73
Example: Configuring the HTTP template································································································ 74
Example: Configuring the HTTPS template ····························································································· 75
Example: Configuring the FTP template ·································································································· 75
Example: Configuring the RADIUS template ··························································································· 76
Example: Configuring the SSL template ·································································································· 77
Configuring iNQA ························································································· 79
About iNQA ······················································································································································ 79
iNQA benefits ··········································································································································· 79
Basic concepts ········································································································································· 79
Application scenarios ······························································································································· 81
Operating mechanism ······························································································································ 83
Restrictions and guidelines: iNQA configuration ······························································································ 85
Instance restrictions and guidelines ················································································································· 85
Network and target flow requirements ····································································································· 85
Collaborating with other features ············································································································· 85
iNQA tasks at a glance····································································································································· 86
Prerequisites ···················································································································································· 86
Configuring a collector ········································································ 86
Configuring parameters for a collector ····································································································· 86
Configuring a collector instance ··············································································································· 87
Configuring an MP ··································································································································· 88
Enabling packet loss measurement ········································································································· 88
Configuring the analyzer ····································································· 89
Configuring parameters for the analyzer ·································································································· 89
Configuring an analyzer instance ············································································································· 89
Configuring an AMS ································································································································· 90
Enabling the measurement functionality ·································································································· 90
Configuring iNQA logging································································································································· 91
Display and maintenance commands for iNQA collector ················································································· 91
Display and maintenance commands for iNQA analyzer ················································································· 91
iNQA configuration examples ·························································································································· 92
Example: Configuring an end-to-end iNQA packet loss measurement ···················································· 92
Example: Configuring a point-to-point iNQA packet loss measurement ················································ 96
Configuring NTP ························································································ 102
About NTP······················································································································································ 102
NTP application scenarios ····················································································································· 102
NTP working mechanism ······················································································································· 102
NTP architecture ···································································································································· 103
NTP association modes ························································································································· 104
NTP security··········································································································································· 105
Protocols and standards ························································································································ 106
Restrictions and guidelines: NTP configuration ····························································································· 106
NTP tasks at a glance ···································································································································· 107
Enabling the NTP service······························································································································· 107
Configuring NTP association mode················································································································ 108
Configuring NTP in client/server mode ·································································································· 108
Configuring NTP in symmetric active/passive mode ·············································································· 108
Configuring NTP in broadcast mode ······································································································ 109
Configuring NTP in multicast mode········································································································ 110
Configuring the local clock as the reference source ······················································································ 110
Configuring access control rights ··················································································································· 111
Configuring NTP authentication ····················································································································· 111
Configuring NTP authentication in client/server mode ··········································································· 111
Configuring NTP authentication in symmetric active/passive mode ······················································ 113
Configuring NTP authentication in broadcast mode··············································································· 114
Configuring NTP authentication in multicast mode ················································································ 116
Controlling NTP message sending and receiving ·························································································· 117
Specifying a source address for NTP messages ··················································································· 117
Disabling an interface from receiving NTP messages ··········································································· 118
Configuring the maximum number of dynamic associations ·································································· 118
Setting a DSCP value for NTP packets·································································································· 119
Specifying the NTP time-offset thresholds for log and trap outputs ······························································· 119
Display and maintenance commands for NTP ······························································································· 120
NTP configuration examples ·························································································································· 120
Example: Configuring NTP client/server association mode ··································································· 120
Example: Configuring IPv6 NTP client/server association mode ··························································· 121
Example: Configuring NTP symmetric active/passive association mode··············································· 123
Example: Configuring IPv6 NTP symmetric active/passive association mode ····································· 124
Example: Configuring NTP broadcast association mode ······································································· 125
Example: Configuring NTP multicast association mode ········································································ 127
Example: Configuring IPv6 NTP multicast association mode ································································ 130
Example: Configuring NTP authentication in client/server association mode ········································ 133
Example: Configuring NTP authentication in broadcast association mode············································ 135
Configuring SNTP ······················································································ 138
About SNTP ··················································································································································· 138
SNTP working mode ······························································································································ 138
Protocols and standards ························································································································ 138
Restrictions and guidelines: SNTP configuration ··························································································· 138
SNTP tasks at a glance·································································································································· 138
Enabling the SNTP service ···························································································································· 138
Specifying an NTP server for the device ········································································································ 139
Configuring SNTP authentication ··················································································································· 139
Specifying the SNTP time-offset thresholds for log and trap outputs····························································· 140
Display and maintenance commands for SNTP ···························································································· 140
SNTP configuration examples ······················································································································· 141
Example: Configuring SNTP ·················································································································· 141
Configuring PoE ························································································ 143
About PoE ······················································································································································ 143
PoE system ············································································································································ 143
AI-driven PoE ········································································································································· 143
Protocols and standards ························································································································ 144
Restrictions: Hardware compatibility with PoE ······························································································· 144
Restrictions and guidelines: PoE configuration ······························································································ 144
PoE configuration tasks at a glance ··············································································································· 145
Prerequisites for configuring PoE ·················································································································· 145
Enabling PoE for a PI ····································································································································· 145
Enabling AI-driven PoE ·································································································································· 146
Configuring PoE delay ··································································································································· 146
Disabling PoE on shutdown interfaces ·········································································································· 147
Enabling nonstandard PD detection··············································································································· 147
Configuring the PoE power ···························································································································· 148
Configuring the maximum PSE power ··································································································· 148
Configuring the maximum PI power ······································································································· 148
Configuring the PI priority policy ···················································································································· 149
Configuring PSE power monitoring ················································································································ 149
Configuring a PI by using a PoE profile ········································································································· 150
Restrictions and guidelines ···················································································································· 150
Configuring a PoE profile ······················································································································· 150
Applying a PoE profile ···························································································································· 150
Upgrading PSE firmware in service ··············································································································· 151
Enabling forced power supply ························································································································ 151
Display and maintenance commands for PoE ······························································································· 152
PoE configuration examples ·························································································································· 152
Example: Configuring PoE ····················································································································· 152
Troubleshooting PoE······································································································································ 153
Failure to set the priority of a PI to critical ······························································································ 153
Failure to apply a PoE profile to a PI······································································································ 154
Configuring SNMP ····················································································· 155
About SNMP ·················································································································································· 155
SNMP framework ··································································································································· 155
MIB and view-based MIB access control ······························································································· 155
SNMP operations ··································································································································· 156
Protocol versions···································································································································· 156
Access control modes ···························································································································· 156
FIPS compliance ············································································································································ 156
SNMP tasks at a glance ································································································································· 157
Enabling the SNMP agent ······························································································································ 157
Enabling SNMP versions ······························································································································· 157
Configuring SNMP common parameters ······································································································· 158
Configuring an SNMPv1 or SNMPv2c community ························································································· 159
About configuring an SNMPv1 or SNMPv2c community ······································································· 159
Restrictions and guidelines for configuring an SNMPv1 or SNMPv2c community································· 159
Configuring an SNMPv1/v2c community by a community name ··························································· 159
Configuring an SNMPv1/v2c community by creating an SNMPv1/v2c user ·········································· 160
Configuring an SNMPv3 group and user ······································································································· 160
Restrictions and guidelines for configuring an SNMPv3 group and user ··············································· 160
Configuring an SNMPv3 group and user in non-FIPS mode ································································· 161
Configuring an SNMPv3 group and user in FIPS mode········································································· 162
Configuring SNMP notifications ····················································································································· 162
About SNMP notifications ······················································································································ 162
Enabling SNMP notifications ·················································································································· 162
Configuring parameters for sending SNMP notifications ······································································· 163
Configuring SNMP logging ····························································································································· 165
Setting the interval at which the SNMP module examines the system configuration for changes ················· 165
Display and maintenance commands for SNMP ··························································································· 166
SNMP configuration examples ······················································································································· 167
Example: Configuring SNMPv1/SNMPv2c····························································································· 167
Example: Configuring SNMPv3·············································································································· 168
Configuring RMON ···················································································· 171
About RMON ·················································································································································· 171
RMON working mechanism ··················································································································· 171
RMON groups ········································································································································ 171
Sample types for the alarm group and the private alarm group ····························································· 173
Protocols and standards ························································································································ 173
Configuring the RMON statistics function ······································································································ 173
About the RMON statistics function ······································································································· 173
Creating an RMON Ethernet statistics entry ·························································································· 173
Creating an RMON history control entry ································································································ 173
Configuring the RMON alarm function ··········································································································· 174
Display and maintenance commands for RMON ··························································································· 175
RMON configuration examples ······················································································································ 176
Example: Configuring the Ethernet statistics function ············································································ 176
Example: Configuring the history statistics function ··············································································· 176
Example: Configuring the alarm function ······························································································· 177
Configuring the Event MIB ········································································· 180
About the Event MIB ······································································································································ 180
Trigger ···················································································································································· 180
Monitored objects ··································································································································· 180
Trigger test ············································································································································· 180
Event actions·········································································································································· 181
Object list ··············································································································································· 181
Object owner ·········································································································································· 182
Restrictions and guidelines: Event MIB configuration ···················································································· 182
Event MIB tasks at a glance··························································································································· 182
Prerequisites for configuring the Event MIB ··································································································· 182
Configuring the Event MIB global sampling parameters ················································································ 183
Configuring Event MIB object lists ················································································································· 183
Configuring an event ······································································································································ 183
Creating an event ··································································································································· 183
Configuring a set action for an event ····································································································· 184
Configuring a notification action for an event ························································································· 184
Enabling the event ································································································································· 185
Configuring a trigger······································································································································· 185
Creating a trigger and configuring its basic parameters········································································· 185
Configuring a Boolean trigger test·········································································································· 186
Configuring an existence trigger test······································································································ 186
Configuring a threshold trigger test ········································································································ 187
Enabling trigger sampling······················································································································· 188
Enabling SNMP notifications for the Event MIB module ················································································ 188
Display and maintenance commands for Event MIB ····················································································· 189
Event MIB configuration examples················································································································· 189
Example: Configuring an existence trigger test······················································································ 189
Example: Configuring a Boolean trigger test·························································································· 191
Example: Configuring a threshold trigger test ························································································ 194
Configuring NETCONF ·············································································· 197
About NETCONF ··········································································································································· 197
NETCONF structure ······························································································································· 197
NETCONF message format ··················································································································· 197
How to use NETCONF ··························································································································· 199
Protocols and standards ························································································································ 199
FIPS compliance ············································································································································ 199
NETCONF tasks at a glance ·························································································································· 199
Establishing a NETCONF session ················································································································· 200
Restrictions and guidelines for NETCONF session establishment ························································ 200
Setting NETCONF session attributes····································································································· 200
Establishing NETCONF over SOAP sessions ······················································································· 202
Establishing NETCONF over SSH sessions ·························································································· 203
Establishing NETCONF over Telnet or NETCONF over console sessions ··········································· 203
Exchanging capabilities·························································································································· 204
Retrieving device configuration information ··································································································· 204
Restrictions and guidelines for device configuration retrieval ································································ 204
Retrieving device configuration and state information ··········································································· 205
Retrieving non-default settings··············································································································· 207
Retrieving NETCONF information ·········································································································· 208
Retrieving YANG file content ················································································································· 208
Retrieving NETCONF session information····························································································· 209
Example: Retrieving a data entry for the interface table ········································································ 209
Example: Retrieving non-default configuration data ·············································································· 211
Example: Retrieving syslog configuration data ······················································································ 212
Example: Retrieving NETCONF session information············································································· 213
Filtering data ·················································································································································· 214
About data filtering ································································································································· 214
Restrictions and guidelines for data filtering ·························································································· 214
Table-based filtering······························································································································· 214
Column-based filtering ··························································································································· 215
Example: Filtering data with regular expression match·········································································· 217
Example: Filtering data by conditional match························································································· 219
Locking or unlocking the running configuration ······························································································ 220
About configuration locking and unlocking ····························································································· 220
Restrictions and guidelines for configuration locking and unlocking ······················································ 220
Locking the running configuration ·········································································································· 220
Unlocking the running configuration ······································································································· 220
Example: Locking the running configuration ·························································································· 221
Modifying the configuration ···························································································································· 222
About the <edit-config> operation ·········································································································· 222
Procedure··············································································································································· 222
Example: Modifying the configuration ···································································································· 223
Saving the running configuration ··················································································································· 224
About the <save> operation ··················································································································· 224
Restrictions and guidelines ···················································································································· 224
Procedure··············································································································································· 224
Example: Saving the running configuration···························································································· 225
Loading the configuration ······························································································································· 226
About the <load> operation ···················································································································· 226
Restrictions and guidelines ···················································································································· 226
Procedure··············································································································································· 226
Rolling back the configuration ························································································································ 226
Restrictions and guidelines ···················································································································· 226
Rolling back the configuration based on a configuration file ·································································· 227
Rolling back the configuration based on a rollback point ······································································· 227
Performing CLI operations through NETCONF······························································································ 231
About CLI operations through NETCONF ······························································································ 231
Restrictions and guidelines ···················································································································· 231
Procedure··············································································································································· 231
Example: Performing CLI operations ····································································································· 232
Subscribing to events ····································································································································· 233
About event subscription ························································································································ 233
Restrictions and guidelines ···················································································································· 233
Subscribing to syslog events·················································································································· 233
Subscribing to events monitored by NETCONF····················································································· 234
Subscribing to events reported by modules ··························································································· 235
Canceling an event subscription ············································································································ 236
Example: Subscribing to syslog events·································································································· 237
Terminating NETCONF sessions ··················································································································· 238
About NETCONF session termination ··································································································· 238
Procedure··············································································································································· 238
Example: Terminating another NETCONF session ··············································································· 238
Returning to the CLI ······································································································································· 239
Supported NETCONF operations ······························································ 240
action······················································································································································ 240
CLI·························································································································································· 240
close-session ········································································································································· 241
edit-config: create··································································································································· 241
edit-config: delete ··································································································································· 242
edit-config: merge ·································································································································· 242
edit-config: remove································································································································· 242
edit-config: replace ································································································································· 243
edit-config: test-option ···················································································································· 243
edit-config: default-operation·················································································································· 244
edit-config: error-option ·························································································································· 245
edit-config: incremental ·························································································································· 246
get ·························································································································································· 246
get-bulk ·················································································································································· 247
get-bulk-config········································································································································ 247
get-config ··············································································································································· 248
get-sessions ··········································································································································· 248
kill-session·············································································································································· 248
load ························································································································································ 249
lock ························································································································································· 249
rollback ··················································································································································· 249
save························································································································································ 250
unlock ····················································································································································· 250
Configuring CWMP ···················································································· 251
About CWMP ················································································································································· 251
CWMP network framework ···················································································································· 251
Basic CWMP functions··························································································································· 251
How CWMP works ································································································································· 253
Restrictions and guidelines: CWMP configuration ························································································· 255
CWMP tasks at a glance ································································································································ 255
Enabling CWMP from the CLI ························································································································ 256
Configuring ACS attributes····························································································································· 256
About ACS attributes······························································································································ 256
Configuring the preferred ACS attributes ······························································································· 256
Configuring the default ACS attributes from the CLI ·············································································· 257
Configuring CPE attributes····························································································································· 258
About CPE attributes······························································································································ 258
Specifying an SSL client policy for HTTPS connection to ACS ····························································· 258
Configuring ACS authentication parameters ·························································································· 258
Configuring the provision code··············································································································· 259
Configuring the CWMP connection interface ························································································· 259
Configuring autoconnect parameters ····································································································· 260
Setting the close-wait timer ···················································································································· 261
Display and maintenance commands for CWMP ·························································································· 261
CWMP configuration examples ······················································································································ 261
Example: Configuring CWMP ················································································································ 261
Configuring EAA ························································································ 270
About EAA······················································································································································ 270
EAA framework ······································································································································ 270
Elements in a monitor policy ·················································································································· 271
EAA environment variables ···················································································································· 272
Configuring a user-defined EAA environment variable ·················································································· 273
Configuring a monitor policy··························································································································· 274
Restrictions and guidelines ···················································································································· 274
Configuring a monitor policy from the CLI ······························································································ 274
Configuring a monitor policy by using Tcl ······························································································ 275
Suspending monitor policies ·························································································································· 276
Display and maintenance commands for EAA ······························································································· 277
EAA configuration examples ·························································································································· 277
Example: Configuring a CLI event monitor policy by using Tcl ······························································ 277
Example: Configuring a CLI event monitor policy from the CLI ····························································· 278
Example: Configuring a track event monitor policy from the CLI ··························································· 279
Example: Configuring a CLI event monitor policy with EAA environment variables from the CLI ·········· 281
Monitoring and maintaining processes ······················································· 283
About monitoring and maintaining processes ································································································ 283
Process monitoring and maintenance tasks at a glance ················································································ 283
Starting or stopping a third-party process ······································································································ 283
About third-party processes ··················································································································· 283

Starting a third-party process ················································································································· 284
Stopping a third-party process ··············································································································· 284
Monitoring and maintaining processes ·········································································································· 284
Monitoring and maintaining user processes ·································································································· 285
About monitoring and maintaining user processes ················································································ 285
Configuring core dump ··························································································································· 285
Display and maintenance commands for user processes······································································ 286
Monitoring and maintaining kernel threads ···································································································· 286
Configuring kernel thread deadloop detection ······················································································· 286
Configuring kernel thread starvation detection······················································································· 287
Display and maintenance commands for kernel threads ······································································· 287
Configuring port mirroring ·········································································· 289
About port mirroring ······································································································································· 289
Terminology ··········································································································································· 289
Port mirroring classification ···················································································································· 290
Local port mirroring (SPAN) ··················································································································· 290
Layer 2 remote port mirroring (RSPAN) ································································································· 290
Restrictions and guidelines: Port mirroring configuration ··············································································· 292
Configuring local port mirroring (SPAN) ········································································································· 292
Restrictions and guidelines for local port mirroring configuration··························································· 292
Local port mirroring tasks at a glance ···································································································· 292
Creating a local mirroring group ············································································································· 293
Configuring mirroring sources ················································································································ 293
Configuring the monitor port··················································································································· 293
Configuring local port mirroring group with multiple monitoring devices ························································ 294
Configuring Layer 2 remote port mirroring (RSPAN) ····················································································· 295
Restrictions and guidelines for Layer 2 remote port mirroring configuration ·········································· 295
Layer 2 remote port mirroring with a fixed reflector port configuration tasks at a glance ······················· 296
Layer 2 remote port mirroring with reflector port configuration task list ················································· 296
Layer 2 remote port mirroring with egress port configuration task list···················································· 296
Creating a remote destination group ······································································································ 297
Configuring the monitor port··················································································································· 297
Configuring the remote probe VLAN ······································································································ 298
Assigning the monitor port to the remote probe VLAN··········································································· 298
Creating a remote source group ············································································································ 298
Configuring mirroring sources ················································································································ 299
Configuring the reflector port·················································································································· 299
Configuring the egress port ···················································································································· 300
Display and maintenance commands for port mirroring ················································································ 301
Port mirroring configuration examples ··········································································································· 301
Example: Configuring local port mirroring ······························································································ 301
Example: Configuring local port mirroring with multiple monitoring devices ·········································· 302
Example: Configuring Layer 2 remote port mirroring (RSPAN with reflector port) ································· 303
Example: Configuring Layer 2 remote port mirroring (RSPAN with egress port) ··································· 306
Configuring flow mirroring ·········································································· 309
About flow mirroring ······································································································································· 309
Types of flow-mirroring traffic to an interface ································································································· 309
Flow mirroring SPAN or RSPAN ············································································································ 309
Flow mirroring ERSPAN························································································································· 310
Restrictions and guidelines: Flow mirroring configuration ·············································································· 311
Flow mirroring tasks at a glance ···················································································································· 311
Configuring a traffic class ······························································································································· 311
Configuring a traffic behavior ························································································································· 312
Configuring a QoS policy ······························································································································· 312
Applying a QoS policy ···································································································································· 313
Applying a QoS policy to an interface ···································································································· 313
Applying a QoS policy to a VLAN··········································································································· 313
Applying a QoS policy globally ··············································································································· 313
Flow mirroring configuration examples ·········································································································· 314
Example: Configuring flow mirroring ······································································································ 314

Configuring sFlow ······················································································ 316
About sFlow ··················································································································································· 316
Protocols and standards ································································································································ 316
Configuring basic sFlow information ·············································································································· 316
Configuring flow sampling ······························································································································ 317
Configuring counter sampling ························································································································ 318
Display and maintenance commands for sFlow ···························································································· 318
sFlow configuration examples ························································································································ 318
Example: Configuring sFlow ·················································································································· 318
Troubleshooting sFlow ··································································································································· 320
The remote sFlow collector cannot receive sFlow packets ···································································· 320
Configuring the information center ····························································· 321
About the information center ·························································································································· 321
Log types················································································································································ 321
Log levels ··············································································································································· 321
Log destinations ····································································································································· 322
Default output rules for logs ··················································································································· 322
Default output rules for diagnostic logs ·································································································· 322
Default output rules for security logs ······································································································ 322
Default output rules for hidden logs ······································································································· 323
Default output rules for trace logs ·········································································································· 323
Log formats and field descriptions ········································································································· 323
FIPS compliance ············································································································································ 326
Information center tasks at a glance ·············································································································· 326
Managing standard system logs ············································································································ 326
Managing hidden logs ···························································································································· 327
Managing security logs ·························································································································· 327
Managing diagnostic logs······················································································································· 327
Managing trace logs ······························································································································· 328
Enabling the information center ····················································································································· 328
Outputting logs to various destinations ·········································································································· 328
Outputting logs to the console················································································································ 328
Outputting logs to the monitor terminal ·································································································· 329
Outputting logs to log hosts···················································································································· 330
Outputting logs to the log buffer ············································································································· 331
Saving logs to the log file ······················································································································· 331
Setting the minimum storage period for logs ································································································· 332
Enabling synchronous information output ······································································································ 333
Configuring log suppression··························································································································· 333
Enabling duplicate log suppression········································································································ 333
Configuring log suppression for a module······························································································ 334
Disabling an interface from generating link up or link down logs ··························································· 334
Enabling SNMP notifications for system logs································································································· 334
Managing security logs ·································································································································· 335
Saving security logs to the security log file ···························································································· 335
Managing the security log file················································································································· 336
Saving diagnostic logs to the diagnostic log file ····························································································· 336
Setting the maximum size of the trace log file································································································ 337
Display and maintenance commands for information center ········································································· 337
Information center configuration examples ···································································································· 338
Example: Outputting logs to the console································································································ 338
Example: Outputting logs to a UNIX log host ························································································· 339
Example: Outputting logs to a Linux log host ························································································· 340
Configuring the packet capture ·································································· 342
About packet capture ····································································································································· 342
Building a capture filter rule···························································································································· 342
Capture filter rule keywords ··················································································································· 342
Capture filter rule operators ··················································································································· 343
Capture filter rule expressions ··············································································································· 345

Building a display filter rule ···························································································································· 346
Display filter rule keywords ···················································································································· 346
Display filter rule operators ···················································································································· 347
Display filter rule expressions ················································································································ 348
Restrictions and guidelines: Packet capture ·································································································· 349
Installing feature image ·································································································································· 349
Configuring feature image-based packet capture ·························································································· 349
Restrictions and guidelines ···················································································································· 349
Saving captured packets to a file ··········································································································· 349
Displaying specific captured packets ····································································································· 349
Stopping packet capture ································································································································ 350
Displaying the contents in a packet file ·········································································································· 350
Packet capture configuration examples ········································································································· 350
Example: Configuring feature image-based packet capture ·································································· 350
Configuring VCF fabric ·············································································· 354
About VCF fabric ············································································································································ 354
VCF fabric topology································································································································ 354
Neutron overview ··································································································································· 356
Automated VCF fabric deployment ········································································································ 358
Process of automated VCF fabric deployment······················································································· 359
Template file··········································································································································· 359
VCF fabric task at a glance ···························································································································· 360
Configuring automated VCF fabric deployment ····························································································· 360
Enabling VCF fabric topology discovery ········································································································ 361
Configuring automated underlay network deployment ··················································································· 362
Specify the template file for automated underlay network deployment ·················································· 362
Specifying the role of the device in the VCF fabric ················································································ 362
Configuring the device as a master spine node ····················································································· 363
Setting the NETCONF username and password ··················································································· 363
Pausing automated underlay network deployment ················································································ 363
Configuring automated overlay network deployment ····················································································· 364
Restrictions and guidelines for automated overlay network deployment ··············································· 364
Automated overlay network deployment tasks at a glance ···································································· 364
Prerequisites for automated overlay network deployment ····································································· 365
Configuring parameters for the device to communicate with RabbitMQ servers ··································· 365
Specifying the network type ··················································································································· 366
Enabling L2 agent ·································································································································· 366
Enabling L3 agent ·································································································································· 367
Configuring the border node ·················································································································· 367
Enabling local proxy ARP······················································································································· 368
Configuring the MAC address of VSI interfaces····················································································· 368
Display and maintenance commands for VCF fabric ····················································································· 369
Using Ansible for automated configuration management ··························· 370
About Ansible ················································································································································· 370
Ansible network architecture ·················································································································· 370
How Ansible works ································································································································· 370
Restrictions and guidelines ···························································································································· 370
Configuring the device for management with Ansible ···················································································· 371
Device setup examples for management with Ansible ·················································································· 371
Example: Setting up the device for management with Ansible ······························································ 371
Configuring Chef ························································································ 373
About Chef ····················································································································································· 373
Chef network framework ························································································································ 373
Chef resources ······································································································································· 374
Chef configuration file ···························································································································· 374
Restrictions and guidelines: Chef configuration ····························································································· 375
Prerequisites for Chef ···································································································································· 376
Starting Chef ·················································································································································· 376
Configuring the Chef server ··················································································································· 376

Configuring a workstation······················································································································· 376
Configuring a Chef client ························································································································ 376
Shutting down Chef ········································································································································ 377
Chef configuration examples·························································································································· 377
Example: Configuring Chef ···················································································································· 377
Chef resources ·························································································· 380
netdev_device ················································································································································ 380
netdev_interface············································································································································· 380
netdev_l2_interface ········································································································································ 382
netdev_l2vpn ·················································································································································· 383
netdev_lagg···················································································································································· 383
netdev_vlan ···················································································································································· 384
netdev_vsi ······················································································································································ 385
netdev_vte······················································································································································ 386
netdev_vxlan ·················································································································································· 386
Configuring Puppet ···················································································· 388
About Puppet ················································································································································· 388
Puppet network framework ···················································································································· 388
Puppet resources ··································································································································· 389
Restrictions and guidelines: Puppet configuration ························································································· 389
Prerequisites for Puppet································································································································· 389
Starting Puppet ·············································································································································· 390
Configuring resources ···························································································································· 390
Configuring a Puppet agent ··················································································································· 390
Signing a certificate for the Puppet agent ······························································································ 390
Shutting down Puppet on the device ············································································································· 390
Puppet configuration examples ······················································································································ 391
Example: Configuring Puppet ················································································································ 391
Puppet resources······················································································· 392
netdev_device ················································································································································ 392
netdev_interface············································································································································· 393
netdev_l2_interface ········································································································································ 394
netdev_l2vpn ·················································································································································· 395
netdev_lagg···················································································································································· 396
netdev_vlan ···················································································································································· 397
netdev_vsi ······················································································································································ 398
netdev_vte······················································································································································ 398
netdev_vxlan ·················································································································································· 399
Configuring EPS agent ·············································································· 401
About EPS and EPS agent ···························································································································· 401
Restrictions: Hardware compatibility with EPS agent ···················································································· 401
Configuring EPS agent··································································································································· 402
Display and maintenance commands for EPS agent ····················································································· 402
EPS agent configuration examples ················································································································ 402
Example: Configuring EPS agent··········································································································· 402
Configuring eMDI ······················································································· 404
About eMDI ···················································································································································· 404
eMDI implementation ····························································································································· 404
Monitored items······································································································································ 405
Restrictions and guidelines ···························································································································· 408
Prerequisites ·················································································································································· 409
Configure eMDI ·············································································································································· 409
Configuring an eMDI instance ················································································································ 409
Starting an eMDI instance ······················································································································ 410
Stopping an eMDI instance ···················································································································· 410
Display and maintenance commands for eMDI ····························································································· 411
eMDI configuration examples ························································································································ 411

Example: Configuring an eMDI instance for a UDP data flow································································ 411
Configuring SQA ························································································ 413
About SQA ····················································································································································· 413
Restrictions and guidelines: SQA configuration ····························································································· 413
Configuring SQA ············································································································································ 413
Display and maintenance commands for SQA ······························································································ 413
SQA configuration examples·························································································································· 414
Example: Configuring SQA ···················································································································· 414
Document conventions and icons ······························································ 416
Conventions ··················································································································································· 416
Network topology icons ·································································································································· 417
Support and other resources ····································································· 418
Accessing Hewlett Packard Enterprise Support····························································································· 418
Accessing updates ········································································································································· 418
Websites ················································································································································ 419
Customer self repair ······························································································································· 419
Remote support······································································································································ 419
Documentation feedback ······················································································································· 419
Index·········································································································· 421

Using ping, tracert, and system debugging
This chapter describes how to use the ping and tracert utilities and how to debug the system.

Ping
About ping
Use the ping utility to determine if an address is reachable.
Ping sends ICMP echo requests (ECHO-REQUEST) to the destination device. Upon receiving the
requests, the destination device responds with ICMP echo replies (ECHO-REPLY) to the source
device. The source device outputs statistics about the ping operation, including the number of
packets sent, number of echo replies received, and the round-trip time. You can measure the
network performance by analyzing these statistics.
You can use the ping -r command to display the routers through which the ICMP echo requests have passed. Figure 1 shows how the ping -r command works:
1. The source device (Device A) sends an ICMP echo request to the destination device (Device C)
with the RR option empty.
2. The intermediate device (Device B) adds the IP address of its outbound interface (1.1.2.1) to
the RR option of the ICMP echo request, and forwards the packet.
3. Upon receiving the request, the destination device copies the RR option in the request and
adds the IP address of its outbound interface (1.1.2.2) to the RR option. Then the destination
device sends an ICMP echo reply.
4. The intermediate device adds the IP address of its outbound interface (1.1.1.2) to the RR option
in the ICMP echo reply, and then forwards the reply.
5. Upon receiving the reply, the source device adds the IP address of its inbound interface (1.1.1.1)
to the RR option. The detailed information of routes from Device A to Device C is formatted as:
1.1.1.1 <-> {1.1.1.2; 1.1.2.1} <-> 1.1.2.2.
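As a quick illustration, the following command runs this record-route test from Device A in the topology of Figure 1. The reply output lists the recorded addresses in the order they were added (1.1.2.1, 1.1.2.2, 1.1.1.2, and 1.1.1.1); the exact output layout depends on the software version and is not reproduced here.
# Ping Device C and record the route taken by the echo request and reply.
<DeviceA> ping -r 1.1.2.2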
Figure 1 Ping operation
[Diagram: Device A (1.1.1.1/24), Device B (1.1.1.2/24, 1.1.2.1/24), and Device C (1.1.2.2/24) connected in series. The RR option records 1.1.2.1, 1.1.2.2, 1.1.1.2, and 1.1.1.1 as the echo request and reply traverse the path.]
Using a ping command to test network connectivity


Perform the following tasks in any view:
• Determine if an IPv4 address is reachable.

ping [ ip ] [ -a source-ip | -c count | -f | -h ttl | -i interface-type
interface-number | -m interval | -n | -p pad | -q | -r | -s packet-size | -t
timeout | -tos tos | -v | -vpn-instance vpn-instance-name ] * host
Increase the timeout time (indicated by the -t keyword) on a low-speed network.
• Determine if an IPv6 address is reachable.
ping ipv6 [ -a source-ipv6 | -c count | -i interface-type
interface-number | -m interval | -q | -s packet-size | -t timeout | -tc
traffic-class | -v | -vpn-instance vpn-instance-name ] * host
Increase the timeout time (indicated by the -t keyword) on a low-speed network.
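For example, the following commands (addresses and values are illustrative) test an IPv4 address with a longer timeout and an IPv6 address with a specific source address and probe count:
# Ping 192.168.1.1 with a 4000-millisecond timeout for each echo reply.
<Sysname> ping -t 4000 192.168.1.1
# Ping 2001:db8::2, sending 10 echo requests sourced from 2001:db8::1.
<Sysname> ping ipv6 -c 10 -a 2001:db8::1 2001:db8::2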

Example: Using the ping utility


Network configuration
As shown in Figure 2, determine if Device A and Device C can reach each other.
Figure 2 Network diagram
[Diagram: Device A (1.1.1.1/24), Device B (1.1.1.2/24, 1.1.2.1/24), and Device C (1.1.2.2/24) connected in series]

Procedure
# Test the connectivity between Device A and Device C.
<DeviceA> ping 1.1.2.2
Ping 1.1.2.2 (1.1.2.2): 56 data bytes, press CTRL+C to break
56 bytes from 1.1.2.2: icmp_seq=0 ttl=254 time=2.137 ms
56 bytes from 1.1.2.2: icmp_seq=1 ttl=254 time=2.051 ms
56 bytes from 1.1.2.2: icmp_seq=2 ttl=254 time=1.996 ms
56 bytes from 1.1.2.2: icmp_seq=3 ttl=254 time=1.963 ms
56 bytes from 1.1.2.2: icmp_seq=4 ttl=254 time=1.991 ms

--- Ping statistics for 1.1.2.2 ---


5 packet(s) transmitted, 5 packet(s) received, 0.0% packet loss
round-trip min/avg/max/std-dev = 1.963/2.028/2.137/0.062 ms

The output shows the following information:


• Device A sends five ICMP packets to Device C and Device A receives five ICMP packets.
• No ICMP packet is lost.
• The route is reachable.

Tracert
About tracert
Tracert (also called Traceroute) enables retrieval of the IP addresses of Layer 3 devices in the path
to a destination. In the event of network failure, use tracert to test network connectivity and identify
failed nodes.

Figure 3 Tracert operation
[Diagram: Device A (1.1.1.1/24), Device B (1.1.1.2/24, 1.1.2.1/24), Device C (1.1.2.2/24, 1.1.3.1/24), and Device D (1.1.3.2/24) connected in series. Packets sent with hop limits of 1 and 2 trigger TTL exceeded messages from Device B and Device C; the final packet triggers a UDP port unreachable message from Device D.]
Tracert uses received ICMP error messages to get the IP addresses of devices. Tracert works as
shown in Figure 3:
1. The source device sends a UDP packet with a TTL value of 1 to the destination device. The
destination UDP port is not used by any application on the destination device.
2. The first hop (Device B, the first Layer 3 device that receives the packet) responds by sending a
TTL-expired ICMP error message to the source, with its IP address (1.1.1.2) encapsulated. This
way, the source device can get the address of the first Layer 3 device (1.1.1.2).
3. The source device sends a packet with a TTL value of 2 to the destination device.
4. The second hop (Device C) responds with a TTL-expired ICMP error message, which gives the
source device the address of the second Layer 3 device (1.1.2.2).
5. This process continues until a packet sent by the source device reaches the ultimate
destination device. Because no application uses the destination port specified in the packet, the
destination device responds with a port-unreachable ICMP message to the source device, with
its IP address encapsulated. This way, the source device gets the IP address of the destination
device (1.1.3.2).
6. The source device determines that:
◦ The packet has reached the destination device after receiving the port-unreachable ICMP message.
◦ The path to the destination device is 1.1.1.2 to 1.1.2.2 to 1.1.3.2.

Prerequisites
Before you use a tracert command, perform the tasks in this section.
For an IPv4 network:
• Enable sending of ICMP timeout packets on the intermediate devices (devices between the
source and destination devices). If the intermediate devices are HPE devices, execute the ip
ttl-expires enable command on the devices. For more information about this command,
see Layer 3—IP Services Command Reference.
• Enable sending of ICMP destination unreachable packets on the destination device. If the
destination device is an HPE device, execute the ip unreachables enable command. For
more information about this command, see Layer 3—IP Services Command Reference.
For an IPv6 network:
• Enable sending of ICMPv6 timeout packets on the intermediate devices (devices between the
source and destination devices). If the intermediate devices are HPE devices, execute the

ipv6 hoplimit-expires enable command on the devices. For more information about
this command, see Layer 3—IP Services Command Reference.
• Enable sending of ICMPv6 destination unreachable packets on the destination device. If the
destination device is an HPE device, execute the ipv6 unreachables enable command.
For more information about this command, see Layer 3—IP Services Command Reference.
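For example, on an IPv6 network, the following sketch enables the required ICMPv6 messages (device names are illustrative):
# On each HPE intermediate device, enable sending of ICMPv6 time exceeded messages.
<DeviceB> system-view
[DeviceB] ipv6 hoplimit-expires enable
# On the HPE destination device, enable sending of ICMPv6 destination unreachable messages.
<DeviceC> system-view
[DeviceC] ipv6 unreachables enable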

Using a tracert command to identify failed or all nodes in a path
Perform the following tasks in any view:
• Trace the route to an IPv4 destination.
tracert [ -a source-ip | -f first-ttl | -m max-ttl | -p port | -q
packet-number | -t tos | -vpn-instance vpn-instance-name [ -resolve-as
{ global | none | vpn } ] | -w timeout ] * host
• Trace the route to an IPv6 destination.
tracert ipv6 [ -a source-ipv6 | -f first-hop | -m max-hops | -p port | -q
packet-number | -t traffic-class | -vpn-instance vpn-instance-name
[ -resolve-as { global | none | vpn } ] | -w timeout ] * host
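For example, the following command (destination and values are illustrative) traces the route to 1.1.3.2, sending three probes per hop and limiting the trace to a maximum of 10 hops:
<Sysname> tracert -q 3 -m 10 1.1.3.2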

Example: Using the tracert utility


Network configuration
As shown in Figure 4, Device A failed to Telnet to Device C.
Test the network connectivity between Device A and Device C. If they cannot reach each other,
locate the failed nodes in the network.
Figure 4 Network diagram
[Diagram: Device A (1.1.1.1/24), Device B (1.1.1.2/24, 1.1.2.1/24), and Device C (1.1.2.2/24) connected in series]

Procedure
1. Configure IP addresses for the devices as shown in Figure 4.
2. Configure a static route on Device A.
<DeviceA> system-view
[DeviceA] ip route-static 0.0.0.0 0.0.0.0 1.1.1.2
3. Test connectivity between Device A and Device C.
[DeviceA] ping 1.1.2.2
Ping 1.1.2.2 (1.1.2.2): 56 data bytes, press CTRL+C to break
Request time out
Request time out
Request time out
Request time out
Request time out

--- Ping statistics for 1.1.2.2 ---


5 packet(s) transmitted,0 packet(s) received,100.0% packet loss
The output shows that Device A and Device C cannot reach each other.

4. Identify failed nodes:
# Enable sending of ICMP timeout packets on Device B.
<DeviceB> system-view
[DeviceB] ip ttl-expires enable
# Enable sending of ICMP destination unreachable packets on Device C.
<DeviceC> system-view
[DeviceC] ip unreachables enable
# Identify failed nodes.
[DeviceA] tracert 1.1.2.2
traceroute to 1.1.2.2 (1.1.2.2) 30 hops at most,40 bytes each packet, press CTRL+C
to break
1 1.1.1.2 (1.1.1.2) 1 ms 2 ms 1 ms
2 * * *
3 * * *
4 * * *
5
[DeviceA]
The output shows that Device A can reach Device B but cannot reach Device C. An error has
occurred on the connection between Device B and Device C.
5. To identify the cause of the issue, execute the following commands on Device A and Device C:
◦ Execute the debugging ip icmp command and verify that Device A and Device C can send and receive the correct ICMP packets.
◦ Execute the display ip routing-table command to verify that Device A and Device C have a route to each other.

System debugging
About system debugging
The device supports debugging for the majority of protocols and features, and provides debugging
information to help users diagnose errors.
The following switches control the display of debugging information:
• Module debugging switch—Controls whether to generate the module-specific debugging
information.
• Screen output switch—Controls whether to display the debugging information on a certain
screen. Use terminal monitor and terminal logging level commands to turn on
the screen output switch. For more information about these two commands, see Network
Management and Monitoring Command Reference.
As shown in Figure 5, the device can provide debugging for the three modules 1, 2, and 3. The
debugging information can be output on a terminal only when both the module debugging switch and
the screen output switch are turned on.
Debugging information is typically displayed on a console. You can also send debugging information
to other destinations. For more information, see "Configuring the information center."

Figure 5 Relationship between the module and screen output switch
[Diagram: debugging information from modules 1, 2, and 3 reaches the terminal only for modules whose debugging switch is on (modules 1 and 3 in this example), and only when the screen output switch is also on.]

Debugging a feature module


Restrictions and guidelines
Output from debugging commands is memory intensive. To guarantee system performance, enable
debugging only for modules that are in an exceptional condition. When debugging is complete, use
the undo debugging all command to disable all the debugging functions.
Procedure
1. Enable debugging for a module.
debugging module-name [ option ]
By default, debugging is disabled for all modules.
This command is available in user view.
2. (Optional.) Display the enabled debugging features.
display debugging [ module-name ]
This command is available in any view.
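The following sketch shows a typical debugging session for the ICMP module (the module name and logging level keyword are examples; available modules and options vary by software version):
# Turn on the screen output switch for the current terminal.
<Sysname> terminal monitor
<Sysname> terminal logging level debugging
# Enable ICMP debugging.
<Sysname> debugging ip icmp
# Display the enabled debugging features.
<Sysname> display debugging
# Disable all debugging when the diagnosis is complete.
<Sysname> undo debugging all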

Configuring NQA
About NQA
Network quality analyzer (NQA) allows you to measure network performance, verify the service
levels for IP services and applications, and troubleshoot network problems.

NQA operating mechanism


An NQA operation contains a set of parameters such as the operation type, destination IP address,
and port number to define how the operation is performed. Each NQA operation is identified by the
combination of the administrator name and the operation tag. You can configure the NQA client to
run the operations at scheduled time periods.
As shown in Figure 6, the NQA source device (NQA client) sends data to the NQA destination device
by simulating IP services and applications to measure network performance.
All types of NQA operations require the NQA client, but only the TCP, UDP echo, UDP jitter, and
voice operations require the NQA server. The NQA operations for services that are already provided
by the destination device such as FTP do not need the NQA server. You can configure the NQA
server to listen and respond to specific IP addresses and ports to meet various test needs.
Figure 6 Network diagram
[Diagram: the NQA source device (NQA client) and the NQA destination device connected across an IP network]

After starting an NQA operation, the NQA client periodically performs the operation at the interval
specified by using the frequency command.
You can set the number of probes the NQA client performs in an operation by using the probe
count command. For the voice and path jitter operations, the NQA client performs only one probe
per operation and the probe count command is not available.
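As a minimal sketch (the administrator name, operation tag, destination address, and values are illustrative), the following commands create an ICMP echo operation that runs every 60 seconds with five probes per run:
<Sysname> system-view
[Sysname] nqa entry admin test1
[Sysname-nqa-admin-test1] type icmp-echo
[Sysname-nqa-admin-test1-icmp-echo] destination ip 10.2.2.2
# The frequency is specified in milliseconds.
[Sysname-nqa-admin-test1-icmp-echo] frequency 60000
[Sysname-nqa-admin-test1-icmp-echo] probe count 5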

Collaboration with Track


NQA can collaborate with the Track module to notify application modules of state or performance
changes so that the application modules can take predefined actions.
The NQA + Track collaboration is available for the following application modules:
• VRRP.
• Static routing.
• Policy-based routing.
• Traffic redirecting.
• Smart Link.
The following describes how a static route destined for 192.168.0.88 is monitored through
collaboration:
1. NQA monitors the reachability to 192.168.0.88.
2. When 192.168.0.88 becomes unreachable, NQA notifies the Track module of the change.

3. The Track module notifies the static routing module of the state change.
4. The static routing module sets the static route to invalid according to a predefined action.
For more information about collaboration, see High Availability Configuration Guide.

Threshold monitoring
Threshold monitoring enables the NQA client to take a predefined action when the NQA operation
performance metrics violate the specified thresholds.
Table 1 describes the relationships between performance metrics and NQA operation types.
Table 1 Performance metrics and NQA operation types
Performance metric: NQA operation types that can gather the metric
• Probe duration: All NQA operation types except UDP jitter, UDP tracert, path jitter, and voice.
• Number of probe failures: All NQA operation types except UDP jitter, UDP tracert, path jitter, and voice.
• Round-trip time: ICMP jitter, UDP jitter, and voice.
• Number of discarded packets: ICMP jitter, UDP jitter, and voice.
• One-way jitter (source-to-destination or destination-to-source): ICMP jitter, UDP jitter, and voice.
• One-way delay (source-to-destination or destination-to-source): ICMP jitter, UDP jitter, and voice.
• Calculated Planning Impairment Factor (ICPIF) (see "Configuring the voice operation"): Voice.
• Mean Opinion Scores (MOS) (see "Configuring the voice operation"): Voice.

NQA tasks at a glance


To configure NQA, perform the following tasks:
1. Configuring the NQA server
Perform this task on the destination device before you configure the TCP, UDP echo, UDP jitter,
and voice operations.
2. Enabling the NQA client
3. Configuring NQA operations. Choose the following tasks as needed:
 Configuring NQA operations on the NQA client
After you configure an NQA operation, you can schedule the NQA client to run the NQA
operation.
 Configuring NQA templates on the NQA client
An NQA template does not run immediately after it is configured. The template creates and runs
the NQA operation only when it is required by the feature to which the template is applied.
Configuring the NQA server
Restrictions and guidelines
To perform TCP, UDP echo, UDP jitter, and voice operations, you must configure the NQA server on
the destination device. The NQA server listens and responds to requests on the specified IP
addresses and ports.
You can configure multiple TCP or UDP listening services on an NQA server, where each
corresponds to a specific IP address and port number.
The IP address and port number for a listening service must be unique on the NQA server and match
the configuration on the NQA client.
Procedure
1. Enter system view.
system-view
2. Enable the NQA server.
nqa server enable
By default, the NQA server is disabled.
3. Configure a TCP listening service.
nqa server tcp-connect ip-address port-number [ vpn-instance
vpn-instance-name ] [ tos tos ]
This task is required for only TCP and DLSw operations. For the DLSw operation, the port
number for the TCP listening service must be 2065.
4. Configure a UDP listening service.
nqa server udp-echo ip-address port-number [ vpn-instance
vpn-instance-name ] [ tos tos ]
This task is required for only UDP echo, UDP jitter, and voice operations.
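The following is a minimal server-side configuration sketch. The IP address 10.1.1.1 and port 9000 are examples only; they must match the destination address and port configured on the NQA client.
<Sysname> system-view
[Sysname] nqa server enable
[Sysname] nqa server tcp-connect 10.1.1.1 9000
[Sysname] nqa server udp-echo 10.1.1.1 9000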

Enabling the NQA client


1. Enter system view.
system-view
2. Enable the NQA client.
nqa agent enable
By default, the NQA client is enabled.
The NQA client configuration takes effect after you enable the NQA client.

Configuring NQA operations on the NQA client


NQA operations tasks at a glance
To configure NQA operations, perform the following tasks:
1. Configuring an NQA operation
 Configuring the ICMP echo operation
 Configuring the ICMP jitter operation
 Configuring the DHCP operation
 Configuring the DNS operation
 Configuring the FTP operation
 Configuring the HTTP operation
 Configuring the UDP jitter operation
 Configuring the SNMP operation
 Configuring the TCP operation
 Configuring the UDP echo operation
 Configuring the UDP tracert operation
 Configuring the voice operation
 Configuring the DLSw operation
 Configuring the path jitter operation
2. (Optional.) Configuring optional parameters for the NQA operation
3. (Optional.) Configuring the collaboration feature
4. (Optional.) Configuring threshold monitoring
5. (Optional.) Configuring the NQA statistics collection feature
6. (Optional.) Configuring the saving of NQA history records
7. Scheduling the NQA operation on the NQA client

Configuring the ICMP echo operation


About this task
The ICMP echo operation measures the reachability of a destination device. It has the same function
as the ping command, but provides more output information. In addition, if multiple paths exist
between the source and destination devices, you can specify the next hop for the ICMP echo
operation.
The ICMP echo operation sends an ICMP echo request to the destination device per probe.
Procedure
1. Enter system view.
system-view
2. Create an NQA operation and enter NQA operation view.
nqa entry admin-name operation-tag
3. Specify the ICMP echo type and enter its view.
type icmp-echo
4. Specify the destination IP address for ICMP echo requests.
IPv4:
destination ip ip-address
IPv6:
destination ipv6 ipv6-address
By default, no destination IP address is specified.
5. Specify the source IP address for ICMP echo requests. Choose one option as needed:
 Use the IP address of the specified interface as the source IP address.
source interface interface-type interface-number
By default, the source IP address of ICMP echo requests is the primary IP address of their
output interface.
The specified source interface must be up.
 Specify the source IPv4 address.
source ip ip-address
By default, the source IPv4 address of ICMP echo requests is the primary IPv4 address of
their output interface.
The specified source IPv4 address must be the IPv4 address of a local interface, and the
interface must be up. Otherwise, no probe packets can be sent out.
 Specify the source IPv6 address.
source ipv6 ipv6-address
By default, the source IPv6 address of ICMP echo requests is the primary IPv6 address of
their output interface.
The specified source IPv6 address must be the IPv6 address of a local interface, and the
interface must be up. Otherwise, no probe packets can be sent out.
6. Specify the output interface or the next hop IP address for ICMP echo requests. Choose one
option as needed:
 Specify the output interface for ICMP echo requests.
out interface interface-type interface-number
By default, the output interface for ICMP echo requests is not specified. The NQA client
determines the output interface based on the routing table lookup.
 Specify the next hop IPv4 address.
next-hop ip ip-address
By default, no next hop IPv4 address is specified.
 Specify the next hop IPv6 address.
next-hop ipv6 ipv6-address
By default, no next hop IPv6 address is specified.
7. (Optional.) Set the payload size for each ICMP echo request.
data-size size
The default payload size is 100 bytes.
8. (Optional.) Specify the payload fill string for ICMP echo requests.
data-fill string
The default payload fill string is the hexadecimal string 00010203040506070809.
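The following is a minimal ICMP echo configuration sketch. The administrator name admin, operation tag test, and IP addresses are examples only.
<Sysname> system-view
[Sysname] nqa entry admin test
[Sysname-nqa-admin-test] type icmp-echo
[Sysname-nqa-admin-test-icmp-echo] destination ip 10.2.2.2
[Sysname-nqa-admin-test-icmp-echo] next-hop ip 10.1.1.2
To run the operation, schedule it as described in "Scheduling the NQA operation on the NQA client."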

Configuring the ICMP jitter operation


About this task
The ICMP jitter operation measures unidirectional and bidirectional jitters. The operation result helps
you to determine whether the network can carry jitter-sensitive services such as real-time voice and
video services.
The ICMP jitter operation works as follows:
1. The NQA client sends ICMP packets to the destination device.
2. The destination device time stamps each packet it receives, and then sends the packet back to
the NQA client.
3. Upon receiving the responses, the NQA client calculates the jitter according to the timestamps.
The ICMP jitter operation sends a number of ICMP packets to the destination device per probe. The
number of packets to send is determined by using the probe packet-number command.
Restrictions and guidelines
The display nqa history command does not display the results or statistics of the ICMP jitter
operation. To view the results or statistics of the operation, use the display nqa result or
display nqa statistics command.
Before starting the operation, make sure the network devices are time synchronized by using NTP.
For more information about NTP, see "Configuring NTP."
Procedure
1. Enter system view.
system-view
2. Create an NQA operation and enter NQA operation view.
nqa entry admin-name operation-tag
3. Specify the ICMP jitter type and enter its view.
type icmp-jitter
4. Specify the destination IP address for ICMP packets.
destination ip ip-address
By default, no destination IP address is specified.
5. Set the number of ICMP packets sent per probe.
probe packet-number number
The default setting is 10.
6. Set the interval for sending ICMP packets.
probe packet-interval interval
The default setting is 20 milliseconds.
7. Specify how long the NQA client waits for a response from the server before it regards the
response as timed out.
probe packet-timeout timeout
The default setting is 3000 milliseconds.
8. Specify the source IP address for ICMP packets.
source ip ip-address
By default, the source IP address of ICMP packets is the primary IP address of their output
interface.
The source IP address must be the IP address of a local interface, and the interface must be up.
Otherwise, no ICMP packets can be sent out.

Configuring the DHCP operation


About this task
The DHCP operation measures whether the DHCP server can respond to client requests. It also
measures the amount of time it takes the NQA client to obtain an IP address from a DHCP
server.
The NQA client simulates the DHCP relay agent to forward DHCP requests for IP address acquisition
from the DHCP server. The interface that performs the DHCP operation does not change its IP
address. When the DHCP operation completes, the NQA client sends a packet to release the
obtained IP address.
The DHCP operation acquires an IP address from the DHCP server per probe.
Procedure
1. Enter system view.
system-view
2. Create an NQA operation and enter NQA operation view.
nqa entry admin-name operation-tag
3. Specify the DHCP type and enter its view.
type dhcp
4. Specify the IP address of the DHCP server as the destination IP address of DHCP packets.
destination ip ip-address
By default, no destination IP address is specified.
5. Specify the output interface for DHCP request packets.
out interface interface-type interface-number
By default, the NQA client determines the output interface based on the routing table lookup.
6. Specify the source IP address of DHCP request packets.
source ip ip-address
By default, the source IP address of DHCP request packets is the primary IP address of their
output interface.
The specified source IP address must be the IP address of a local interface, and the local
interface must be up. Otherwise, no probe packets can be sent out.
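The following is a minimal DHCP operation sketch. The DHCP server address 10.1.1.1 and the source address 10.1.1.2 are examples only; the source address must belong to a local interface that is up.
<Sysname> system-view
[Sysname] nqa entry admin dhcp1
[Sysname-nqa-admin-dhcp1] type dhcp
[Sysname-nqa-admin-dhcp1-dhcp] destination ip 10.1.1.1
[Sysname-nqa-admin-dhcp1-dhcp] source ip 10.1.1.2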

Configuring the DNS operation


About this task
The DNS operation simulates domain name resolution, and it measures the time for the NQA client
to resolve a domain name into an IP address through a DNS server. The obtained DNS entry is not
saved.
The DNS operation resolves a domain name into an IP address per probe.
Procedure
1. Enter system view.
system-view
2. Create an NQA operation and enter NQA operation view.
nqa entry admin-name operation-tag
3. Specify the DNS type and enter its view.
type dns
4. Specify the IP address of the DNS server as the destination IP address of DNS packets.
destination ip ip-address
By default, no destination IP address is specified.
5. Specify the domain name to be translated.
resolve-target domain-name
By default, no domain name is specified.
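The following is a minimal DNS operation sketch. The DNS server address 10.2.2.2 and the domain name host.example.com are examples only.
<Sysname> system-view
[Sysname] nqa entry admin dns1
[Sysname-nqa-admin-dns1] type dns
[Sysname-nqa-admin-dns1-dns] destination ip 10.2.2.2
[Sysname-nqa-admin-dns1-dns] resolve-target host.example.com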
Configuring the FTP operation
About this task
The FTP operation measures the time for the NQA client to transfer a file to or download a file from
an FTP server.
The FTP operation uploads or downloads a file from an FTP server per probe.
Restrictions and guidelines
To upload (put) a file to the FTP server, use the filename command to specify the name of the file
you want to upload. The file must exist on the NQA client.
To download (get) a file from the FTP server, include the name of the file you want to download in the
url command. The file must exist on the FTP server. The NQA client does not save the file obtained
from the FTP server.
Use a small file for the FTP operation. A big file might result in transfer failure because of timeout, or
might affect other services because of the amount of network bandwidth it occupies.
Procedure
1. Enter system view.
system-view
2. Create an NQA operation and enter NQA operation view.
nqa entry admin-name operation-tag
3. Specify the FTP type and enter its view.
type ftp
4. Specify an FTP login username.
username username
By default, no FTP login username is specified.
5. Specify an FTP login password.
password { cipher | simple } string
By default, no FTP login password is specified.
6. Specify the source IP address for FTP request packets.
source ip ip-address
By default, the source IP address of FTP request packets is the primary IP address of their
output interface.
The source IP address must be the IP address of a local interface, and the interface must be up.
Otherwise, no FTP requests can be sent out.
7. Set the data transmission mode.
mode { active | passive }
The default mode is active.
8. Specify the FTP operation type.
operation { get | put }
The default FTP operation type is get.
9. Specify the destination URL for the FTP operation.
url url
By default, no destination URL is specified for an FTP operation.
Enter the URL in one of the following formats:
 ftp://host/filename.
 ftp://host:port/filename.
The filename argument is required only for the get operation.
10. Specify the name of the file to be uploaded.
filename file-name
By default, no file is specified.
This task is required only for the put operation.
The configuration does not take effect for the get operation.
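The following is a minimal FTP get operation sketch. The URL, username, and password are examples only, and the file config.txt is assumed to exist on the FTP server.
<Sysname> system-view
[Sysname] nqa entry admin ftp1
[Sysname-nqa-admin-ftp1] type ftp
[Sysname-nqa-admin-ftp1-ftp] operation get
[Sysname-nqa-admin-ftp1-ftp] url ftp://10.2.2.2/config.txt
[Sysname-nqa-admin-ftp1-ftp] username ftpuser
[Sysname-nqa-admin-ftp1-ftp] password simple ftppwd123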

Configuring the HTTP operation


About this task
The HTTP operation measures the time for the NQA client to obtain responses from an HTTP server.
The HTTP operation supports the following operation types:
• Get—Retrieves data such as a Web page from the HTTP server.
• Post—Sends data to the HTTP server for processing.
• Raw—Sends a user-defined HTTP request to the HTTP server. You must manually configure
the content of the HTTP request to be sent.
The HTTP operation completes the operation of the specified type per probe.
Procedure
1. Enter system view.
system-view
2. Create an NQA operation and enter NQA operation view.
nqa entry admin-name operation-tag
3. Specify the HTTP type and enter its view.
type http
4. Specify the destination URL for the HTTP operation.
url url
By default, no destination URL is specified for an HTTP operation.
Enter the URL in one of the following formats:
 http://host/resource
 http://host:port/resource
5. Specify an HTTP login username.
username username
By default, no HTTP login username is specified.
6. Specify an HTTP login password.
password { cipher | simple } string
By default, no HTTP login password is specified.
7. Specify the HTTP version.
version { v1.0 | v1.1 }
By default, HTTP 1.0 is used.
8. Specify the HTTP operation type.
operation { get | post | raw }
The default HTTP operation type is get.
If you set the operation type to raw, the client adds the content configured in raw request view
to the HTTP request before sending the request to the HTTP server.
9. Configure the HTTP raw request.
a. Enter raw request view.
raw-request
Every time you enter raw request view, the previously configured raw request content is
cleared.
b. Enter or paste the request content.
By default, no request content is configured.
To ensure successful operations, make sure the request content does not contain
command aliases configured by using the alias command. For more information about
the alias command, see CLI commands in Fundamentals Command Reference.
c. Save the input and return to HTTP operation view:
quit
This step is required only when the operation type is set to raw.
10. Specify the source IP address for the HTTP packets.
source ip ip-address
By default, the source IP address of HTTP packets is the primary IP address of their output
interface.
The source IP address must be the IP address of a local interface, and the interface must be up.
Otherwise, no request packets can be sent out.
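The following is a minimal HTTP get operation sketch. The URL is an example only and must point to a resource that exists on the HTTP server.
<Sysname> system-view
[Sysname] nqa entry admin http1
[Sysname-nqa-admin-http1] type http
[Sysname-nqa-admin-http1-http] url http://10.2.2.2/index.html
[Sysname-nqa-admin-http1-http] operation get
[Sysname-nqa-admin-http1-http] version v1.1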

Configuring the UDP jitter operation


About this task
The UDP jitter operation measures unidirectional and bidirectional jitters. The operation result helps
you determine whether the network can carry jitter-sensitive services such as real-time voice and
video services.
The UDP jitter operation works as follows:
1. The NQA client sends UDP packets to the destination port.
2. The destination device time stamps each packet it receives, and then sends the packet back to
the NQA client.
3. Upon receiving the responses, the NQA client calculates the jitter according to the timestamps.
The UDP jitter operation sends a number of UDP packets to the destination device per probe. The
number of packets to send is determined by using the probe packet-number command.
The UDP jitter operation requires both the NQA server and the NQA client. Before you perform the
UDP jitter operation, configure the UDP listening service on the NQA server. For more information
about UDP listening service configuration, see "Configuring the NQA server."
Restrictions and guidelines
To ensure successful UDP jitter operations and avoid affecting existing services, do not perform the
operations on well-known ports from 1 to 1023.
The display nqa history command does not display the results or statistics of the UDP jitter
operation. To view the results or statistics of the UDP jitter operation, use the display nqa
result or display nqa statistics command.
Before starting the operation, make sure the network devices are time synchronized by using NTP.
For more information about NTP, see "Configuring NTP."
Procedure
1. Enter system view.
system-view
2. Create an NQA operation and enter NQA operation view.
nqa entry admin-name operation-tag
3. Specify the UDP jitter type and enter its view.
type udp-jitter
4. Specify the destination IP address for UDP packets.
destination ip ip-address
By default, no destination IP address is specified.
The destination IP address must be the same as the IP address of the UDP listening service
configured on the NQA server. To configure a UDP listening service on the server, use the nqa
server udp-echo command.
5. Specify the destination port number for UDP packets.
destination port port-number
By default, no destination port number is specified.
The destination port number must be the same as the port number of the UDP listening service
configured on the NQA server. To configure a UDP listening service on the server, use the nqa
server udp-echo command.
6. Specify the source IP address for UDP packets.
source ip ip-address
By default, the source IP address of UDP packets is the primary IP address of their output
interface.
The source IP address must be the IP address of a local interface, and the interface must be up.
Otherwise, no UDP packets can be sent out.
7. Specify the source port number for UDP packets.
source port port-number
By default, the NQA client randomly picks an unused port as the source port when the operation
starts.
8. Set the number of UDP packets sent per probe.
probe packet-number number
The default setting is 10.
9. Set the interval for sending UDP packets.
probe packet-interval interval
The default setting is 20 milliseconds.
10. Specify how long the NQA client waits for a response from the server before it regards the
response as timed out.
probe packet-timeout timeout
The default setting is 3000 milliseconds.
11. (Optional.) Set the payload size for each UDP packet.
data-size size
The default payload size is 100 bytes.
12. (Optional.) Specify the payload fill string for UDP packets.
data-fill string
The default payload fill string is the hexadecimal string 00010203040506070809.
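The following is a minimal UDP jitter configuration sketch. The destination address 10.1.1.1 and port 9000 are examples only and must match a UDP listening service configured on the NQA server (nqa server udp-echo 10.1.1.1 9000). The packet number is also illustrative.
<Sysname> system-view
[Sysname] nqa entry admin jitter1
[Sysname-nqa-admin-jitter1] type udp-jitter
[Sysname-nqa-admin-jitter1-udp-jitter] destination ip 10.1.1.1
[Sysname-nqa-admin-jitter1-udp-jitter] destination port 9000
[Sysname-nqa-admin-jitter1-udp-jitter] probe packet-number 20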
Configuring the SNMP operation
About this task
The SNMP operation tests whether the SNMP service is available on an SNMP agent.
The SNMP operation sends one SNMPv1 packet, one SNMPv2c packet, and one SNMPv3 packet to
the SNMP agent per probe.
Procedure
1. Enter system view.
system-view
2. Create an NQA operation and enter NQA operation view.
nqa entry admin-name operation-tag
3. Specify the SNMP type and enter its view.
type snmp
4. Specify the destination address for SNMP packets.
destination ip ip-address
By default, no destination IP address is specified.
5. Specify the source IP address for SNMP packets.
source ip ip-address
By default, the source IP address of SNMP packets is the primary IP address of their output
interface.
The source IP address must be the IP address of a local interface, and the interface must be up.
Otherwise, no SNMP packets can be sent out.
6. Specify the source port number for SNMP packets.
source port port-number
By default, the NQA client randomly picks an unused port as the source port when the operation
starts.
7. Specify the community name carried in the SNMPv1 and SNMPv2c packets.
community read { cipher | simple } community-name
By default, the SNMPv1 and SNMPv2c packets carry community name public.
Make sure the specified community name is the same as the community name configured on
the SNMP agent.

Configuring the TCP operation


About this task
The TCP operation measures the time for the NQA client to establish a TCP connection to a port on
the NQA server.
The TCP operation requires both the NQA server and the NQA client. Before you perform a TCP
operation, configure a TCP listening service on the NQA server. For more information about the TCP
listening service configuration, see "Configuring the NQA server."
The TCP operation sets up a TCP connection per probe.
Procedure
1. Enter system view.
system-view
2. Create an NQA operation and enter NQA operation view.
nqa entry admin-name operation-tag
3. Specify the TCP type and enter its view.
type tcp
4. Specify the destination address for TCP packets.
destination ip ip-address
By default, no destination IP address is specified.
The destination address must be the same as the IP address of the TCP listening service
configured on the NQA server. To configure a TCP listening service on the server, use the nqa
server tcp-connect command.
5. Specify the destination port for TCP packets.
destination port port-number
By default, no destination port number is configured.
The destination port number must be the same as the port number of the TCP listening service
configured on the NQA server. To configure a TCP listening service on the server, use the nqa
server tcp-connect command.
6. Specify the source IP address for TCP packets.
source ip ip-address
By default, the source IP address of TCP packets is the primary IP address of their output
interface.
The source IP address must be the IP address of a local interface, and the interface must be up.
Otherwise, no TCP packets can be sent out.
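The following is a minimal TCP operation sketch. The destination address 10.1.1.1 and port 9000 are examples only and must match a TCP listening service configured on the NQA server (nqa server tcp-connect 10.1.1.1 9000).
<Sysname> system-view
[Sysname] nqa entry admin tcp1
[Sysname-nqa-admin-tcp1] type tcp
[Sysname-nqa-admin-tcp1-tcp] destination ip 10.1.1.1
[Sysname-nqa-admin-tcp1-tcp] destination port 9000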

Configuring the UDP echo operation


About this task
The UDP echo operation measures the round-trip time between the client and a UDP port on the
NQA server.
The UDP echo operation requires both the NQA server and the NQA client. Before you perform a
UDP echo operation, configure a UDP listening service on the NQA server. For more information
about the UDP listening service configuration, see "Configuring the NQA server."
The UDP echo operation sends a UDP packet to the destination device per probe.
Procedure
1. Enter system view.
system-view
2. Create an NQA operation and enter NQA operation view.
nqa entry admin-name operation-tag
3. Specify the UDP echo type and enter its view.
type udp-echo
4. Specify the destination address for UDP packets.
destination ip ip-address
By default, no destination IP address is specified.
The destination address must be the same as the IP address of the UDP listening service
configured on the NQA server. To configure a UDP listening service on the server, use the nqa
server udp-echo command.
5. Specify the destination port number for UDP packets.
destination port port-number
By default, no destination port number is specified.
The destination port number must be the same as the port number of the UDP listening service
configured on the NQA server. To configure a UDP listening service on the server, use the nqa
server udp-echo command.
6. Specify the source IP address for UDP packets.
source ip ip-address
By default, the source IP address of UDP packets is the primary IP address of their output
interface.
The source IP address must be the IP address of a local interface, and the interface must be up.
Otherwise, no UDP packets can be sent out.
7. Specify the source port number for UDP packets.
source port port-number
By default, the NQA client randomly picks an unused port as the source port when the operation
starts.
8. (Optional.) Set the payload size for each UDP packet.
data-size size
The default setting is 100 bytes.
9. (Optional.) Specify the payload fill string for UDP packets.
data-fill string
The default payload fill string is the hexadecimal string 00010203040506070809.

Configuring the UDP tracert operation


About this task
The UDP tracert operation determines the routing path from the source device to the destination
device.
The UDP tracert operation sends a UDP packet to a hop along the path per probe.
Restrictions and guidelines
The UDP tracert operation is not supported on IPv6 networks. To determine the routing path that the
IPv6 packets traverse from the source to the destination, use the tracert ipv6 command. For
more information about the command, see Network Management and Monitoring Command
Reference.
Prerequisites
Before you configure the UDP tracert operation, you must perform the following tasks:
• Enable sending ICMP time exceeded messages on the intermediate devices between the
source and destination devices. If the intermediate devices are HPE devices, use the ip
ttl-expires enable command.
• Enable sending ICMP destination unreachable messages on the destination device. If the
destination device is an HPE device, use the ip unreachables enable command.
For more information about the ip ttl-expires enable and ip unreachables
enable commands, see Layer 3—IP Services Command Reference.
Procedure
1. Enter system view.
system-view
2. Create an NQA operation and enter NQA operation view.
nqa entry admin-name operation-tag
3. Specify the UDP tracert operation type and enter its view.
type udp-tracert
4. Specify the destination device for the operation. Choose one option as needed:
 Specify the destination device by its host name.
destination host host-name
By default, no destination host name is specified.
 Specify the destination device by its IP address.
destination ip ip-address
By default, no destination IP address is specified.
5. Specify the destination port number for UDP packets.
destination port port-number
By default, the destination port number is 33434.
This port number must be an unused number on the destination device, so that the destination
device can reply with ICMP port unreachable messages.
6. Specify an output interface for UDP packets.
out interface interface-type interface-number
By default, the NQA client determines the output interface based on the routing table lookup.
7. Specify the source IP address for UDP packets.
 Specify the IP address of the specified interface as the source IP address.
source interface interface-type interface-number
By default, the source IP address of UDP packets is the primary IP address of their output
interface.
 Specify the source IP address.
source ip ip-address
The specified source interface must be up. The source IP address must be the IP address of
a local interface, and the local interface must be up. Otherwise, no probe packets can be
sent out.
8. Specify the source port number for UDP packets.
source port port-number
By default, the NQA client randomly picks an unused port as the source port when the operation
starts.
9. Set the maximum number of consecutive probe failures.
max-failure times
The default setting is 5.
10. Set the initial TTL value for UDP packets.
init-ttl value
The default setting is 1.
11. (Optional.) Set the payload size for each UDP packet.
data-size size
The default setting is 100 bytes.
12. (Optional.) Enable the no-fragmentation feature.
no-fragment enable
By default, the no-fragmentation feature is disabled.
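The following is a minimal UDP tracert configuration sketch. The destination address and the probe parameters are examples only.
<Sysname> system-view
[Sysname] nqa entry admin trace1
[Sysname-nqa-admin-trace1] type udp-tracert
[Sysname-nqa-admin-trace1-udp-tracert] destination ip 10.3.3.3
[Sysname-nqa-admin-trace1-udp-tracert] max-failure 3
[Sysname-nqa-admin-trace1-udp-tracert] init-ttl 1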
Configuring the voice operation
About this task
The voice operation measures VoIP network performance.
The voice operation works as follows:
1. The NQA client sends voice packets at sending intervals to the destination device (NQA
server).
The voice packets are of one of the following codec types:
 G.711 A-law.
 G.711 µ-law.
 G.729 A-law.
2. The destination device time stamps each voice packet it receives and sends it back to the
source.
3. Upon receiving the packet, the source device calculates the jitter and one-way delay based on
the timestamp.
The voice operation sends a number of voice packets to the destination device per probe. The
number of packets to send per probe is determined by using the probe packet-number
command.
The following parameters that reflect VoIP network performance can be calculated by using the
metrics gathered by the voice operation:
• Calculated Planning Impairment Factor (ICPIF)—Measures impairment to voice quality on a
VoIP network. It is determined by packet loss and delay. A higher value represents a lower
service quality.
• Mean Opinion Scores (MOS)—A MOS value can be evaluated by using the ICPIF value, in the
range of 1 to 5. A higher value represents a higher service quality.
The evaluation of voice quality depends on users' tolerance for voice quality. For users with higher
tolerance for voice quality, use the advantage-factor command to set an advantage factor.
When the system calculates the ICPIF value, it subtracts the advantage factor to modify ICPIF and
MOS values for voice quality evaluation.
The voice operation requires both the NQA server and the NQA client. Before you perform a voice
operation, configure a UDP listening service on the NQA server. For more information about UDP
listening service configuration, see "Configuring the NQA server."
Restrictions and guidelines
To ensure successful voice operations and avoid affecting existing services, do not perform the
operations on well-known ports from 1 to 1023.
The display nqa history command does not display the results or statistics of the voice
operation. To view the results or statistics of the voice operation, use the display nqa result or
display nqa statistics command.
Before starting the operation, make sure the network devices are time synchronized by using NTP.
For more information about NTP, see "Configuring NTP."
Procedure
1. Enter system view.
system-view
2. Create an NQA operation and enter NQA operation view.
nqa entry admin-name operation-tag
3. Specify the voice type and enter its view.
type voice
4. Specify the destination IP address for voice packets.
destination ip ip-address
By default, no destination IP address is configured.
The destination IP address must be the same as the IP address of the UDP listening service
configured on the NQA server. To configure a UDP listening service on the server, use the nqa
server udp-echo command.
5. Specify the destination port number for voice packets.
destination port port-number
By default, no destination port number is configured.
The destination port number must be the same as the port number of the UDP listening service
configured on the NQA server. To configure a UDP listening service on the server, use the nqa
server udp-echo command.
6. Specify the source IP address for voice packets.
source ip ip-address
By default, the source IP address of voice packets is the primary IP address of their output
interface.
The source IP address must be the IP address of a local interface, and the interface must be up.
Otherwise, no voice packets can be sent out.
7. Specify the source port number for voice packets.
source port port-number
By default, the NQA client randomly picks an unused port as the source port when the operation
starts.
8. Configure the basic voice operation parameters.
 Specify the codec type.
codec-type { g711a | g711u | g729a }
By default, the codec type is G.711 A-law.
 Set the advantage factor for calculating MOS and ICPIF values.
advantage-factor factor
By default, the advantage factor is 0.
9. Configure the probe parameters for the voice operation.
 Set the number of voice packets to be sent per probe.
probe packet-number number
The default setting is 1000.
 Set the interval for sending voice packets.
probe packet-interval interval
The default setting is 20 milliseconds.
 Specify how long the NQA client waits for a response from the server before it regards the
response as timed out.
probe packet-timeout timeout
The default setting is 5000 milliseconds.
10. Configure the payload parameters.
a. Set the payload size for voice packets.
data-size size
By default, the voice packet size varies by codec type. The default packet size is 172 bytes
for the G.711 A-law and G.711 µ-law codec types, and 32 bytes for the G.729 A-law codec type.
b. (Optional.) Specify the payload fill string for voice packets.
data-fill string
The default payload fill string is the hexadecimal string 00010203040506070809.
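The following is a minimal voice operation sketch. The destination address 10.1.1.1 and port 9000 are examples only and must match a UDP listening service configured on the NQA server. The codec type and advantage factor are also illustrative.
<Sysname> system-view
[Sysname] nqa entry admin voice1
[Sysname-nqa-admin-voice1] type voice
[Sysname-nqa-admin-voice1-voice] destination ip 10.1.1.1
[Sysname-nqa-admin-voice1-voice] destination port 9000
[Sysname-nqa-admin-voice1-voice] codec-type g729a
[Sysname-nqa-admin-voice1-voice] advantage-factor 10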

Configuring the DLSw operation


About this task
The DLSw operation measures the response time of a DLSw device.
It sets up a DLSw connection to the DLSw device per probe.
Procedure
1. Enter system view.
system-view
2. Create an NQA operation and enter NQA operation view.
nqa entry admin-name operation-tag
3. Specify the DLSw type and enter its view.
type dlsw
4. Specify the destination IP address for the probe packets.
destination ip ip-address
By default, no destination IP address is specified.
5. Specify the source IP address for the probe packets.
source ip ip-address
By default, the source IP address of the probe packets is the primary IP address of their output
interface.
The source IP address must be the IP address of a local interface, and the interface must be up.
Otherwise, no probe packets can be sent out.

Configuring the path jitter operation


About this task
The path jitter operation measures the jitter, negative jitters, and positive jitters from the NQA client to
each hop on the path to the destination.
The path jitter operation performs the following steps per probe:
1. Obtains the path from the NQA client to the destination through tracert. A maximum of 64 hops
can be detected.
2. Sends a number of ICMP echo requests to each hop along the path. The number of ICMP echo
requests to send is set by using the probe packet-number command.
Prerequisites
Before you configure the path jitter operation, you must perform the following tasks:
• Enable sending ICMP time exceeded messages on the intermediate devices between the
source and destination devices. If the intermediate devices are HPE devices, use the ip
ttl-expires enable command.
• Enable sending ICMP destination unreachable messages on the destination device. If the
destination device is an HPE device, use the ip unreachables enable command.
For more information about the ip ttl-expires enable and ip unreachables
enable commands, see Layer 3—IP Services Command Reference.
Procedure
1. Enter system view.
system-view
2. Create an NQA operation and enter NQA operation view.
nqa entry admin-name operation-tag
3. Specify the path jitter type and enter its view.
type path-jitter
4. Specify the destination IP address for ICMP echo requests.
destination ip ip-address
By default, no destination IP address is specified.
5. Specify the source IP address for ICMP echo requests.
source ip ip-address
By default, the source IP address of ICMP echo requests is the primary IP address of their
output interface.
The source IP address must be the IP address of a local interface, and the interface must be up.
Otherwise, no ICMP echo requests can be sent out.
6. Configure the probe parameters for the path jitter operation.
a. Set the number of ICMP echo requests to be sent per probe.
probe packet-number number
The default setting is 10.
b. Set the interval for sending ICMP echo requests.
probe packet-interval interval
The default setting is 20 milliseconds.
c. Specify how long the NQA client waits for a response from the server before it regards the
response as timed out.
probe packet-timeout timeout
The default setting is 3000 milliseconds.
7. (Optional.) Specify an LSR path.
lsr-path ip-address&<1-8>
By default, no LSR path is specified.
The path jitter operation uses tracert to detect the LSR path to the destination, and sends ICMP
echo requests to each hop on the LSR path.
8. Perform the path jitter operation only on the destination address.
target-only
By default, the path jitter operation is performed on each hop on the path to the destination.
9. (Optional.) Set the payload size for each ICMP echo request.
data-size size
The default setting is 100 bytes.
10. (Optional.) Specify the payload fill string for ICMP echo requests.
data-fill string
The default payload fill string is the hexadecimal string 00010203040506070809.
Configuring optional parameters for the NQA operation
Restrictions and guidelines
Unless otherwise specified, the following optional parameters apply to all types of NQA operations.
The parameter settings take effect only on the current operation.
Procedure
1. Enter system view.
system-view
2. Enter the view of an existing NQA operation.
nqa entry admin-name operation-tag
3. Configure a description for the operation.
description text
By default, no description is configured.
4. Set the interval at which the NQA operation repeats.
frequency interval
For a voice or path jitter operation, the default setting is 60000 milliseconds.
For other types of operations, the default setting is 0 milliseconds, and only one operation is
performed.
If the interval expires but the current operation is not completed or has not timed out, the next
operation does not start.
5. Specify the probe times.
probe count times
In a UDP tracert operation, the NQA client performs three probes to each hop to the
destination by default.
In other types of operations, the NQA client performs one probe to the destination per operation
by default.
This command is not available for the voice and path jitter operations. Each of these operations
performs only one probe.
6. Set the probe timeout time.
probe timeout timeout
The default setting is 3000 milliseconds.
This command is not available for the ICMP jitter, UDP jitter, voice, or path jitter operations.
7. Set the maximum number of hops that the probe packets can traverse.
ttl value
The default setting is 30 for probe packets of the UDP tracert operation, and is 20 for probe
packets of other types of operations.
This command is not available for the DHCP or path jitter operations.
8. Set the ToS value in the IP header of the probe packets.
tos value
The default setting is 0.
9. Enable the routing table bypass feature.
route-option bypass-route
By default, the routing table bypass feature is disabled.
This command is not available for the DHCP or path jitter operations.
This command does not take effect if the destination address of the NQA operation is an IPv6
address.
10. Specify the VPN instance where the operation is performed.
vpn-instance vpn-instance-name
By default, the operation is performed on the public network.

Configuring the collaboration feature


About this task
Collaboration is implemented by associating a reaction entry of an NQA operation with a track entry.
The reaction entry monitors the NQA operation. If the number of operation failures reaches the
specified threshold, the configured action is triggered.
Restrictions and guidelines
The collaboration feature is not available for the following types of operations:
• ICMP jitter operation.
• UDP jitter operation.
• UDP tracert operation.
• Voice operation.
• Path jitter operation.
Procedure
1. Enter system view.
system-view
2. Enter the view of an existing NQA operation.
nqa entry admin-name operation-tag
3. Configure a reaction entry.
reaction item-number checked-element probe-fail threshold-type
consecutive consecutive-occurrences action-type trigger-only
You cannot modify the content of an existing reaction entry.
4. Return to system view.
quit
5. Associate Track with NQA.
For information about the configuration, see High Availability Configuration Guide.
6. Associate Track with an application module.
For information about the configuration, see High Availability Configuration Guide.

Configuring threshold monitoring


About this task
This feature allows you to monitor the NQA operation running status.
An NQA operation supports the following threshold types:
• average—If the average value for the monitored performance metric either exceeds the upper
threshold or goes below the lower threshold, a threshold violation occurs.
• accumulate—If the total number of times that the monitored performance metric is out of the
specified value range reaches or exceeds the specified threshold, a threshold violation occurs.
• consecutive—If the number of consecutive times that the monitored performance metric is out
of the specified value range reaches or exceeds the specified threshold, a threshold violation
occurs.
Threshold violations for the average or accumulate threshold type are determined on a per NQA
operation basis. The threshold violations for the consecutive type are determined from the time the
NQA operation starts.
The following actions might be triggered:
• none—NQA displays results only on the terminal screen. It does not send traps to the NMS.
• trap-only—NQA displays results on the terminal screen, and meanwhile it sends traps to the
NMS.
To send traps to the NMS, the NMS address must be specified by using the snmp-agent
target-host command. For more information about the command, see Network
Management and Monitoring Command Reference.
• trigger-only—NQA displays results on the terminal screen, and meanwhile triggers other
modules for collaboration.
In a reaction entry, configure a monitored element, a threshold type, and an action to be triggered to
implement threshold monitoring.
The state of a reaction entry can be invalid, over-threshold, or below-threshold.
• Before an NQA operation starts, the reaction entry is in invalid state.
• If the threshold is violated, the state of the entry is set to over-threshold. Otherwise, the state of
the entry is set to below-threshold.
Restrictions and guidelines
The threshold monitoring feature is not available for the path jitter operations.
Procedure
1. Enter system view.
system-view
2. Enter the view of an existing NQA operation.
nqa entry admin-name operation-tag
3. Enable sending traps to the NMS when specific conditions are met.
reaction trap { path-change | probe-failure
consecutive-probe-failures | test-complete | test-failure
[ accumulate-probe-failures ] }
By default, no traps are sent to the NMS.
The ICMP jitter, UDP jitter, and voice operations support only the test-complete keyword.
The following parameters are not available for the UDP tracert operation:
 The probe-failure consecutive-probe-failures option.
 The accumulate-probe-failures argument.
4. Configure threshold monitoring. Choose the options to configure as needed:
 Monitor the operation duration.
reaction item-number checked-element probe-duration
threshold-type { accumulate accumulate-occurrences | average |
consecutive consecutive-occurrences } threshold-value
upper-threshold lower-threshold [ action-type { none | trap-only } ]
This reaction entry is not supported in the ICMP jitter, UDP jitter, UDP tracert, or voice
operations.
 Monitor failure times.
reaction item-number checked-element probe-fail threshold-type
{ accumulate accumulate-occurrences | consecutive
consecutive-occurrences } [ action-type { none | trap-only } ]
This reaction entry is not supported in the ICMP jitter, UDP jitter, UDP tracert, or voice
operations.
 Monitor the round-trip time.
reaction item-number checked-element rtt threshold-type
{ accumulate accumulate-occurrences | average } threshold-value
upper-threshold lower-threshold [ action-type { none | trap-only } ]
Only the ICMP jitter, UDP jitter, and voice operations support this reaction entry.
 Monitor packet loss.
reaction item-number checked-element packet-loss threshold-type
accumulate accumulate-occurrences [ action-type { none |
trap-only } ]
Only the ICMP jitter, UDP jitter, and voice operations support this reaction entry.
 Monitor the one-way jitter.
reaction item-number checked-element { jitter-ds | jitter-sd }
threshold-type { accumulate accumulate-occurrences | average }
threshold-value upper-threshold lower-threshold [ action-type
{ none | trap-only } ]
Only the ICMP jitter, UDP jitter, and voice operations support this reaction entry.
 Monitor the one-way delay.
reaction item-number checked-element { owd-ds | owd-sd }
threshold-value upper-threshold lower-threshold
Only the ICMP jitter, UDP jitter, and voice operations support this reaction entry.
 Monitor the ICPIF value.
reaction item-number checked-element icpif threshold-value
upper-threshold lower-threshold [ action-type { none | trap-only } ]
Only the voice operation supports this reaction entry.
 Monitor the MOS value.
reaction item-number checked-element mos threshold-value
upper-threshold lower-threshold [ action-type { none | trap-only } ]
Only the voice operation supports this reaction entry.
The DNS operation does not support the action of sending trap messages. For the DNS
operation, the action type can only be none.
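As an illustration only, the following reaction entry monitors the probe duration of an existing ICMP echo operation (administrator name admin, tag test) and sends a trap when the duration is out of the 100 to 1000 millisecond range for five consecutive times. The entry number and thresholds are examples; sending traps also requires an NMS target host configured with the snmp-agent target-host command.
<Sysname> system-view
[Sysname] nqa entry admin test
[Sysname-nqa-admin-test] type icmp-echo
[Sysname-nqa-admin-test-icmp-echo] reaction 1 checked-element probe-duration threshold-type consecutive 5 threshold-value 1000 100 action-type trap-only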

Configuring the NQA statistics collection feature


About this task
NQA forms statistics within the same collection interval as a statistics group. To display information
about the statistics groups, use the display nqa statistics command.
When the maximum number of statistics groups is reached, the NQA client deletes the oldest
statistics group to save a new one.
A statistics group is automatically deleted when its hold time expires.
Restrictions and guidelines
The NQA statistics collection feature is not available for the UDP tracert operations.
If you use the frequency command to set the interval to 0 milliseconds for an NQA operation, NQA
does not generate any statistics group for the operation.
Procedure
1. Enter system view.
system-view
2. Enter the view of an existing NQA operation.
nqa entry admin-name operation-tag
3. Set the statistics collection interval.
statistics interval interval
The default setting is 60 minutes.
4. Set the maximum number of statistics groups that can be saved.
statistics max-group number
By default, the NQA client can save a maximum of two statistics groups for an operation.
To disable the NQA statistics collection feature, set the number argument to 0.
5. Set the hold time of statistics groups.
statistics hold-time hold-time
The default setting is 120 minutes.

Configuring the saving of NQA history records


About this task
This task enables the NQA client to save NQA history records. You can use the display nqa
history command to display the NQA history records.
Restrictions and guidelines
The NQA history record saving feature is not available for the following types of operations:
• ICMP jitter operation.
• UDP jitter operation.
• Voice operation.
• Path jitter operation.
Procedure
1. Enter system view.
system-view
2. Enter the view of an existing NQA operation.
nqa entry admin-name operation-tag
3. Enable the saving of history records for the NQA operation.
history-record enable
By default, this feature is enabled only for the UDP tracert operation.
4. Set the lifetime of history records.
history-record keep-time keep-time
The default setting is 120 minutes.
A record is deleted when its lifetime is reached.
5. Set the maximum number of history records that can be saved.
history-record number number
The default setting is 50.
When the maximum number of history records is reached, the system will delete the oldest
record to save a new one.

Scheduling the NQA operation on the NQA client


About this task
The NQA operation runs between the specified start time and end time (the start time plus operation
duration). If the specified start time is ahead of the system time, the operation starts immediately. If
both the specified start and end time are ahead of the system time, the operation does not start. To
display the current system time, use the display clock command.
Restrictions and guidelines
You cannot enter the operation type view or the operation view of a scheduled NQA operation.
A system time adjustment does not affect started or completed NQA operations. It affects only the
NQA operations that have not started.
Procedure
1. Enter system view.
system-view
2. Specify the scheduling parameters for an NQA operation.
nqa schedule admin-name operation-tag start-time { hh:mm:ss
[ yyyy/mm/dd | mm/dd/yyyy ] | now } lifetime { lifetime | forever }
[ recurring ]
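For example, the following command starts the operation with administrator name admin and tag test immediately and runs it until you remove the schedule. The names are examples only.
[Sysname] nqa schedule admin test start-time now lifetime forever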

Configuring NQA templates on the NQA client


Restrictions and guidelines
Some operation parameters for an NQA template can be specified by the template configuration or
the feature that uses the template. When both are specified, the parameters in the template
configuration take effect.

NQA template tasks at a glance


To configure NQA templates, perform the following tasks:
1. Perform at least one of the following tasks:
 Configuring the ICMP template
 Configuring the DNS template
 Configuring the TCP template
 Configuring the TCP half open template
 Configuring the UDP template
 Configuring the HTTP template
 Configuring the HTTPS template
 Configuring the FTP template
 Configuring the RADIUS template
 Configuring the SSL template
2. (Optional.) Configuring optional parameters for the NQA template
Configuring the ICMP template
About this task
A feature that uses the ICMP template performs the ICMP operation to measure the reachability of a
destination device. The ICMP template is supported on both IPv4 and IPv6 networks.
Procedure
1. Enter system view.
system-view
2. Create an ICMP template and enter its view.
nqa template icmp name
3. Specify the destination IP address for the operation.
IPv4:
destination ip ip-address
IPv6:
destination ipv6 ipv6-address
By default, no destination IP address is configured.
4. Specify the source IP address for ICMP echo requests. Choose one option as needed:
 Use the IP address of the specified interface as the source IP address.
source interface interface-type interface-number
By default, the primary IP address of the output interface is used as the source IP address of
ICMP echo requests.
The specified source interface must be up.
 Specify the source IPv4 address.
source ip ip-address
By default, the primary IPv4 address of the output interface is used as the source IPv4
address of ICMP echo requests.
The specified source IPv4 address must be the IPv4 address of a local interface, and the
interface must be up. Otherwise, no probe packets can be sent out.
 Specify the source IPv6 address.
source ipv6 ipv6-address
By default, the primary IPv6 address of the output interface is used as the source IPv6
address of ICMP echo requests.
The specified source IPv6 address must be the IPv6 address of a local interface, and the
interface must be up. Otherwise, no probe packets can be sent out.
5. Specify the next hop IP address for ICMP echo requests.
IPv4:
next-hop ip ip-address
IPv6:
next-hop ipv6 ipv6-address
By default, no IP address of the next hop is configured.
6. Configure the probe result sending on a per-probe basis.
reaction trigger per-probe
By default, the probe result is sent to the feature that uses the template after three consecutive
failed or successful probes.
If you execute the reaction trigger per-probe and reaction trigger
probe-pass commands multiple times, the most recent configuration takes effect.
If you execute the reaction trigger per-probe and reaction trigger
probe-fail commands multiple times, the most recent configuration takes effect.
7. (Optional.) Set the payload size for each ICMP request.
data-size size
The default setting is 100 bytes.
8. (Optional.) Specify the payload fill string for ICMP echo requests.
data-fill string
The default payload fill string is the hexadecimal string 00010203040506070809.
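The following is a minimal ICMP template sketch. The template name icmptplt and the destination address are examples only; the feature that uses the template is configured separately, and the view prompt might vary by software version.
<Sysname> system-view
[Sysname] nqa template icmp icmptplt
[Sysname-nqatplt-icmp-icmptplt] destination ip 10.2.2.2
[Sysname-nqatplt-icmp-icmptplt] reaction trigger per-probe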

Configuring the DNS template


About this task
A feature that uses the DNS template performs the DNS operation to determine the status of the
server. The DNS template is supported on both IPv4 and IPv6 networks.
In DNS template view, you can specify the address expected to be returned. If the returned IP
addresses include the expected address, the DNS server is valid and the operation succeeds.
Otherwise, the operation fails.
Prerequisites
Create a mapping between the domain name and an address before you perform the DNS operation.
For information about configuring the DNS server, see documents about the DNS server
configuration.
Procedure
1. Enter system view.
system-view
2. Create a DNS template and enter DNS template view.
nqa template dns name
3. Specify the destination IP address for the probe packets.
IPv4:
destination ip ip-address
IPv6:
destination ipv6 ipv6-address
By default, no destination address is specified.
4. Specify the destination port number for the probe packets.
destination port port-number
By default, the destination port number is 53.
5. Specify the source IP address for the probe packets.
IPv4:
source ip ip-address
By default, the source IPv4 address of the probe packets is the primary IPv4 address of their
output interface.
The source IPv4 address must be the IPv4 address of a local interface, and the interface must
be up. Otherwise, no probe packets can be sent out.
IPv6:
source ipv6 ipv6-address
By default, the source IPv6 address of the probe packets is the primary IPv6 address of their
output interface.
The source IPv6 address must be the IPv6 address of a local interface, and the interface must
be up. Otherwise, no probe packets can be sent out.
6. Specify the source port number for the probe packets.
source port port-number
By default, no source port number is specified.
7. Specify the domain name to be translated.
resolve-target domain-name
By default, no domain name is specified.
8. Specify the domain name resolution type.
resolve-type { A | AAAA }
By default, the type is type A.
A type A query resolves a domain name to a mapped IPv4 address, and a type AAAA query to
a mapped IPv6 address.
9. (Optional.) Specify the IP address that is expected to be returned.
IPv4:
expect ip ip-address
IPv6:
expect ipv6 ipv6-address
By default, no expected IP address is specified.

Configuring the TCP template


About this task
A feature that uses the TCP template performs the TCP operation to test whether the NQA client can
establish a TCP connection to a specific port on the server.
In TCP template view, you can specify the expected data to be returned. If you do not specify the
expected data, the TCP operation tests only whether the client can establish a TCP connection to the
server.
The TCP operation requires both the NQA server and the NQA client. Before you perform a TCP
operation, configure a TCP listening service on the NQA server. For more information about the TCP
listening service configuration, see "Configuring the NQA server."
Procedure
1. Enter system view.
system-view
2. Create a TCP template and enter its view.
nqa template tcp name
3. Specify the destination IP address for the probe packets.
IPv4:
destination ip ip-address
IPv6:
destination ipv6 ipv6-address
By default, no destination IP address is specified.
The destination address must be the same as the IP address of the TCP listening service
configured on the NQA server. To configure a TCP listening service on the server, use the nqa
server tcp-connect command.
4. Specify the destination port number for the operation.
destination port port-number
By default, no destination port number is specified.
The destination port number must be the same as the port number of the TCP listening service
configured on the NQA server. To configure a TCP listening service on the server, use the nqa
server tcp-connect command.
5. Specify the source IP address for the probe packets.
IPv4:
source ip ip-address
By default, the primary IPv4 address of the output interface is used as the source IPv4 address
of the probe packets.
The source IP address must be the IPv4 address of a local interface, and the interface must be
up. Otherwise, no probe packets can be sent out.
IPv6:
source ipv6 ipv6-address
By default, the primary IPv6 address of the output interface is used as the source IPv6 address
of the probe packets.
The source IPv6 address must be the IPv6 address of a local interface, and the interface must
be up. Otherwise, no probe packets can be sent out.
6. (Optional.) Specify the payload fill string for the probe packets.
data-fill string
The default payload fill string is the hexadecimal string 00010203040506070809.
7. (Optional.) Configure the expected data.
expect data expression [ offset number ]
By default, no expected data is configured.
The NQA client performs expect data check only when you configure both the data-fill and
expect-data commands.
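For example, the following sketch tests whether a TCP connection can be established to port 9000 at 10.2.2.2. The device roles, template name, address, and port are illustrative only. On the NQA server, configure the matching listening service first:
nqa server enable
nqa server tcp-connect 10.2.2.2 9000
On the NQA client, create the template:
system-view
nqa template tcp tcptplt
destination ip 10.2.2.2
destination port 9000
The template takes effect when a feature references it by name.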
Configuring the TCP half open template
About this task
A feature that uses the TCP half open template performs the TCP half open operation to test whether
the TCP service is available on the server. The TCP half open operation is used when the feature
cannot get a response from the TCP server through an existing TCP connection.
In the TCP half open operation, the NQA client sends a TCP ACK packet to the server. If the client
receives an RST packet, it considers that the TCP service is available on the server.
Procedure
1. Enter system view.
system-view
2. Create a TCP half open template and enter its view.
nqa template tcphalfopen name
3. Specify the destination IP address of the operation.
IPv4:
destination ip ip-address
IPv6:
destination ipv6 ipv6-address
By default, no destination IP address is specified.
The destination address must be the same as the IP address of the TCP listening service
configured on the NQA server. To configure a TCP listening service on the server, use the nqa
server tcp-connect command.
4. Specify the source IP address for the probe packets.
IPv4:
source ip ip-address
By default, the primary IPv4 address of the output interface is used as the source IPv4 address
of the probe packets.
The source IPv4 address must be the IPv4 address of a local interface, and the interface must
be up. Otherwise, no probe packets can be sent out.
IPv6:
source ipv6 ipv6-address
By default, the primary IPv6 address of the output interface is used as the source IPv6 address
of the probe packets.
The source IPv6 address must be the IPv6 address of a local interface, and the interface must
be up. Otherwise, no probe packets can be sent out.
5. Specify the next hop IP address for the probe packets.
IPv4:
next-hop ip ip-address
IPv6:
next-hop ipv6 ipv6-address
By default, no next hop IP address is specified.
6. Configure the probe result sending on a per-probe basis.
reaction trigger per-probe
By default, the probe result is sent to the feature that uses the template after three consecutive
failed or successful probes.
If you execute the reaction trigger per-probe and reaction trigger
probe-pass commands multiple times, the most recent configuration takes effect.
If you execute the reaction trigger per-probe and reaction trigger
probe-fail commands multiple times, the most recent configuration takes effect.
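For example, a minimal sketch of a TCP half open template might look like the following. The template name and destination address are illustrative only. The reaction trigger per-probe command makes the client report the result of every probe instead of waiting for three consecutive results:
system-view
nqa template tcphalfopen tcphalf1
destination ip 10.2.2.2
reaction trigger per-probe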
Configuring the UDP template
About this task
A feature that uses the UDP template performs the UDP operation to test the following items:
• Reachability of a specific port on the NQA server.
• Availability of the requested service on the NQA server.
In UDP template view, you can specify the expected data to be returned. If you do not specify the
expected data, the UDP operation tests only whether the client can receive the response packet from
the server.
The UDP operation requires both the NQA server and the NQA client. Before you perform a UDP
operation, configure a UDP listening service on the NQA server. For more information about the UDP
listening service configuration, see "Configuring the NQA server."
Procedure
1. Enter system view.
system-view
2. Create a UDP template and enter its view.
nqa template udp name
3. Specify the destination IP address of the operation.
IPv4:
destination ip ip-address
IPv6:
destination ipv6 ipv6-address
By default, no destination IP address is specified.
The destination address must be the same as the IP address of the UDP listening service
configured on the NQA server. To configure a UDP listening service on the server, use the nqa
server udp-echo command.
4. Specify the destination port number for the operation.
destination port port-number
By default, no destination port number is specified.
The destination port number must be the same as the port number of the UDP listening service
configured on the NQA server. To configure a UDP listening service on the server, use the nqa
server udp-echo command.
5. Specify the source IP address for the probe packets.
IPv4:
source ip ip-address
By default, the primary IPv4 address of the output interface is used as the source IPv4 address
of the probe packets.
The source IP address must be the IPv4 address of a local interface, and the interface must be
up. Otherwise, no probe packets can be sent out.
IPv6:
source ipv6 ipv6-address
By default, the primary IPv6 address of the output interface is used as the source IPv6 address
of the probe packets.
The source IPv6 address must be the IPv6 address of a local interface, and the interface must
be up. Otherwise, no probe packets can be sent out.
6. Specify the payload fill string for the probe packets.
data-fill string
The default payload fill string is the hexadecimal string 00010203040506070809.
7. (Optional.) Set the payload size for the probe packets.
data-size size
The default setting is 100 bytes.
8. (Optional.) Configure the expected data.
expect data expression [ offset number ]
By default, no expected data is configured.
Expected data check is performed only when both the data-fill command and the expect
data command are configured.
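For example, the following sketch probes UDP port 9000 at 10.2.2.2 and checks the returned payload against the expected data. The template name, address, port, and fill string are illustrative only, and the matching listening service (nqa server enable plus nqa server udp-echo 10.2.2.2 9000) is assumed to exist on the NQA server:
system-view
nqa template udp udptplt
destination ip 10.2.2.2
destination port 9000
data-fill abcd
expect data abcd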
Configuring the HTTP template
About this task
A feature that uses the HTTP template performs the HTTP operation to measure the time it takes the
NQA client to obtain data from an HTTP server.
The expected data is checked only when the expected data is configured and the HTTP response contains the Content-Length field in the HTTP header.
The status code of the HTTP response is a three-digit decimal field that carries the status information returned by the HTTP server. The first digit defines the class of the response.
Prerequisites
Before you perform the HTTP operation, you must configure the HTTP server.
Procedure
1. Enter system view.
system-view
2. Create an HTTP template and enter its view.
nqa template http name
3. Specify the destination URL for the HTTP template.
url url
By default, no destination URL is specified for an HTTP template.
Enter the URL in one of the following formats:
- http://host/resource
- http://host:port/resource
4. Specify an HTTP login username.
username username
By default, no HTTP login username is specified.
5. Specify an HTTP login password.
password { cipher | simple } string
By default, no HTTP login password is specified.
6. Specify the HTTP version.
version { v1.0 | v1.1 }
By default, HTTP 1.0 is used.
7. Specify the HTTP operation type.
operation { get | post | raw }
By default, the HTTP operation type is get.
If you set the operation type to raw, the client fills the content configured in raw request view into the HTTP request that it sends to the HTTP server.
8. Configure the content of the HTTP raw request.
a. Enter raw request view.
raw-request
Every time you enter raw request view, the previously configured raw request content is
cleared.
b. Enter or paste the request content.
By default, no request content is configured.
To ensure successful operations, make sure the request content does not contain
command aliases configured by using the alias command. For more information about
the alias command, see CLI commands in Fundamentals Command Reference.
c. Return to HTTP template view.
quit
The system automatically saves the configuration in raw request view before it returns to
HTTP template view.
This step is required only when the operation type is set to raw.
9. Specify the source IP address for the probe packets.
IPv4:
source ip ip-address
By default, the primary IPv4 address of the output interface is used as the source IPv4 address
of the probe packets.
The source IPv4 address must be the IPv4 address of a local interface, and the interface must
be up. Otherwise, no probe packets can be sent out.
IPv6:
source ipv6 ipv6-address
By default, the primary IPv6 address of the output interface is used as the source IPv6 address
of the probe packets.
The source IPv6 address must be the IPv6 address of a local interface, and the interface must
be up. Otherwise, no probe packets can be sent out.
10. (Optional.) Configure the expected status codes.
expect status status-list
By default, no expected status code is configured.
11. (Optional.) Configure the expected data.
expect data expression [ offset number ]
By default, no expected data is configured.
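For example, a minimal HTTP template sketch might look like the following. The URL and the expected status code are illustrative only:
system-view
nqa template http httptplt
url http://10.2.2.2/index.htm
version v1.1
operation get
expect status 200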
Configuring the HTTPS template
About this task
A feature that uses the HTTPS template performs the HTTPS operation to measure the time it takes
for the NQA client to obtain data from an HTTPS server.
The expected data is checked only when the expected data is configured and the HTTPS response contains the Content-Length field in the response header.
The status code of the HTTPS response is a three-digit decimal field that carries the status information returned by the HTTPS server. The first digit defines the class of the response.
Prerequisites
Before you perform the HTTPS operation, configure the HTTPS server and the SSL client policy for
the SSL client. For information about configuring SSL client policies, see Security Configuration
Guide.
Procedure
1. Enter system view.
system-view
2. Create an HTTPS template and enter its view.
nqa template https name
3. Specify the destination URL for the HTTPS template.
url url
By default, no destination URL is specified for an HTTPS template.
Enter the URL in one of the following formats:
- https://host/resource
- https://host:port/resource
4. Specify an HTTPS login username.
username username
By default, no HTTPS login username is specified.
5. Specify an HTTPS login password.
password { cipher | simple } string
By default, no HTTPS login password is specified.
6. Specify an SSL client policy.
ssl-client-policy policy-name
By default, no SSL client policy is specified.
7. Specify the HTTPS version.
version { v1.0 | v1.1 }
By default, HTTPS 1.0 is used.
8. Specify the HTTPS operation type.
operation { get | post | raw }
By default, the HTTPS operation type is get.
If you set the operation type to raw, the client fills the content configured in raw request view into the HTTPS request that it sends to the HTTPS server.
9. Configure the content of the HTTPS raw request.
a. Enter raw request view.
raw-request
Every time you enter raw request view, the previously configured raw request content is
cleared.
b. Enter or paste the request content.
By default, no request content is configured.
To ensure successful operations, make sure the request content does not contain
command aliases configured by using the alias command. For more information about
the alias command, see CLI commands in Fundamentals Command Reference.
c. Return to HTTPS template view.
quit
The system automatically saves the configuration in raw request view before it returns to
HTTPS template view.
This step is required only when the operation type is set to raw.
10. Specify the source IP address for the probe packets.
IPv4:
source ip ip-address
By default, the primary IPv4 address of the output interface is used as the source IPv4 address
of the probe packets.
The source IP address must be the IPv4 address of a local interface, and the interface must be
up. Otherwise, no probe packets can be sent out.
IPv6:
source ipv6 ipv6-address
By default, the primary IPv6 address of the output interface is used as the source IPv6 address
of the probe packets.
The source IPv6 address must be the IPv6 address of a local interface, and the interface must
be up. Otherwise, no probe packets can be sent out.
11. (Optional.) Configure the expected data.
expect data expression [ offset number ]
By default, no expected data is configured.
12. (Optional.) Configure the expected status codes.
expect status status-list
By default, no expected status code is configured.
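For example, a minimal HTTPS template sketch might look like the following. The URL, the SSL client policy name abc (which must already exist), and the expected status code are illustrative only:
system-view
nqa template https httpstplt
url https://10.2.2.2/index.htm
ssl-client-policy abc
operation get
expect status 200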
Configuring the FTP template
About this task
A feature that uses the FTP template performs the FTP operation. The operation measures the time
it takes the NQA client to transfer a file to or download a file from an FTP server.
Configure the username and password for the FTP client to log in to the FTP server before you
perform an FTP operation. For information about configuring the FTP server, see Fundamentals
Configuration Guide.
Procedure
1. Enter system view.
system-view
2. Create an FTP template and enter its view.
nqa template ftp name
3. Specify an FTP login username.
username username
By default, no FTP login username is specified.
4. Specify an FTP login password.
password { cipher | simple } string
By default, no FTP login password is specified.
5. Specify the source IP address for the probe packets.
IPv4:
source ip ip-address
By default, the primary IPv4 address of the output interface is used as the source IPv4 address
of the probe packets.
The source IP address must be the IPv4 address of a local interface, and the interface must be
up. Otherwise, no probe packets can be sent out.
IPv6:
source ipv6 ipv6-address
By default, the primary IPv6 address of the output interface is used as the source IPv6 address
of the probe packets.
The source IPv6 address must be the IPv6 address of a local interface, and the interface must
be up. Otherwise, no probe packets can be sent out.
6. Set the data transmission mode.
mode { active | passive }
The default mode is active.
7. Specify the FTP operation type.
operation { get | put }
By default, the FTP operation type is get, which means obtaining files from the FTP server.
8. Specify the destination URL for the FTP template.
url url
By default, no destination URL is specified for an FTP template.
Enter the URL in one of the following formats:
- ftp://host/filename.
- ftp://host:port/filename.
When you perform the get operation, the file name is required.
When you perform the put operation, the filename argument does not take effect, even if it is
specified. The file name for the put operation is determined by using the filename command.
9. Specify the name of a file to be transferred.
filename filename
By default, no file is specified.
This task is required only for the put operation.
The configuration does not take effect for the get operation.
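For example, the following sketch uploads file config.txt to an FTP server in passive mode. The URL, username, password, and file name are illustrative only and must match the account configured on the FTP server:
system-view
nqa template ftp ftptplt
url ftp://10.2.2.2
username admin
password simple systemtest
mode passive
operation put
filename config.txt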
Configuring the RADIUS template
About this task
A feature that uses the RADIUS template performs the RADIUS authentication operation to check
the availability of the authentication service on the RADIUS server.
The RADIUS authentication operation workflow is as follows:
1. The NQA client sends an authentication request (Access-Request) to the RADIUS server. The
request includes the username and the password. The password is encrypted by using the
MD5 algorithm and the shared key.
2. The RADIUS server authenticates the username and password.
- If the authentication succeeds, the server sends an Access-Accept packet to the NQA client.
- If the authentication fails, the server sends an Access-Reject packet to the NQA client.
3. The NQA client determines the availability of the authentication service on the RADIUS server based on the response packet it received:
- If an Access-Accept packet is received, the authentication service is available on the RADIUS server.
- If an Access-Reject packet is received, the authentication service is not available on the RADIUS server.
Prerequisites
Before you configure the RADIUS template, specify a username, password, and shared key on the
RADIUS server. For more information about configuring the RADIUS server, see AAA in Security
Configuration Guide.
Procedure
1. Enter system view.
system-view
2. Create a RADIUS template and enter its view.
nqa template radius name
3. Specify the destination IP address of the operation.
IPv4:
destination ip ip-address
IPv6:
destination ipv6 ipv6-address
By default, no destination IP address is specified.
4. Specify the destination port number for the operation.
destination port port-number
By default, the destination port number is 1812.
5. Specify a username.
username username
By default, no username is specified.
6. Specify a password.
password { cipher | simple } string
By default, no password is specified.
7. Specify a shared key for secure RADIUS authentication.
key { cipher | simple } string
By default, no shared key is specified for RADIUS authentication.
8. Specify the source IP address for the probe packets.
IPv4:
source ip ip-address
By default, the primary IPv4 address of the output interface is used as the source IPv4 address
of the probe packets.
The source IP address must be the IPv4 address of a local interface, and the interface must be
up. Otherwise, no probe packets can be sent out.
IPv6:
source ipv6 ipv6-address
By default, the primary IPv6 address of the output interface is used as the source IPv6 address
of the probe packets.
The source IPv6 address must be the IPv6 address of a local interface, and the interface must
be up. Otherwise, no probe packets can be sent out.
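For example, a minimal RADIUS template sketch might look like the following. The server address, username, password, and shared key are illustrative only and must match what is configured on the RADIUS server:
system-view
nqa template radius radtplt
destination ip 10.2.2.2
destination port 1812
username admin
password simple abc123
key simple 123456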
Configuring the SSL template
About this task
A feature that uses the SSL template performs the SSL operation to measure the time required to
establish an SSL connection to an SSL server.
Prerequisites
Before you configure the SSL template, you must configure the SSL client policy. For information
about configuring SSL client policies, see Security Configuration Guide.
Procedure
1. Enter system view.
system-view
2. Create an SSL template and enter its view.
nqa template ssl name
3. Specify the destination IP address of the operation.
IPv4:
destination ip ip-address
IPv6:
destination ipv6 ipv6-address
By default, no destination IP address is specified.
4. Specify the destination port number for the operation.
destination port port-number
By default, the destination port number is not specified.
5. Specify an SSL client policy.
ssl-client-policy policy-name
By default, no SSL client policy is specified.
6. Specify the source IP address for the probe packets.
IPv4:
source ip ip-address
By default, the primary IPv4 address of the output interface is used as the source IPv4 address
of the probe packets.
The source IP address must be the IPv4 address of a local interface, and the interface must be
up. Otherwise, no probe packets can be sent out.
IPv6:
source ipv6 ipv6-address
By default, the primary IPv6 address of the output interface is used as the source IPv6 address
of the probe packets.
The source IPv6 address must be the IPv6 address of a local interface, and the interface must
be up. Otherwise, no probe packets can be sent out.
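For example, a minimal SSL template sketch might look like the following. The destination address, port 443, and SSL client policy name abc (which must already exist) are illustrative only:
system-view
nqa template ssl ssltplt
destination ip 10.2.2.2
destination port 443
ssl-client-policy abc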
Configuring optional parameters for the NQA template
Restrictions and guidelines
Unless otherwise specified, the following optional parameters apply to all types of NQA templates.
The parameter settings take effect only on the current NQA template.
Procedure
1. Enter system view.
system-view
2. Enter the view of an existing NQA template.
nqa template { dns | ftp | http | https | icmp | radius | ssl | tcp |
tcphalfopen | udp } name
3. Configure a description.
description text
By default, no description is configured.
4. Set the interval at which the NQA operation repeats.
frequency interval
The default setting is 5000 milliseconds.
If the operation is not completed when the interval expires, the next operation does not start.
5. Set the probe timeout time.
probe timeout timeout
The default setting is 3000 milliseconds.
6. Set the TTL for the probe packets.
ttl value
The default setting is 20.
This command is not available for the ARP template.
7. Set the ToS value in the IP header of the probe packets.
tos value
The default setting is 0.
This command is not available for the ARP template.
8. Specify the VPN instance where the operation is performed.
vpn-instance vpn-instance-name
By default, the operation is performed on the public network.
9. Set the number of consecutive successful probes to determine a successful operation event.
reaction trigger probe-pass count
The default setting is 3.
If the number of consecutive successful probes for an NQA operation is reached, the NQA
client notifies the feature that uses the template of the successful operation event.
10. Set the number of consecutive probe failures to determine an operation failure.
reaction trigger probe-fail count
The default setting is 3.
If the number of consecutive probe failures for an NQA operation is reached, the NQA client
notifies the feature that uses the NQA template of the operation failure.
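For example, the following sketch tunes an existing TCP template named tcptplt (all values are illustrative only) so that the operation repeats every 10000 milliseconds, each probe times out after 5000 milliseconds, and a success or failure event is reported after two consecutive probe results:
system-view
nqa template tcp tcptplt
frequency 10000
probe timeout 5000
reaction trigger probe-pass 2
reaction trigger probe-fail 2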
Display and maintenance commands for NQA
Execute display commands in any view.
• Display history records of NQA operations:
display nqa history [ admin-name operation-tag ]
• Display the current monitoring results of reaction entries:
display nqa reaction counters [ admin-name operation-tag [ item-number ] ]
• Display the most recent result of the NQA operation:
display nqa result [ admin-name operation-tag ]
• Display NQA server status:
display nqa server status
• Display NQA statistics:
display nqa statistics [ admin-name operation-tag ]
NQA configuration examples
Example: Configuring the ICMP echo operation
Network configuration
As shown in Figure 7, configure an ICMP echo operation on the NQA client (Device A) to test the
round-trip time to Device B. The next hop of Device A is Device C.
Figure 7 Network diagram (Device A, the NQA client, reaches Device B over two paths: one through Device C and one through Device D. Interface addresses shown in the figure: Device A 10.1.1.1/24 and 10.3.1.1/24, Device C 10.1.1.2/24 and 10.2.2.1/24, Device D 10.3.1.2/24 and 10.4.1.1/24, Device B 10.2.2.2/24 and 10.4.1.2/24.)
Procedure
# Assign IP addresses to interfaces, as shown in Figure 7. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
# Create an ICMP echo operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type icmp-echo
# Specify 10.2.2.2 as the destination IP address of ICMP echo requests.
[DeviceA-nqa-admin-test1-icmp-echo] destination ip 10.2.2.2
# Specify 10.1.1.2 as the next hop. The ICMP echo requests are sent through Device C to Device B.
[DeviceA-nqa-admin-test1-icmp-echo] next-hop ip 10.1.1.2
# Configure the ICMP echo operation to perform 10 probes.
[DeviceA-nqa-admin-test1-icmp-echo] probe count 10
# Set the probe timeout time to 500 milliseconds for the ICMP echo operation.
[DeviceA-nqa-admin-test1-icmp-echo] probe timeout 500
# Configure the ICMP echo operation to repeat every 5000 milliseconds.
[DeviceA-nqa-admin-test1-icmp-echo] frequency 5000
# Enable saving history records.
[DeviceA-nqa-admin-test1-icmp-echo] history-record enable
# Set the maximum number of history records to 10.
[DeviceA-nqa-admin-test1-icmp-echo] history-record number 10
[DeviceA-nqa-admin-test1-icmp-echo] quit
# Start the ICMP echo operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# After the ICMP echo operation runs for a period of time, stop the operation.
[DeviceA] undo nqa schedule admin test1
Verifying the configuration
# Display the most recent result of the ICMP echo operation.
[DeviceA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Send operation times: 10 Receive response times: 10
Min/Max/Average round trip time: 2/5/3
Square-Sum of round trip time: 96
Last succeeded probe time: 2019-08-23 15:00:01.2
Extended results:
Packet loss ratio: 0%
Failures due to timeout: 0
Failures due to internal error: 0
Failures due to other errors: 0
# Display the history records of the ICMP echo operation.
[DeviceA] display nqa history admin test1
NQA entry (admin admin, tag test) history records:
Index Response Status Time
370 3 Succeeded 2019-08-23 15:00:01.2
369 3 Succeeded 2019-08-23 15:00:01.2
368 3 Succeeded 2019-08-23 15:00:01.2
367 5 Succeeded 2019-08-23 15:00:01.2
366 3 Succeeded 2019-08-23 15:00:01.2
365 3 Succeeded 2019-08-23 15:00:01.2
364 3 Succeeded 2019-08-23 15:00:01.1
363 2 Succeeded 2019-08-23 15:00:01.1
362 3 Succeeded 2019-08-23 15:00:01.1
361 2 Succeeded 2019-08-23 15:00:01.1
The output shows that the packets sent by Device A can reach Device B through Device C. No
packet loss occurs during the operation. The minimum, maximum, and average round-trip times are
2, 5, and 3 milliseconds, respectively.
Example: Configuring the ICMP jitter operation
Network configuration
As shown in Figure 8, configure an ICMP jitter operation to test the jitter between Device A and
Device B.
Figure 8 Network diagram (Device A, the NQA client at 10.1.1.1/16, and Device B, the NQA server at 10.2.2.2/16, are connected across an IP network.)
Procedure
1. Assign IP addresses to interfaces, as shown in Figure 8. (Details not shown.)
2. Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
3. Configure Device A:
# Create an ICMP jitter operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type icmp-jitter
# Specify 10.2.2.2 as the destination address for the operation.
[DeviceA-nqa-admin-test1-icmp-jitter] destination ip 10.2.2.2
# Configure the operation to repeat every 1000 milliseconds.
[DeviceA-nqa-admin-test1-icmp-jitter] frequency 1000
[DeviceA-nqa-admin-test1-icmp-jitter] quit
# Start the ICMP jitter operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# After the ICMP jitter operation runs for a period of time, stop the operation.
[DeviceA] undo nqa schedule admin test1
Verifying the configuration
# Display the most recent result of the ICMP jitter operation.
[DeviceA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Send operation times: 10 Receive response times: 10
Min/Max/Average round trip time: 1/2/1
Square-Sum of round trip time: 13
Last packet received time: 2019-03-09 17:40:29.8
Extended results:
Packet loss ratio: 0%
Failures due to timeout: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packets out of sequence: 0
Packets arrived late: 0
ICMP-jitter results:
RTT number: 10
Min positive SD: 0 Min positive DS: 0
Max positive SD: 0 Max positive DS: 0
Positive SD number: 0 Positive DS number: 0
Positive SD sum: 0 Positive DS sum: 0
Positive SD average: 0 Positive DS average: 0
Positive SD square-sum: 0 Positive DS square-sum: 0
Min negative SD: 1 Min negative DS: 2
Max negative SD: 1 Max negative DS: 2
Negative SD number: 1 Negative DS number: 1
Negative SD sum: 1 Negative DS sum: 2
Negative SD average: 1 Negative DS average: 2
Negative SD square-sum: 1 Negative DS square-sum: 4
SD average: 1 DS average: 2
One way results:
Max SD delay: 1 Max DS delay: 2
Min SD delay: 1 Min DS delay: 2
Number of SD delay: 1 Number of DS delay: 1
Sum of SD delay: 1 Sum of DS delay: 2
Square-Sum of SD delay: 1 Square-Sum of DS delay: 4
Lost packets for unknown reason: 0
# Display the statistics of the ICMP jitter operation.
[DeviceA] display nqa statistics admin test1
NQA entry (admin admin, tag test1) test statistics:
NO. : 1
Start time: 2019-03-09 17:42:10.7
Life time: 156 seconds
Send operation times: 1560 Receive response times: 1560
Min/Max/Average round trip time: 1/2/1
Square-Sum of round trip time: 1563
Extended results:
Packet loss ratio: 0%
Failures due to timeout: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packets out of sequence: 0
Packets arrived late: 0
ICMP-jitter results:
RTT number: 1560
Min positive SD: 1 Min positive DS: 1
Max positive SD: 1 Max positive DS: 2
Positive SD number: 18 Positive DS number: 46
Positive SD sum: 18 Positive DS sum: 49
Positive SD average: 1 Positive DS average: 1
Positive SD square-sum: 18 Positive DS square-sum: 55
Min negative SD: 1 Min negative DS: 1
Max negative SD: 1 Max negative DS: 2
Negative SD number: 24 Negative DS number: 57
Negative SD sum: 24 Negative DS sum: 58
Negative SD average: 1 Negative DS average: 1
Negative SD square-sum: 24 Negative DS square-sum: 60
SD average: 16 DS average: 2
One way results:
Max SD delay: 1 Max DS delay: 2
Min SD delay: 1 Min DS delay: 1
Number of SD delay: 4 Number of DS delay: 4
Sum of SD delay: 4 Sum of DS delay: 5
Square-Sum of SD delay: 4 Square-Sum of DS delay: 7
Lost packets for unknown reason: 0
Example: Configuring the DHCP operation
Network configuration
As shown in Figure 9, configure a DHCP operation to test the time required for Switch A to obtain an
IP address from the DHCP server (Switch B).
Figure 9 Network diagram (Switch A, the NQA client, uses VLAN-interface 2 at 10.1.1.1/16 to connect to Switch B, the DHCP server, whose VLAN-interface 2 is at 10.1.1.2/16.)
Procedure
# Create a DHCP operation.
<SwitchA> system-view
[SwitchA] nqa entry admin test1
[SwitchA-nqa-admin-test1] type dhcp
# Specify the DHCP server address (10.1.1.2) as the destination address.
[SwitchA-nqa-admin-test1-dhcp] destination ip 10.1.1.2
# Enable the saving of history records.
[SwitchA-nqa-admin-test1-dhcp] history-record enable
[SwitchA-nqa-admin-test1-dhcp] quit
# Start the DHCP operation.
[SwitchA] nqa schedule admin test1 start-time now lifetime forever
# After the DHCP operation runs for a period of time, stop the operation.
[SwitchA] undo nqa schedule admin test1
Verifying the configuration
# Display the most recent result of the DHCP operation.
[SwitchA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Send operation times: 1 Receive response times: 1
Min/Max/Average round trip time: 512/512/512
Square-Sum of round trip time: 262144
Last succeeded probe time: 2019-11-22 09:56:03.2
Extended results:
Packet loss ratio: 0%
Failures due to timeout: 0
Failures due to internal error: 0
Failures due to other errors: 0
# Display the history records of the DHCP operation.
[SwitchA] display nqa history admin test1
NQA entry (admin admin, tag test1) history records:
Index Response Status Time
1 512 Succeeded 2019-11-22 09:56:03.2
The output shows that it took Switch A 512 milliseconds to obtain an IP address from the DHCP
server.
Example: Configuring the DNS operation
Network configuration
As shown in Figure 10, configure a DNS operation to test whether Device A can perform address
resolution through the DNS server and test the resolution time.
Figure 10 Network diagram (Device A, the NQA client at 10.1.1.1/16, reaches the DNS server at 10.2.2.2/16 across an IP network.)
Procedure
# Assign IP addresses to interfaces, as shown in Figure 10. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
# Create a DNS operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type dns
# Specify the IP address of the DNS server (10.2.2.2) as the destination address.
[DeviceA-nqa-admin-test1-dns] destination ip 10.2.2.2
# Specify host.com as the domain name to be translated.
[DeviceA-nqa-admin-test1-dns] resolve-target host.com
# Enable the saving of history records.
[DeviceA-nqa-admin-test1-dns] history-record enable
[DeviceA-nqa-admin-test1-dns] quit
# Start the DNS operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# After the DNS operation runs for a period of time, stop the operation.
[DeviceA] undo nqa schedule admin test1
Verifying the configuration
# Display the most recent result of the DNS operation.
[DeviceA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Send operation times: 1 Receive response times: 1
Min/Max/Average round trip time: 62/62/62
Square-Sum of round trip time: 3844
Last succeeded probe time: 2019-11-10 10:49:37.3
Extended results:
Packet loss ratio: 0%
Failures due to timeout: 0
Failures due to internal error: 0
Failures due to other errors: 0
# Display the history records of the DNS operation.
[DeviceA] display nqa history admin test1
NQA entry (admin admin, tag test) history records:
Index Response Status Time
1 62 Succeeded 2019-11-10 10:49:37.3
The output shows that it took Device A 62 milliseconds to translate domain name host.com into an
IP address.
Example: Configuring the FTP operation
Network configuration
As shown in Figure 11, configure an FTP operation to test the time required for Device A to upload a
file to the FTP server. The login username and password are admin and systemtest, respectively.
The file to be transferred to the FTP server is config.txt.
Figure 11 Network diagram (Device A, the NQA client at 10.1.1.1/16, and Device B, the FTP server at 10.2.2.2/16, are connected across an IP network.)
Procedure
# Assign IP addresses to interfaces, as shown in Figure 11. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
# Create an FTP operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type ftp
# Specify the URL of the FTP server.
[DeviceA-nqa-admin-test1-ftp] url ftp://10.2.2.2
# Specify 10.1.1.1 as the source IP address.
[DeviceA-nqa-admin-test1-ftp] source ip 10.1.1.1
# Configure the device to upload file config.txt to the FTP server.
[DeviceA-nqa-admin-test1-ftp] operation put
[DeviceA-nqa-admin-test1-ftp] filename config.txt
# Set the username to admin for the FTP operation.
[DeviceA-nqa-admin-test1-ftp] username admin
# Set the password to systemtest for the FTP operation.
[DeviceA-nqa-admin-test1-ftp] password simple systemtest
# Enable the saving of history records.
[DeviceA-nqa-admin-test1-ftp] history-record enable
[DeviceA-nqa-admin-test1-ftp] quit
# Start the FTP operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# After the FTP operation runs for a period of time, stop the operation.
[DeviceA] undo nqa schedule admin test1
Verifying the configuration
# Display the most recent result of the FTP operation.
[DeviceA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Send operation times: 1 Receive response times: 1
Min/Max/Average round trip time: 173/173/173
Square-Sum of round trip time: 29929
Last succeeded probe time: 2019-11-22 10:07:28.6
Extended results:
Packet loss ratio: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to internal error: 0
Failures due to other errors: 0
# Display the history records of the FTP operation.
[DeviceA] display nqa history admin test1
NQA entry (admin admin, tag test1) history records:
Index Response Status Time
1 173 Succeeded 2019-11-22 10:07:28.6
The output shows that it took Device A 173 milliseconds to upload a file to the FTP server.
Example: Configuring the HTTP operation
Network configuration
As shown in Figure 12, configure an HTTP operation on the NQA client to test the time required to
obtain data from the HTTP server.
Figure 12 Network diagram (Device A, the NQA client at 10.1.1.1/16, and Device B, the HTTP server at 10.2.2.2/16, are connected across an IP network.)
Procedure
# Assign IP addresses to interfaces, as shown in Figure 12. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
# Create an HTTP operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type http
# Specify the URL of the HTTP server.
[DeviceA-nqa-admin-test1-http] url http://10.2.2.2/index.htm
# Configure the HTTP operation to get data from the HTTP server.
[DeviceA-nqa-admin-test1-http] operation get
# Configure the operation to use HTTP version 1.0.
[DeviceA-nqa-admin-test1-http] version v1.0
# Enable the saving of history records.
[DeviceA-nqa-admin-test1-http] history-record enable
[DeviceA-nqa-admin-test1-http] quit
# Start the HTTP operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# After the HTTP operation runs for a period of time, stop the operation.
[DeviceA] undo nqa schedule admin test1
Verifying the configuration
# Display the most recent result of the HTTP operation.
[DeviceA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Send operation times: 1 Receive response times: 1
Min/Max/Average round trip time: 64/64/64
Square-Sum of round trip time: 4096
Last succeeded probe time: 2019-11-22 10:12:47.9
Extended results:
Packet loss ratio: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to internal error: 0
Failures due to other errors: 0
# Display the history records of the HTTP operation.
[DeviceA] display nqa history admin test1
NQA entry (admin admin, tag test1) history records:
Index Response Status Time
1 64 Succeeded 2019-11-22 10:12:47.9
The output shows that it took Device A 64 milliseconds to obtain data from the HTTP server.
Example: Configuring the UDP jitter operation
Network configuration
As shown in Figure 13, configure a UDP jitter operation to test the jitter, delay, and round-trip time
between Device A and Device B.
Figure 13 Network diagram (Device A, the NQA client at 10.1.1.1/16, and Device B, the NQA server at 10.2.2.2/16, are connected across an IP network.)
Procedure
1. Assign IP addresses to interfaces, as shown in Figure 13. (Details not shown.)
2. Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
3. Configure Device B:
# Enable the NQA server.
<DeviceB> system-view
[DeviceB] nqa server enable
# Configure a listening service to listen to UDP port 9000 on IP address 10.2.2.2.
[DeviceB] nqa server udp-echo 10.2.2.2 9000
4. Configure Device A:
# Create a UDP jitter operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type udp-jitter
# Specify 10.2.2.2 as the destination address of the operation.
[DeviceA-nqa-admin-test1-udp-jitter] destination ip 10.2.2.2
# Set the destination port number to 9000.
[DeviceA-nqa-admin-test1-udp-jitter] destination port 9000
# Configure the operation to repeat every 1000 milliseconds.
[DeviceA-nqa-admin-test1-udp-jitter] frequency 1000
[DeviceA-nqa-admin-test1-udp-jitter] quit
# Start the UDP jitter operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# After the UDP jitter operation runs for a period of time, stop the operation.
[DeviceA] undo nqa schedule admin test1
Verifying the configuration
# Display the most recent result of the UDP jitter operation.
[DeviceA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Send operation times: 10 Receive response times: 10
Min/Max/Average round trip time: 15/32/17
Square-Sum of round trip time: 3235
Last packet received time: 2019-05-29 13:56:17.6
Extended results:
Packet loss ratio: 0%
Failures due to timeout: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packets out of sequence: 0
Packets arrived late: 0
UDP-jitter results:
RTT number: 10
Min positive SD: 4 Min positive DS: 1
Max positive SD: 21 Max positive DS: 28
Positive SD number: 5 Positive DS number: 4
Positive SD sum: 52 Positive DS sum: 38
Positive SD average: 10 Positive DS average: 10
Positive SD square-sum: 754 Positive DS square-sum: 460
Min negative SD: 1 Min negative DS: 6
Max negative SD: 13 Max negative DS: 22
Negative SD number: 4 Negative DS number: 5
Negative SD sum: 38 Negative DS sum: 52
Negative SD average: 10 Negative DS average: 10
Negative SD square-sum: 460 Negative DS square-sum: 754
SD average: 10 DS average: 10
One way results:
Max SD delay: 15 Max DS delay: 16
Min SD delay: 7 Min DS delay: 7
Number of SD delay: 10 Number of DS delay: 10
Sum of SD delay: 78 Sum of DS delay: 85
Square-Sum of SD delay: 666 Square-Sum of DS delay: 787
SD lost packets: 0 DS lost packets: 0
Lost packets for unknown reason: 0
# Display the statistics of the UDP jitter operation.
[DeviceA] display nqa statistics admin test1
NQA entry (admin admin, tag test1) test statistics:
NO. : 1
Start time: 2019-05-29 13:56:14.0
Life time: 47 seconds
Send operation times: 410 Receive response times: 410
Min/Max/Average round trip time: 1/93/19
Square-Sum of round trip time: 206176
Extended results:
Packet loss ratio: 0%
Failures due to timeout: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packets out of sequence: 0
Packets arrived late: 0
UDP-jitter results:
RTT number: 410
Min positive SD: 3 Min positive DS: 1
Max positive SD: 30 Max positive DS: 79
Positive SD number: 186 Positive DS number: 158
Positive SD sum: 2602 Positive DS sum: 1928
Positive SD average: 13 Positive DS average: 12
Positive SD square-sum: 45304 Positive DS square-sum: 31682
Min negative SD: 1 Min negative DS: 1
Max negative SD: 30 Max negative DS: 78
Negative SD number: 181 Negative DS number: 209
Negative SD sum: 181 Negative DS sum: 209
Negative SD average: 13 Negative DS average: 14
Negative SD square-sum: 46994 Negative DS square-sum: 3030
SD average: 9 DS average: 1
One way results:
Max SD delay: 46 Max DS delay: 46
Min SD delay: 7 Min DS delay: 7
Number of SD delay: 410 Number of DS delay: 410
Sum of SD delay: 3705 Sum of DS delay: 3891
Square-Sum of SD delay: 45987 Square-Sum of DS delay: 49393
SD lost packets: 0 DS lost packets: 0
Lost packets for unknown reason: 0
Example: Configuring the SNMP operation
Network configuration
As shown in Figure 14, configure an SNMP operation to test the time the NQA client uses to get a
response from the SNMP agent.
Figure 14 Network diagram (Device A, the NQA client at 10.1.1.1/16, and Device B, the SNMP agent at 10.2.2.2/16, are connected across an IP network.)
Procedure
1. Assign IP addresses to interfaces, as shown in Figure 14. (Details not shown.)
2. Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
3. Configure the SNMP agent (Device B):
# Set the SNMP version to all.
<DeviceB> system-view
[DeviceB] snmp-agent sys-info version all
# Set the read community to public.
[DeviceB] snmp-agent community read public
# Set the write community to private.
[DeviceB] snmp-agent community write private
4. Configure Device A:
# Create an SNMP operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type snmp
# Specify 10.2.2.2 as the destination IP address of the SNMP operation.
[DeviceA-nqa-admin-test1-snmp] destination ip 10.2.2.2
# Enable the saving of history records.
[DeviceA-nqa-admin-test1-snmp] history-record enable
[DeviceA-nqa-admin-test1-snmp] quit
# Start the SNMP operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# After the SNMP operation runs for a period of time, stop the operation.
[DeviceA] undo nqa schedule admin test1
Verifying the configuration
# Display the most recent result of the SNMP operation.
[DeviceA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Send operation times: 1 Receive response times: 1
Min/Max/Average round trip time: 50/50/50
Square-Sum of round trip time: 2500
Last succeeded probe time: 2019-11-22 10:24:41.1
Extended results:
Packet loss ratio: 0%
Failures due to timeout: 0
Failures due to internal error: 0
Failures due to other errors: 0
# Display the history records of the SNMP operation.
[DeviceA] display nqa history admin test1
NQA entry (admin admin, tag test1) history records:
Index Response Status Time
1 50 Succeeded 2019-11-22 10:24:41.1
The output shows that it took Device A 50 milliseconds to receive a response from the SNMP
agent.
Example: Configuring the TCP operation
Network configuration
As shown in Figure 15, configure a TCP operation to test the time required for Device A to establish
a TCP connection with Device B.
Figure 15 Network diagram (Device A, the NQA client at 10.1.1.1/16, and Device B, the NQA server at 10.2.2.2/16, are connected across an IP network.)
Procedure
1. Assign IP addresses to interfaces, as shown in Figure 15. (Details not shown.)
2. Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
3. Configure Device B:
# Enable the NQA server.
<DeviceB> system-view
[DeviceB] nqa server enable
# Configure a listening service to listen to TCP port 9000 on IP address 10.2.2.2.
[DeviceB] nqa server tcp-connect 10.2.2.2 9000
4. Configure Device A:
# Create a TCP operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type tcp
# Specify 10.2.2.2 as the destination IP address.
[DeviceA-nqa-admin-test1-tcp] destination ip 10.2.2.2
# Set the destination port number to 9000.
[DeviceA-nqa-admin-test1-tcp] destination port 9000
# Enable the saving of history records.
[DeviceA-nqa-admin-test1-tcp] history-record enable
[DeviceA-nqa-admin-test1-tcp] quit
# Start the TCP operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# After the TCP operation runs for a period of time, stop the operation.
[DeviceA] undo nqa schedule admin test1
Verifying the configuration
# Display the most recent result of the TCP operation.
[DeviceA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Send operation times: 1 Receive response times: 1
Min/Max/Average round trip time: 13/13/13
Square-Sum of round trip time: 169
Last succeeded probe time: 2019-11-22 10:27:25.1
Extended results:
Packet loss ratio: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to internal error: 0
Failures due to other errors: 0
# Display the history records of the TCP operation.
[DeviceA] display nqa history admin test1
NQA entry (admin admin, tag test1) history records:
Index Response Status Time
1 13 Succeeded 2019-11-22 10:27:25.1
The output shows that it took Device A 13 milliseconds to establish a TCP connection to port
9000 on the NQA server.
Example: Configuring the UDP echo operation
Network configuration
As shown in Figure 16, configure a UDP echo operation on the NQA client to test the round-trip time
to Device B. The destination port number is 8000.
Figure 16 Network diagram (Device A, the NQA client at 10.1.1.1/16, and Device B, the NQA server at 10.2.2.2/16, are connected across an IP network.)
Procedure
1. Assign IP addresses to interfaces, as shown in Figure 16. (Details not shown.)
2. Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
3. Configure Device B:
# Enable the NQA server.
<DeviceB> system-view
[DeviceB] nqa server enable
# Configure a listening service to listen to UDP port 8000 on IP address 10.2.2.2.
[DeviceB] nqa server udp-echo 10.2.2.2 8000
4. Configure Device A:
# Create a UDP echo operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type udp-echo
# Specify 10.2.2.2 as the destination IP address.
[DeviceA-nqa-admin-test1-udp-echo] destination ip 10.2.2.2
# Set the destination port number to 8000.
[DeviceA-nqa-admin-test1-udp-echo] destination port 8000
# Enable the saving of history records.
[DeviceA-nqa-admin-test1-udp-echo] history-record enable
[DeviceA-nqa-admin-test1-udp-echo] quit
# Start the UDP echo operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# After the UDP echo operation runs for a period of time, stop the operation.
[DeviceA] undo nqa schedule admin test1
Verifying the configuration
# Display the most recent result of the UDP echo operation.
[DeviceA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Send operation times: 1 Receive response times: 1
Min/Max/Average round trip time: 25/25/25
Square-Sum of round trip time: 625
Last succeeded probe time: 2019-11-22 10:36:17.9
Extended results:
Packet loss ratio: 0%
Failures due to timeout: 0
Failures due to internal error: 0
Failures due to other errors: 0
# Display the history records of the UDP echo operation.
[DeviceA] display nqa history admin test1
NQA entry (admin admin, tag test1) history records:
Index Response Status Time
1 25 Succeeded 2019-11-22 10:36:17.9
The output shows that the round-trip time between Device A and port 8000 on Device B is 25
milliseconds.
Example: Configuring the UDP tracert operation
Network configuration
As shown in Figure 17, configure a UDP tracert operation to determine the routing path from Device
A to Device B.
Figure 17 Network diagram (Device A, the NQA client at 10.1.1.1/16, reaches Device B at 10.2.2.2/16 across an IP network.)
Procedure
1. Assign IP addresses to interfaces, as shown in Figure 17. (Details not shown.)
2. Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
3. Execute the ip ttl-expires enable command on the intermediate devices and execute
the ip unreachables enable command on Device B.
4. Configure Device A:
# Create a UDP tracert operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type udp-tracert
# Specify 10.2.2.2 as the destination IP address.
[DeviceA-nqa-admin-test1-udp-tracert] destination ip 10.2.2.2
# Set the destination port number to 33434.
[DeviceA-nqa-admin-test1-udp-tracert] destination port 33434
# Configure Device A to perform three probes to each hop.
[DeviceA-nqa-admin-test1-udp-tracert] probe count 3
# Set the probe timeout time to 500 milliseconds.
[DeviceA-nqa-admin-test1-udp-tracert] probe timeout 500
# Configure the UDP tracert operation to repeat every 5000 milliseconds.
[DeviceA-nqa-admin-test1-udp-tracert] frequency 5000
# Specify GigabitEthernet 1/0/1 as the output interface for UDP packets.
[DeviceA-nqa-admin-test1-udp-tracert] out interface gigabitethernet 1/0/1
# Enable the no-fragmentation feature.
[DeviceA-nqa-admin-test1-udp-tracert] no-fragment enable
# Set the maximum number of consecutive probe failures to 6.
[DeviceA-nqa-admin-test1-udp-tracert] max-failure 6
# Set the TTL value to 1 for UDP packets in the start round of the UDP tracert operation.
[DeviceA-nqa-admin-test1-udp-tracert] init-ttl 1
# Start the UDP tracert operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# After the UDP tracert operation runs for a period of time, stop the operation.
[DeviceA] undo nqa schedule admin test1
Verifying the configuration
# Display the most recent result of the UDP tracert operation.
[DeviceA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Send operation times: 6 Receive response times: 6
Min/Max/Average round trip time: 1/1/1
Square-Sum of round trip time: 1
Last succeeded probe time: 2019-09-09 14:46:06.2
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to internal error: 0
Failures due to other errors: 0
UDP-tracert results:
TTL Hop IP Time
1 3.1.1.1 2019-09-09 14:46:03.2
2 10.2.2.2 2019-09-09 14:46:06.2
# Display the history records of the UDP tracert operation.
[DeviceA] display nqa history admin test1
NQA entry (admin admin, tag test1) history records:
Index TTL Response Hop IP Status Time
1 2 2 10.2.2.2 Succeeded 2019-09-09 14:46:06.2
1 2 1 10.2.2.2 Succeeded 2019-09-09 14:46:05.2
1 2 2 10.2.2.2 Succeeded 2019-09-09 14:46:04.2
1 1 1 3.1.1.1 Succeeded 2019-09-09 14:46:03.2
1 1 2 3.1.1.1 Succeeded 2019-09-09 14:46:02.2
1 1 1 3.1.1.1 Succeeded 2019-09-09 14:46:01.2
Example: Configuring the voice operation
Network configuration
As shown in Figure 18, configure a voice operation to test jitters, delay, MOS, and ICPIF between
Device A and Device B.
Figure 18 Network diagram (Device A, the NQA client at 10.1.1.1/16, and Device B, the NQA server at 10.2.2.2/16, are connected across an IP network.)
Procedure
1. Assign IP addresses to interfaces, as shown in Figure 18. (Details not shown.)
2. Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
3. Configure Device B:
# Enable the NQA server.
<DeviceB> system-view
[DeviceB] nqa server enable
# Configure a listening service to listen to UDP port 9000 on IP address 10.2.2.2.
[DeviceB] nqa server udp-echo 10.2.2.2 9000
4. Configure Device A:
# Create a voice operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type voice
# Specify 10.2.2.2 as the destination IP address.
[DeviceA-nqa-admin-test1-voice] destination ip 10.2.2.2
# Set the destination port number to 9000.
[DeviceA-nqa-admin-test1-voice] destination port 9000
[DeviceA-nqa-admin-test1-voice] quit
# Start the voice operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# After the voice operation runs for a period of time, stop the operation.
[DeviceA] undo nqa schedule admin test1
Verifying the configuration
# Display the most recent result of the voice operation.
[DeviceA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Send operation times: 1000 Receive response times: 1000
Min/Max/Average round trip time: 31/1328/33
Square-Sum of round trip time: 2844813
Last packet received time: 2019-06-13 09:49:31.1
Extended results:
Packet loss ratio: 0%
Failures due to timeout: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packets out of sequence: 0
Packets arrived late: 0
Voice results:
RTT number: 1000
Min positive SD: 1 Min positive DS: 1
Max positive SD: 204 Max positive DS: 1297
Positive SD number: 257 Positive DS number: 259
Positive SD sum: 759 Positive DS sum: 1797
Positive SD average: 2 Positive DS average: 6
Positive SD square-sum: 54127 Positive DS square-sum: 1691967
Min negative SD: 1 Min negative DS: 1
Max negative SD: 203 Max negative DS: 1297
Negative SD number: 255 Negative DS number: 259
Negative SD sum: 759 Negative DS sum: 1796
Negative SD average: 2 Negative DS average: 6
Negative SD square-sum: 53655 Negative DS square-sum: 1691776
SD average: 2 DS average: 6
One way results:
Max SD delay: 343 Max DS delay: 985
Min SD delay: 343 Min DS delay: 985
Number of SD delay: 1 Number of DS delay: 1
Sum of SD delay: 343 Sum of DS delay: 985
Square-Sum of SD delay: 117649 Square-Sum of DS delay: 970225
SD lost packets: 0 DS lost packets: 0
Lost packets for unknown reason: 0
Voice scores:
MOS value: 4.38 ICPIF value: 0
# Display the statistics of the voice operation.
[DeviceA] display nqa statistics admin test1
NQA entry (admin admin, tag test1) test statistics:
NO. : 1
Start time: 2019-06-13 09:45:37.8
Life time: 331 seconds
Send operation times: 4000 Receive response times: 4000
Min/Max/Average round trip time: 15/1328/32
Square-Sum of round trip time: 7160528
Extended results:
Packet loss ratio: 0%
Failures due to timeout: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packets out of sequence: 0
Packets arrived late: 0
Voice results:
RTT number: 4000
Min positive SD: 1 Min positive DS: 1
Max positive SD: 360 Max positive DS: 1297
Positive SD number: 1030 Positive DS number: 1024
Positive SD sum: 4363 Positive DS sum: 5423
Positive SD average: 4 Positive DS average: 5
Positive SD square-sum: 497725 Positive DS square-sum: 2254957
Min negative SD: 1 Min negative DS: 1
Max negative SD: 360 Max negative DS: 1297
Negative SD number: 1028 Negative DS number: 1022
Negative SD sum: 1028 Negative DS sum: 1022
Negative SD average: 4 Negative DS average: 5
Negative SD square-sum: 495901 Negative DS square-sum: 5419
SD average: 16 DS average: 2
One way results:
Max SD delay: 359 Max DS delay: 985
Min SD delay: 0 Min DS delay: 0
Number of SD delay: 4 Number of DS delay: 4
Sum of SD delay: 1390 Sum of DS delay: 1079
Square-Sum of SD delay: 483202 Square-Sum of DS delay: 973651
SD lost packets: 0 DS lost packets: 0
Lost packets for unknown reason: 0
Voice scores:
Max MOS value: 4.38 Min MOS value: 4.38
Max ICPIF value: 0 Min ICPIF value: 0
Example: Configuring the DLSw operation
Network configuration
As shown in Figure 19, configure a DLSw operation to test the response time of the DLSw device.
Figure 19 Network diagram (Device A, the NQA client at 10.1.1.1/16, and Device B, the DLSw device at 10.2.2.2/16, are connected across an IP network.)
Procedure
# Assign IP addresses to interfaces, as shown in Figure 19. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
# Create a DLSw operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type dlsw
# Specify 10.2.2.2 as the destination IP address.
[DeviceA-nqa-admin-test1-dlsw] destination ip 10.2.2.2
# Enable the saving of history records.
[DeviceA-nqa-admin-test1-dlsw] history-record enable
[DeviceA-nqa-admin-test1-dlsw] quit
# Start the DLSw operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# After the DLSw operation runs for a period of time, stop the operation.
[DeviceA] undo nqa schedule admin test1
Verifying the configuration
# Display the most recent result of the DLSw operation.
[DeviceA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Send operation times: 1 Receive response times: 1
Min/Max/Average round trip time: 19/19/19
Square-Sum of round trip time: 361
Last succeeded probe time: 2019-11-22 10:40:27.7
Extended results:
Packet loss ratio: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to internal error: 0
Failures due to other errors: 0

# Display the history records of the DLSw operation.


[DeviceA] display nqa history admin test1
NQA entry (admin admin, tag test1) history records:
Index Response Status Time
1 19 Succeeded 2019-11-22 10:40:27.7

The output shows that the response time of the DLSw device is 19 milliseconds.

Example: Configuring the path jitter operation


Network configuration
As shown in Figure 20, configure a path jitter operation to test the round trip time and jitter from
Device A to Device B and Device C.
Figure 20 Network diagram
NQA client

10.1.1.1/24 10.1.1.2/24 10.2.2.2/24

Device A Device B Device C

Procedure
# Assign IP addresses to interfaces, as shown in Figure 20. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
# Execute the ip ttl-expires enable command on Device B and execute the ip
unreachables enable command on Device C.
# Create a path jitter operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type path-jitter

# Specify 10.2.2.2 as the destination IP address of ICMP echo requests.


[DeviceA-nqa-admin-test1-path-jitter] destination ip 10.2.2.2

# Configure the path jitter operation to repeat every 10000 milliseconds.


[DeviceA-nqa-admin-test1-path-jitter] frequency 10000
[DeviceA-nqa-admin-test1-path-jitter] quit

# Start the path jitter operation.


[DeviceA] nqa schedule admin test1 start-time now lifetime forever

# After the path jitter operation runs for a period of time, stop the operation.
[DeviceA] undo nqa schedule admin test1

Verifying the configuration
# Display the most recent result of the path jitter operation.
[DeviceA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Hop IP 10.1.1.2
Basic Results
Send operation times: 10 Receive response times: 10
Min/Max/Average round trip time: 9/21/14
Square-Sum of round trip time: 2419
Extended Results
Failures due to timeout: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packets out of sequence: 0
Packets arrived late: 0
Path-Jitter Results
Jitter number: 9
Min/Max/Average jitter: 1/10/4
Positive jitter number: 6
Min/Max/Average positive jitter: 1/9/4
Sum/Square-Sum positive jitter: 25/173
Negative jitter number: 3
Min/Max/Average negative jitter: 2/10/6
Sum/Square-Sum positive jitter: 19/153

Hop IP 10.2.2.2
Basic Results
Send operation times: 10 Receive response times: 10
Min/Max/Average round trip time: 15/40/28
Square-Sum of round trip time: 4493
Extended Results
Failures due to timeout: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packets out of sequence: 0
Packets arrived late: 0
Path-Jitter Results
Jitter number: 9
Min/Max/Average jitter: 1/10/4
Positive jitter number: 6
Min/Max/Average positive jitter: 1/9/4
Sum/Square-Sum positive jitter: 25/173
Negative jitter number: 3
Min/Max/Average negative jitter: 2/10/6
Sum/Square-Sum positive jitter: 19/153

Example: Configuring NQA collaboration
Network configuration
As shown in Figure 21, configure a static route to Switch C with Switch B as the next hop on Switch
A. Associate the static route, a track entry, and an ICMP echo operation to monitor the state of the
static route.
Figure 21 Network diagram
(Switch A connects to Switch B through VLAN-interface 3: 10.2.1.2/24 on Switch A and 10.2.1.1/24 on Switch B. Switch B connects to Switch C through VLAN-interface 2: 10.1.1.1/24 on Switch B and 10.1.1.2/24 on Switch C.)

Procedure
1. Assign IP addresses to interfaces, as shown in Figure 21. (Details not shown.)
2. On Switch A, configure a static route, and associate the static route with track entry 1.
<SwitchA> system-view
[SwitchA] ip route-static 10.1.1.2 24 10.2.1.1 track 1
3. On Switch A, configure an ICMP echo operation:
# Create an NQA operation with administrator name admin and operation tag test1.
[SwitchA] nqa entry admin test1
# Configure the NQA operation type as ICMP echo.
[SwitchA-nqa-admin-test1] type icmp-echo
# Specify 10.2.1.1 as the destination IP address.
[SwitchA-nqa-admin-test1-icmp-echo] destination ip 10.2.1.1
# Configure the operation to repeat every 100 milliseconds.
[SwitchA-nqa-admin-test1-icmp-echo] frequency 100
# Create reaction entry 1. If the number of consecutive probe failures reaches 5, collaboration is
triggered.
[SwitchA-nqa-admin-test1-icmp-echo] reaction 1 checked-element probe-fail
threshold-type consecutive 5 action-type trigger-only
[SwitchA-nqa-admin-test1-icmp-echo] quit
# Start the ICMP operation.
[SwitchA] nqa schedule admin test1 start-time now lifetime forever
4. On Switch A, create track entry 1, and associate it with reaction entry 1 of the NQA operation.
[SwitchA] track 1 nqa entry admin test1 reaction 1

Verifying the configuration


# Display information about all the track entries on Switch A.
[SwitchA] display track all
Track ID: 1
State: Positive
Duration: 0 days 0 hours 0 minutes 0 seconds

Notification delay: Positive 0, Negative 0 (in seconds)
Tracked object:
NQA entry: admin test1
Reaction: 1

# Display brief information about active routes in the routing table on Switch A.
[SwitchA] display ip routing-table

Destinations : 13 Routes : 13

Destination/Mask Proto Pre Cost NextHop Interface


0.0.0.0/32 Direct 0 0 127.0.0.1 InLoop0
10.1.1.0/24 Static 60 0 10.2.1.1 Vlan3
10.2.1.0/24 Direct 0 0 10.2.1.2 Vlan3
10.2.1.0/32 Direct 0 0 10.2.1.2 Vlan3
10.2.1.2/32 Direct 0 0 127.0.0.1 InLoop0
10.2.1.255/32 Direct 0 0 10.2.1.2 Vlan3
127.0.0.0/8 Direct 0 0 127.0.0.1 InLoop0
127.0.0.0/32 Direct 0 0 127.0.0.1 InLoop0
127.0.0.1/32 Direct 0 0 127.0.0.1 InLoop0
127.255.255.255/32 Direct 0 0 127.0.0.1 InLoop0
224.0.0.0/4 Direct 0 0 0.0.0.0 NULL0
224.0.0.0/24 Direct 0 0 0.0.0.0 NULL0
255.255.255.255/32 Direct 0 0 127.0.0.1 InLoop0

The output shows that the static route with the next hop 10.2.1.1 is active, and the status of the track
entry is positive.
# Remove the IP address of VLAN-interface 3 on Switch B.
<SwitchB> system-view
[SwitchB] interface vlan-interface 3
[SwitchB-Vlan-interface3] undo ip address

# Display information about all the track entries on Switch A.


[SwitchA] display track all
Track ID: 1
State: Negative
Duration: 0 days 0 hours 0 minutes 0 seconds
Notification delay: Positive 0, Negative 0 (in seconds)
Tracked object:
NQA entry: admin test1
Reaction: 1

# Display brief information about active routes in the routing table on Switch A.
[SwitchA] display ip routing-table

Destinations : 12 Routes : 12

Destination/Mask Proto Pre Cost NextHop Interface


0.0.0.0/32 Direct 0 0 127.0.0.1 InLoop0
10.2.1.0/24 Direct 0 0 10.2.1.2 Vlan3
10.2.1.0/32 Direct 0 0 10.2.1.2 Vlan3

10.2.1.2/32 Direct 0 0 127.0.0.1 InLoop0
10.2.1.255/32 Direct 0 0 10.2.1.2 Vlan3
127.0.0.0/8 Direct 0 0 127.0.0.1 InLoop0
127.0.0.0/32 Direct 0 0 127.0.0.1 InLoop0
127.0.0.1/32 Direct 0 0 127.0.0.1 InLoop0
127.255.255.255/32 Direct 0 0 127.0.0.1 InLoop0
224.0.0.0/4 Direct 0 0 0.0.0.0 NULL0
224.0.0.0/24 Direct 0 0 0.0.0.0 NULL0
255.255.255.255/32 Direct 0 0 127.0.0.1 InLoop0

The output shows that the static route does not exist, and the status of the track entry is negative.

Example: Configuring the ICMP template


Network configuration
As shown in Figure 22, configure an ICMP template for a feature to perform the ICMP echo operation
from Device A to Device B.
Figure 22 Network diagram
(Device A, the NQA client, reaches Device B over two paths. Path through Device C: Device A 10.1.1.1/24, Device C 10.1.1.2/24 and 10.2.2.1/24, Device B 10.2.2.2/24. Path through Device D: Device A 10.3.1.1/24, Device D 10.3.1.2/24 and 10.4.1.1/24, Device B 10.4.1.2/24.)

Procedure
# Assign IP addresses to interfaces, as shown in Figure 22. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
# Create ICMP template icmp.
<DeviceA> system-view
[DeviceA] nqa template icmp icmp

# Specify 10.2.2.2 as the destination IP address of ICMP echo requests.


[DeviceA-nqatplt-icmp-icmp] destination ip 10.2.2.2

# Set the probe timeout time to 500 milliseconds for the ICMP echo operation.
[DeviceA-nqatplt-icmp-icmp] probe timeout 500

# Configure the ICMP echo operation to repeat every 3000 milliseconds.
[DeviceA-nqatplt-icmp-icmp] frequency 3000

# Configure the NQA client to notify the feature of the successful operation event if the number of
consecutive successful probes reaches 2.
[DeviceA-nqatplt-icmp-icmp] reaction trigger probe-pass 2

# Configure the NQA client to notify the feature of the operation failure if the number of consecutive
failed probes reaches 2.
[DeviceA-nqatplt-icmp-icmp] reaction trigger probe-fail 2

Example: Configuring the DNS template


Network configuration
As shown in Figure 23, configure a DNS template for a feature to perform the DNS operation. The
operation tests whether Device A can perform the address resolution through the DNS server.
Figure 23 Network diagram

NQA client DNS server


10.1.1.1/16 10.2.2.2/16
IP network

Device A

Procedure
# Assign IP addresses to interfaces, as shown in Figure 23. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
# Create DNS template dns.
<DeviceA> system-view
[DeviceA] nqa template dns dns

# Specify the IP address of the DNS server (10.2.2.2) as the destination IP address.
[DeviceA-nqatplt-dns-dns] destination ip 10.2.2.2

# Specify host.com as the domain name to be translated.


[DeviceA-nqatplt-dns-dns] resolve-target host.com

# Set the domain name resolution type to type A.


[DeviceA-nqatplt-dns-dns] resolve-type A

# Specify 3.3.3.3 as the expected IP address.


[DeviceA-nqatplt-dns-dns] expect ip 3.3.3.3

# Configure the NQA client to notify the feature of the successful operation event if the number of
consecutive successful probes reaches 2.
[DeviceA-nqatplt-dns-dns] reaction trigger probe-pass 2

# Configure the NQA client to notify the feature of the operation failure if the number of consecutive
failed probes reaches 2.
[DeviceA-nqatplt-dns-dns] reaction trigger probe-fail 2

Example: Configuring the TCP template
Network configuration
As shown in Figure 24, configure a TCP template for a feature to perform the TCP operation. The
operation tests whether Device A can establish a TCP connection to Device B.
Figure 24 Network diagram
NQA client NQA server
10.1.1.1/16 10.2.2.2/16
IP network

Device A Device B

Procedure
1. Assign IP addresses to interfaces, as shown in Figure 24. (Details not shown.)
2. Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
3. Configure Device B:
# Enable the NQA server.
<DeviceB> system-view
[DeviceB] nqa server enable
# Configure a listening service to listen to TCP port 9000 on IP address 10.2.2.2.
[DeviceB] nqa server tcp-connect 10.2.2.2 9000
4. Configure Device A:
# Create TCP template tcp.
<DeviceA> system-view
[DeviceA] nqa template tcp tcp
# Specify 10.2.2.2 as the destination IP address.
[DeviceA-nqatplt-tcp-tcp] destination ip 10.2.2.2
# Set the destination port number to 9000.
[DeviceA-nqatplt-tcp-tcp] destination port 9000
# Configure the NQA client to notify the feature of the successful operation event if the number
of consecutive successful probes reaches 2.
[DeviceA-nqatplt-tcp-tcp] reaction trigger probe-pass 2
# Configure the NQA client to notify the feature of the operation failure if the number of
consecutive failed probes reaches 2.
[DeviceA-nqatplt-tcp-tcp] reaction trigger probe-fail 2

Example: Configuring the TCP half open template


Network configuration
As shown in Figure 25, configure a TCP half open template for a feature to test whether Device B can
provide the TCP service for Device A.

Figure 25 Network diagram
NQA client NQA server
10.1.1.1/16 10.2.2.2/16
IP network

Device A Device B

Procedure
1. Assign IP addresses to interfaces, as shown in Figure 25. (Details not shown.)
2. Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
3. Configure Device A:
# Create TCP half open template test.
<DeviceA> system-view
[DeviceA] nqa template tcphalfopen test
# Specify 10.2.2.2 as the destination IP address.
[DeviceA-nqatplt-tcphalfopen-test] destination ip 10.2.2.2
# Configure the NQA client to notify the feature of the successful operation event if the number
of consecutive successful probes reaches 2.
[DeviceA-nqatplt-tcphalfopen-test] reaction trigger probe-pass 2
# Configure the NQA client to notify the feature of the operation failure if the number of
consecutive failed probes reaches 2.
[DeviceA-nqatplt-tcphalfopen-test] reaction trigger probe-fail 2

Example: Configuring the UDP template


Network configuration
As shown in Figure 26, configure a UDP template for a feature to perform the UDP operation. The
operation tests whether Device A can receive a response from Device B.
Figure 26 Network diagram
NQA client NQA server
10.1.1.1/16 10.2.2.2/16
IP network

Device A Device B

Procedure
1. Assign IP addresses to interfaces, as shown in Figure 26. (Details not shown.)
2. Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
3. Configure Device B:
# Enable the NQA server.
<DeviceB> system-view
[DeviceB] nqa server enable
# Configure a listening service to listen to UDP port 9000 on IP address 10.2.2.2.
[DeviceB] nqa server udp-echo 10.2.2.2 9000
4. Configure Device A:
# Create UDP template udp.

<DeviceA> system-view
[DeviceA] nqa template udp udp
# Specify 10.2.2.2 as the destination IP address.
[DeviceA-nqatplt-udp-udp] destination ip 10.2.2.2
# Set the destination port number to 9000.
[DeviceA-nqatplt-udp-udp] destination port 9000
# Configure the NQA client to notify the feature of the successful operation event if the number
of consecutive successful probes reaches 2.
[DeviceA-nqatplt-udp-udp] reaction trigger probe-pass 2
# Configure the NQA client to notify the feature of the operation failure if the number of
consecutive failed probes reaches 2.
[DeviceA-nqatplt-udp-udp] reaction trigger probe-fail 2

Example: Configuring the HTTP template


Network configuration
As shown in Figure 27, configure an HTTP template for a feature to perform the HTTP operation. The
operation tests whether the NQA client can get data from the HTTP server.
Figure 27 Network diagram
NQA client HTTP server
10.1.1.1/16 10.2.2.2/16
IP network

Device A Device B

Procedure
# Assign IP addresses to interfaces, as shown in Figure 27. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
# Create HTTP template http.
<DeviceA> system-view
[DeviceA] nqa template http http

# Specify http://10.2.2.2/index.htm as the URL of the HTTP server.


[DeviceA-nqatplt-http-http] url http://10.2.2.2/index.htm

# Set the HTTP operation type to get.


[DeviceA-nqatplt-http-http] operation get

# Configure the NQA client to notify the feature of the successful operation event if the number of
consecutive successful probes reaches 2.
[DeviceA-nqatplt-http-http] reaction trigger probe-pass 2

# Configure the NQA client to notify the feature of the operation failure if the number of consecutive
failed probes reaches 2.
[DeviceA-nqatplt-http-http] reaction trigger probe-fail 2

Example: Configuring the HTTPS template
Network configuration
As shown in Figure 28, configure an HTTPS template for a feature to test whether the NQA client can
get data from the HTTPS server (Device B).
Figure 28 Network diagram
NQA client HTTPS server
10.1.1.1/16 10.2.2.2/16
IP network

Device A Device B

Procedure
# Assign IP addresses to interfaces, as shown in Figure 28. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
# Configure an SSL client policy named abc on Device A, and make sure Device A can use the policy
to connect to the HTTPS server. (Details not shown.)
# Create HTTPS template https.
<DeviceA> system-view
[DeviceA] nqa template https https

# Specify https://10.2.2.2/index.htm as the URL of the HTTPS server.


[DeviceA-nqatplt-https-https] url https://10.2.2.2/index.htm

# Specify SSL client policy abc for the HTTPS template.


[DeviceA-nqatplt-https-https] ssl-client-policy abc

# Set the HTTPS operation type to get (the default HTTPS operation type).
[DeviceA-nqatplt-https-https] operation get

# Set the HTTPS version to 1.0 (the default HTTPS version).


[DeviceA-nqatplt-https-https] version v1.0

# Configure the NQA client to notify the feature of the successful operation event if the number of
consecutive successful probes reaches 2.
[DeviceA-nqatplt-https-https] reaction trigger probe-pass 2

# Configure the NQA client to notify the feature of the operation failure if the number of consecutive
failed probes reaches 2.
[DeviceA-nqatplt-https-https] reaction trigger probe-fail 2

Example: Configuring the FTP template


Network configuration
As shown in Figure 29, configure an FTP template for a feature to perform the FTP operation. The
operation tests whether Device A can upload a file to the FTP server. The login username and
password are admin and systemtest, respectively. The file to be transferred to the FTP server is
config.txt.

Figure 29 Network diagram
NQA client FTP server
10.1.1.1/16 10.2.2.2/16
IP network

Device A Device B

Procedure
# Assign IP addresses to interfaces, as shown in Figure 29. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
# Create FTP template ftp.
<DeviceA> system-view
[DeviceA] nqa template ftp ftp

# Specify the URL of the FTP server.


[DeviceA-nqatplt-ftp-ftp] url ftp://10.2.2.2

# Specify 10.1.1.1 as the source IP address.


[DeviceA-nqatplt-ftp-ftp] source ip 10.1.1.1

# Configure the device to upload file config.txt to the FTP server.


[DeviceA-nqatplt-ftp-ftp] operation put
[DeviceA-nqatplt-ftp-ftp] filename config.txt

# Set the username to admin for the FTP server login.


[DeviceA-nqatplt-ftp-ftp] username admin

# Set the password to systemtest for the FTP server login.


[DeviceA-nqatplt-ftp-ftp] password simple systemtest

# Configure the NQA client to notify the feature of the successful operation event if the number of
consecutive successful probes reaches 2.
[DeviceA-nqatplt-ftp-ftp] reaction trigger probe-pass 2

# Configure the NQA client to notify the feature of the operation failure if the number of consecutive
failed probes reaches 2.
[DeviceA-nqatplt-ftp-ftp] reaction trigger probe-fail 2

Example: Configuring the RADIUS template


Network configuration
As shown in Figure 30, configure a RADIUS template for a feature to test whether the RADIUS
server (Device B) can provide authentication service for Device A. The username and password are
admin and systemtest, respectively. The shared key is 123456 for secure RADIUS authentication.
Figure 30 Network diagram
NQA client RADIUS server
10.1.1.1/16 10.2.2.2/16
IP network

Device A Device B

Procedure
# Assign IP addresses to interfaces, as shown in Figure 30. (Details not shown.)

# Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
# Configure the RADIUS server. (Details not shown.)
# Create RADIUS template radius.
<DeviceA> system-view
[DeviceA] nqa template radius radius

# Specify 10.2.2.2 as the destination IP address of the operation.


[DeviceA-nqatplt-radius-radius] destination ip 10.2.2.2

# Set the username to admin.


[DeviceA-nqatplt-radius-radius] username admin

# Set the password to systemtest.


[DeviceA-nqatplt-radius-radius] password simple systemtest

# Set the shared key to 123456 in plain text for secure RADIUS authentication.
[DeviceA-nqatplt-radius-radius] key simple 123456

# Configure the NQA client to notify the feature of the successful operation event if the number of
consecutive successful probes reaches 2.
[DeviceA-nqatplt-radius-radius] reaction trigger probe-pass 2

# Configure the NQA client to notify the feature of the operation failure if the number of consecutive
failed probes reaches 2.
[DeviceA-nqatplt-radius-radius] reaction trigger probe-fail 2

Example: Configuring the SSL template


Network configuration
As shown in Figure 31, configure an SSL template for a feature to test whether Device A can
establish an SSL connection to the SSL server on Device B.
Figure 31 Network diagram
NQA client SSL server
10.1.1.1/16 10.2.2.2/16
IP network

Device A Device B

Procedure
# Assign IP addresses to interfaces, as shown in Figure 31. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
# Configure an SSL client policy named abc on Device A, and make sure Device A can use the policy
to connect to the SSL server on Device B. (Details not shown.)
# Create SSL template ssl.
<DeviceA> system-view
[DeviceA] nqa template ssl ssl

# Set the destination IP address and port number to 10.2.2.2 and 9000, respectively.
[DeviceA-nqatplt-ssl-ssl] destination ip 10.2.2.2
[DeviceA-nqatplt-ssl-ssl] destination port 9000

# Specify SSL client policy abc for the SSL template.

[DeviceA-nqatplt-ssl-ssl] ssl-client-policy abc

# Configure the NQA client to notify the feature of the successful operation event if the number of
consecutive successful probes reaches 2.
[DeviceA-nqatplt-ssl-ssl] reaction trigger probe-pass 2

# Configure the NQA client to notify the feature of the operation failure if the number of consecutive
failed probes reaches 2.
[DeviceA-nqatplt-ssl-ssl] reaction trigger probe-fail 2

Configuring iNQA
About iNQA
Intelligent Network Quality Analyzer (iNQA) allows you to measure network performance quickly in
large-scale IP networks. iNQA supports measuring packet loss on forward, backward, and
bidirectional flows. The packet loss data includes number of lost packets, packet loss rate, number of
lost bytes, and byte loss rate. The measurement results help you determine when and where the
packet loss occurs and how severe the event is.

iNQA benefits
iNQA provides the following benefits:
• True measurement results—iNQA measures the service packets directly to calculate packet
loss results, thus reflecting the real network quality.
• Wide application range—Applicable to Layer 2 networks and Layer 3 IP networks. iNQA
flexibly supports network-level and direct-link measurement.
• Fast fault location—iNQA obtains the packet loss time, packet loss location, and number of
lost packets in real time.
• Applicable to different applications—You can apply iNQA to multiple scenarios, such as
point-to-point, point-to-multipoint, and multipoint-to-multipoint.

Basic concepts
Figure 32 shows the important iNQA concepts including MP, collector, analyzer, and AMS.
Figure 32 Network diagram
(Device 1 is Collector 1 with MP 100, in-point/inbound. Device 2 is Collector 2 and the analyzer, with MP 200, mid-point/inbound. Device 3 is Collector 3 with MP 300, out-point/outbound. AMS 1 covers the point-to-point measurement from MP 100 to MP 200, AMS 2 covers MP 200 to MP 300, and the end-to-end measurement covers MP 100 to MP 300. All collectors report data to the analyzer.)

Collector
The collector manages MPs, collects data from MPs, and reports the data to the analyzer.
Analyzer
The analyzer collects the data from collector instances and summarizes the data.
A device supports both the collector and analyzer functionalities. You can enable the collector and
analyzer functionalities on different devices or the same device.

Target flow
A target flow is vital for iNQA measurement. You can specify a flow by using any combination of the
following items: source IPv4 address/segment, destination IPv4 address/segment, protocol type,
source port number, destination port number, and DSCP value. Using more items defines a more
explicit flow and generates more accurate analysis data.
iNQA measures the flow according to the flow direction. After you define a forward flow, the flow in
the opposite direction is a backward flow. Bidirectional flows refer to a forward flow and a
backward flow. As shown in Figure 33, if you define the flow from Device 1 to Device 2 as the forward
flow, then a flow from Device 2 to Device 1 is the backward flow. If you want to measure the packet
loss on forward and backward flows between Device 1 and Device 2, you can specify bidirectional
flows. The intermediate devices between the ingress and egress devices of the bidirectional flows
can be the same or different.
Figure 33 Flow direction

Forward

Device 1 Bidirection=Forward+Backward Device 2

Backward
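For example, the following commands are a minimal sketch (the device name, instance ID, and addresses are examples only) that defines the flows between 10.1.1.0/24 and 10.2.1.0/24 as bidirectional flows in collector instance 1:
<Device> system-view
[Device] inqa collector
[Device-inqa-collector] instance 1
[Device-inqa-collector-instance-1] flow bidirection source-ip 10.1.1.0 24 destination-ip 10.2.1.0 24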

MP
MP is a logical concept. An MP counts statistics and generates data for a flow. To measure packet
loss on an interface on a collector, an MP must be bound to the interface.
An MP contains the following attributes:
• Measurement location of the flow.
 An ingress point refers to the point that the flow enters the network.
 An egress point refers to the point that the flow leaves the network.
 A middle point refers to the point between an ingress point and egress point.
• Flow direction on the measurement point.
A flow entering the MP is an inbound flow, and a flow leaving the MP is an outbound flow.
 As shown in Figure 34, MP 100 is marked as in-point/inbound. MP 100 is the ingress point of
the flow and the flow enters MP 100.
 As shown in Figure 34, MP 110 is marked as in-point/outbound. MP 110 is the ingress point
of the flow and the flow leaves MP 110.
 As shown in Figure 34, MP 200 is marked as out-point/outbound. MP 200 is the egress point
of the flow and the flow leaves MP 200.
 As shown in Figure 34, MP 210 is marked as out-point/inbound. MP 210 is the egress point
of the flow and the flow enters MP 210.

Figure 34 MP
(Device 1 has MP 100, in-point/inbound, and MP 110, in-point/outbound. Device 2 has MP 200, out-point/outbound, and MP 210, out-point/inbound. The target flow travels from Device 1 to Device 2.)

AMS
Configured on the analyzer, an AMS defines a measurement span for point-to-point performance
measurement. You can configure multiple AMSs for an instance, and each AMS can be bound to
MPs on any collector of the same instance. Therefore, iNQA can measure and summarize the data
of the forward flow, backward flow, or bidirectional flows in any AMS.
Each AMS has an ingress MP group and egress MP group. The ingress MP group is the set of the
ingress MPs in the AMS and the egress MP group is the set of the egress MPs.
As shown in Figure 32:
• To measure the packet loss between MP 100 and MP 300, no AMS is needed.
• If packet loss occurs between MP 100 and MP 300, configure AMS 1 and AMS 2 on the
analyzer to locate on which span the packet loss occurs.
 Bind MP 100 of collector 1 and MP 200 of collector 2 to AMS 1. Add MP 100 to the ingress
MP group of the AMS 1 and MP 200 to the egress MP group of the AMS 1.
 Bind MP 200 of collector 2 and MP 300 of collector 3 to AMS 2. Add MP 200 to the ingress
MP group of the AMS 2 and MP 300 to the egress MP group of the AMS 2.
Instance
The instance allows measurement on a per-flow basis. In an instance, you can configure the target
flow, flow direction, MPs, and measurement interval.
On the collector and analyzer, create an instance with the same ID for the same target flow. An
instance can be bound to only one target flow. On the same device, you can configure multiple
instances to measure and collect the packet loss rate of different target flows.
Flag bit
Flag bits, also called color bits, are used to distinguish target flows from unintended traffic flows.
iNQA uses ToS field bits 5 to 7 in the IPv4 packet header as the flag bit. This device supports using
only bit 5 as the flag bit.
The ToS field consists of a 6-bit (bits 0 to 5) DSCP field and a 2-bit (bits 6 to 7) ECN field. If bit 5 is
used as the flag bit, do not use it for DSCP purposes. Otherwise, the packet loss statistics might be inaccurate.

Application scenarios
End-to-end packet loss measurement
iNQA measures whether packet loss occurs between the ingress points (where the target flow enters
the IP network) and the egress points (where the flow leaves the network).
• Scenario 1: The target flow has only one ingress point and one egress point, and the two points
are on the same network, for example, the local area network or the core network.

As shown in Figure 35, to measure packet loss for the flow in the network, deploy iNQA on
Device 1 and Device 3.
Figure 35 Scenario 1

Device 2

MP 100 MP 300
In-point Out-point
Inbound Outbound
Device 1 Device 3

Interface 1 Interface 2

Device 4

MP Target flow (real service traffic)

• Scenario 2: The target flow can have multiple ingress points and egress points, and the points
are on the same network, for example, the local area network or the core network.
As shown in Figure 36, to measure packet loss for the flow in the network, deploy iNQA on
Device 1, Device 2, Device 3, and Device 4.
Figure 36 Scenario 2
MP 100 MP 300
In-point Out-point
Inbound Outbound
Device 1 Device 3

Interface 1 Interface 3

MP 200 MP 400
In-point Out-point
Inbound Outbound

Interface 2 Interface 4
Device 2 Device 4

MP Target flow (real service traffic)

• Scenario 3: The ingress points and the egress points of a target flow are on the different
networks.
As shown in Figure 37, to measure whether packet loss occurs when the flow crosses over the
IP network, deploy iNQA on the egress devices of the headquarters and branches to measure.

Figure 37 Scenario 3
MP 200 Branch 1
Out-point
Inbound Device 2

Headquarters
MP 100
In-point
Outbound Interface 2
Device 1

IP network
Interface 1 Branch 2
Interface 3

MP 300
Out-point
Inbound Device 3

MP Target flow (real service traffic)

Point-to-point packet loss measurement


This measurement is based on atomic measurement spans (AMSs), which define smaller ranges
than the end-to-end measurement. This method is helpful if you want to find out between which
specific devices the packet loss occurs.
As shown in Figure 38, to configure point-to-point measurement, define AMSs, specify the flow
direction in each AMS, and configure the ingress MP group and egress MP group for each AMS.
Figure 38 Point-to-point packet loss measurement
MP 100 MP 200 MP 300
In-point Mid-point IP network Out-point
Inbound Inbound Outbound

Device 1 Device 2 Device 3


End-to-end measurement

AMS 1 (point-to-point measurement) AMS 2 (point-to-point measurement)

MP Target flow (real service traffic)

Operating mechanism
iNQA uses the model of multi-point collection and single-point calculation. Multiple collectors collect
and report the packet data periodically and one analyzer calculates the data periodically.
Before starting the iNQA packet loss measurement, make sure all collectors are time synchronized
through NTP or PTP, so that all collectors use the same measurement interval to color the
flow and report the packet statistics to the analyzer. As a best practice, also synchronize the time between
the analyzer and all collectors to facilitate management and maintenance. For more information about NTP,
see "Configuring NTP." For more information about PTP, see "Configuring PTP."
The number of incoming packets and that of outgoing packets in a network should be equal within a
time period. If they are not equal, packet loss occurs in the network.

As shown in Figure 39, the flow enters the network from MP 100, passes through MP 200, and
leaves the network from MP 310. The devices where the flow passes are collectors and NTP clients,
and the aggregation device is the analyzer and NTP server.
The iNQA measurement works as follows:
1. The analyzer synchronizes the time with all collectors through the NTP protocol.
2. The ingress MP on collector 1 identifies the target flow. It colors and decolors the packets in the
flow alternately at intervals, and periodically reports the packet statistics to the analyzer.
3. The middle MP on collector 2 identifies the target flow and reports the packet statistics to the
analyzer periodically.
4. The egress MP on collector 3 identifies the target flow. It decolors the colored packets and
reports the packet statistics to the analyzer periodically.
5. The analyzer calculates packet loss for the flow of the same period and same instance as
follows:
Number of lost packets = Number of incoming packets on the MP – Number of outgoing
packets on the MP
Packet loss rate = (Number of incoming packets on the MP – Number of outgoing packets on
the MP) / Number of incoming packets on the MP
The analyzer calculates the byte loss in a similar way (see the worked example after this list).
For the end-to-end measurement, the data from the ingress MP and egress MP is used.
For the point-to-point measurement, the analyzer calculates the result on a per-AMS basis.
• In AMS 1: Packet loss = Number of packets at MP 100 – Number of packets at MP 200
• In AMS 2: Packet loss = Number of packets at MP 200 – Number of packets at MP 300
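For example, assume that within one measurement interval the ingress MP counts 1000 packets (100000 bytes) of the target flow and the egress MP counts 990 packets (99000 bytes). The numbers are illustrative only. Then:
Number of lost packets = 1000 – 990 = 10
Packet loss rate = (1000 – 990) / 1000 = 1%
Number of lost bytes = 100000 – 99000 = 1000
Byte loss rate = (100000 – 99000) / 100000 = 1%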
Figure 39 Network diagram
(The aggregation device is the analyzer and NTP server. Collector 1, Collector 2, and Collector 3 are NTP clients and host MP 100, in-point; MP 200, mid-point; and MP 300, out-point, respectively. AMS 1 covers MP 100 to MP 200, AMS 2 covers MP 200 to MP 300, and the end-to-end measurement covers MP 100 to MP 300. The numbers 1 through 5 in the figure correspond to the steps described above.)

Restrictions and guidelines: iNQA configuration
Instance restrictions and guidelines
If an analyzer instance is not correctly bound to a collector, or the bound collector does not report the
data on time, the analyzer uses 0 to calculate the packet loss rate. For example, if an analyzer
instance is bound to only ingress MPs but no egress MPs, the packet loss rate is 100% because the
analyzer uses 0 as the packet count on egress MPs.
A collector is uniquely identified by its ID. For the same collector, the specified collector ID must be
the same as the collector ID bound to an analyzer instance on the analyzer. The collector ID is the
IPv4 address of the collector, and must be routable from the analyzer. As a best practice, configure
the Router ID of the device as the collector ID.
An analyzer is uniquely identified by its ID. For the same analyzer, the specified analyzer ID must be
the same as the analyzer ID bound with a collector instance on the collector. The analyzer ID is the
IPv4 address of the analyzer, and must be routable from the collector. As a best practice, configure
the Router ID of the device as the analyzer ID.
To measure the same flow, configure the same instance ID, the target flow attributes, and
measurement interval on the analyzer and collectors.

Network and target flow requirements


• To measure the same target flow, make sure the physical interfaces bound to all MPs are in the
same type of networks. For example, the interfaces are all in IP networks or VXLAN networks.
When a flow travels through different networks, the packet header might be modified, making
the packet count and byte count for the entire packet different at different MPs. If the packets
are encapsulated or decapsulated during the transmission over different networks, iNQA
cannot identify the target flow at different MPs for measurement.
• The measured packets are known IPv4 unicast packets. For unknown IP unicast, broadcast,
and multicast packets, one packet entering an MP might be copied as multiple packets leaving
the MP.

Collaborating with other features


iNQA on an aggregate interface might function incorrectly after a new member port joins the
aggregation group and causes insufficient ACL resources. To solve the problem, use either of the following
methods:
• Reduce the number of member ports.
• Use the display qos-acl resource command to view the QoS and ACL resource usage.
Delete unnecessary settings to release ACL resources and then add a new member port again.
For more information about the display qos-acl resource command, see ACL
commands in ACL and QoS Command Reference.
With iNQA enabled on a Layer 2 aggregate interface, do not execute the port s-mlag group
command to assign the Layer 2 aggregate interface to an S-MLAG group. For more information
about the port s-mlag group command, see Ethernet link aggregation commands in Layer
2—LAN Switching Command Reference.
Make sure ECN and the use of bit 6 or 7 as the coloring bit are not both enabled on the same device. For more
information about ECN, see congestion avoidance configuration in ACL and QoS Configuration
Guide.

iNQA tasks at a glance
To configure iNQA, perform the following tasks:
• Configure a collector
 Configuring parameters for a collector
 Configuring a collector instance
 Configuring an MP
 Enabling packet loss measurement
• Configure the analyzer
 Configuring parameters for the analyzer
 Configuring an analyzer instance
 Configuring an AMS
No AMSs are required in the end-to-end packet loss measurements. Configure an AMS in
the point-to-point packet loss measurements.
 Enabling the measurement functionality
• Configuring iNQA logging

Prerequisites
Before configuring iNQA, configure NTP or PTP to synchronize the clock between the analyzer and
all collectors. For more information about NTP, see "Configuring NTP." For more information about
PTP, see "Configuring PTP."

Configure a collector
Configuring parameters for a collector
1. Enter system view.
system-view
2. Enable the collector functionality and enter its view.
inqa collector
By default, the collector functionality is disabled.
3. Specify a collector ID.
collector id collector-id
By default, no collector ID is specified.
4. Specify a ToS field bit to flag packet loss measurement.
flag loss-measure tos-bit tos-bit
By default, no ToS field bit is specified to flag packet loss measurement.
5. Bind an analyzer to the collector.
analyzer analyzer-id [ udp-port port-number ] [ vpn-instance
vpn-instance-name ]
By default, no analyzer is bound to a collector.
An analyzer specified in collector view is bound to all collector instances on the collector.

An analyzer specified in collector instance view is bound to the specified collector instance. The
analyzer ID in collector instance view takes precedence over that in collector view. A collector
and a collector instance can be bound to only one analyzer. If you execute this command
multiple times in the same view, the most recent configuration takes effect.

Configuring a collector instance


Restrictions and guidelines
To collect packet loss statistics on the same target flow passing through multiple collectors, create an
instance on the analyzer and each collector, and make sure the instance IDs are the same.
A collector instance can monitor a maximum of two flows. Follow these guidelines when you specify
the target flows:
• To monitor only one flow, specify the forward keyword. The collector instance does not
support monitoring only one backward flow.
• To monitor two flows:
 If the endpoint devices of the two flows are the same, specify the bidirection keyword.
In addition, specify both the destination IPv4 address and source IPv4 address.
 If the endpoint devices of the two flows are not identical, specify a forward flow and then a
backward flow in the collector instance. Alternatively, you can create two collector instances
and specify a forward flow for each collector instance.
For the flows to be monitored by different collector instances, the flow attributes must not be
identical.
For a forward flow and backward flow monitored by a collector instance, the flow attributes cannot be
all identical. If they have the same attributes except the direction, define them as bidirectional flows.
Procedure
1. Enter system view.
system-view
2. Enter collector view.
inqa collector
3. Create a collector instance and enter its view.
instance instance-id
4. Bind the collector instance to an analyzer.
analyzer analyzer-id [ udp-port port-number ] [ vpn-instance
vpn-instance-name ]
By default, no analyzer is bound to a collector instance.
An analyzer specified in collector view is bound to all collector instances. An analyzer specified
in collector instance view is bound to the specific collector instance. The analyzer ID in collector
instance view takes precedence over that in collector view. A collector and a collector instance
can be bound to only one analyzer. If you execute this command multiple times in the same
view, the most recent configuration takes effect.
5. Specify a flow to be monitored by the collector instance.
flow { backward | bidirection | forward } { destination-ip
dest-ip-address [ dest-mask-length ] | dscp dscp-value | protocol
{ { tcp | udp } { destination-port dest-port-number1 [ to
dest-port-number2 ] | source-port src-port-number1 [ to
src-port-number2 ] } * | protocol-number } | source-ip src-ip-address
[ src-mask-length ] } *
By default, no flow is specified for a collector instance.

6. Specify the measurement interval for the collector instance.
interval interval
By default, the measurement interval for a collector instance is 10 seconds.
Make sure the measurement interval for the same collector instance on all collectors is the
same.
To modify the measurement interval for an enabled collector instance, first disable the
measurement and then modify the measurement interval for the same collector instance in all
collectors.
7. (Optional.) Configure a description for a collector instance.
description text
By default, a collector instance does not have a description.
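The following commands are a minimal sketch of this procedure (the instance ID, flow attributes, and description are example values). They create collector instance 1, specify a forward flow, keep the default 10-second measurement interval, and add a description:
[Device] inqa collector
[Device-inqa-collector] instance 1
[Device-inqa-collector-instance-1] flow forward source-ip 10.1.1.0 24 destination-ip 10.2.1.0 24
[Device-inqa-collector-instance-1] description video-flow-to-branch-B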

Configuring an MP
1. Enter system view.
system-view
2. Enter collector view.
inqa collector
3. Create a collector instance and enter its view.
instance instance-id
4. Configure an MP.
mp mp-id { in-point | mid-point | out-point } port-direction { inbound
| outbound }
By default, no MP is configured.
5. Return to collector view.
quit
6. Return to system view.
quit
7. Enter interface view.
interface interface-type interface-number
8. Bind an interface to the MP.
inqa mp mp-id
By default, no interface is bound to an MP.
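The following commands are a minimal sketch of this procedure (the MP ID, MP role, and interface are example values). They create MP 100 as an ingress point for inbound traffic and bind it to GigabitEthernet 1/0/1:
[Device] inqa collector
[Device-inqa-collector] instance 1
[Device-inqa-collector-instance-1] mp 100 in-point port-direction inbound
[Device-inqa-collector-instance-1] quit
[Device-inqa-collector] quit
[Device] interface gigabitethernet 1/0/1
[Device-GigabitEthernet1/0/1] inqa mp 100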

Enabling packet loss measurement


About this task
Enable a packet loss measure for a collector instance as follows:
• Enable the fixed duration packet loss measurement to measure the network performance in a
time period or to accurately locate the fault points for packet loss.
• Enable the continual packet loss measurement to avoid unperceived packet loss. Once starting,
it does not stop unless you disable it manually.
Restrictions and guidelines
For a collector instance, the fixed duration and continual packet loss measurements cannot be both
enabled.

Procedure
1. Enter system view.
system-view
2. Enter collector view.
inqa collector
3. Create a collector instance and enter its view.
instance instance-id
4. Enable the fixed duration packet loss measurement for the collector instance.
On non-middle points:
loss-measure enable duration [ duration ]
On middle points:
loss-measure enable mid-point duration [ duration ]
By default, the fixed duration packet loss measurement is disabled.
Enable the fixed duration packet loss measurement on middle points only in the point-to-point
performance measurements.
5. Enable continual packet loss measurement for the collector instance.
loss-measure enable continual
By default, the continual packet loss measurement is disabled.
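The following commands are a minimal sketch (the instance ID and duration are example values). On a non-middle point, they run a fixed duration measurement of 15 minutes; replace the last command with loss-measure enable continual to run a continual measurement instead:
[Device] inqa collector
[Device-inqa-collector] instance 1
[Device-inqa-collector-instance-1] loss-measure enable duration 15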

Configure the analyzer


Configuring parameters for the analyzer
1. Enter system view.
system-view
2. Enable the analyzer functionality and enter its view.
inqa analyzer
By default, the analyzer functionality is disabled.
3. Specify an analyzer ID.
analyzer id analyzer-id
By default, no analyzer ID is specified.
Make sure the analyzer ID is an IPv4 address that is routable from the collector.
4. (Optional.) Specify a UDP port for communication between the analyzer and the collectors.
protocol udp-port port-number
By default, the UDP port number used for communication between the analyzer and collectors
is 53312.
If the default UDP port number is used by other services, you can execute this command to
specify a new UDP port number.
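The following commands are a minimal sketch (the analyzer ID is an example value; the default UDP port 53312 is kept):
<Device> system-view
[Device] inqa analyzer
[Device-inqa-analyzer] analyzer id 10.2.1.1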

Configuring an analyzer instance


1. Enter system view.
system-view
2. Enter analyzer view.

inqa analyzer
3. Create an analyzer instance and enter its view.
instance instance-id
4. (Optional.) Configure a description for an analyzer instance.
description text
By default, an analyzer instance does not have a description.
5. Bind a collector to the analyzer instance.
collector collector-id
By default, no collector is bound to an analyzer instance.
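The following commands are a minimal sketch (the instance ID and collector IDs are example values) that binds two collectors to analyzer instance 1:
[Device] inqa analyzer
[Device-inqa-analyzer] instance 1
[Device-inqa-analyzer-instance-1] collector 10.1.1.1
[Device-inqa-analyzer-instance-1] collector 10.2.1.1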

Configuring an AMS
About this task
No AMSs are required in the end-to-end packet loss measurements. Configure an AMS in the
point-to-point packet loss measurements.
Procedure
1. Enter system view.
system-view
2. Enter analyzer view.
inqa analyzer
3. Create an analyzer instance and enter its view.
instance instance-id
4. Create an AMS and enter its view.
ams ams-id
5. Specify the flow direction to be measured in the analyzer AMS.
flow { backward | bidirection | forward }
By default, no flow direction is specified to be measured in the analyzer AMS.
6. Add an MP to the ingress MP group for the AMS.
in-group collector collector-id mp mp-id
By default, the ingress MP group for an AMS does not have any MP.
7. Add an MP to the egress MP group for the AMS.
out-group collector collector-id mp mp-id
By default, the egress MP group for an AMS does not have any MP.
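The following commands are a minimal sketch (the AMS ID, collector IDs, and MP IDs are example values) that configures AMS 1 to measure the forward flow from MP 100 on collector 10.1.1.1 to MP 200 on collector 10.2.1.1:
[Device] inqa analyzer
[Device-inqa-analyzer] instance 1
[Device-inqa-analyzer-instance-1] ams 1
[Device-inqa-analyzer-instance-1-ams-1] flow forward
[Device-inqa-analyzer-instance-1-ams-1] in-group collector 10.1.1.1 mp 100
[Device-inqa-analyzer-instance-1-ams-1] out-group collector 10.2.1.1 mp 200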

Enabling the measurement functionality


1. Enter system view.
system-view
2. Enter analyzer view.
inqa analyzer
3. Create an analyzer instance and enter its view.
instance instance-id
4. Enable the measurement functionality of the analyzer instance.
measure enable

By default, the measurement functionality of an analyzer instance is disabled.
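The following commands are a minimal sketch (the instance ID is an example value):
[Device] inqa analyzer
[Device-inqa-analyzer] instance 1
[Device-inqa-analyzer-instance-1] measure enable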

Configuring iNQA logging


About this task
iNQA calculates the packet loss rate periodically, and the analyzer generates a log on five
consecutive crossings of a threshold.
• If the packet loss rate exceeds the upper limit for consecutive five measurement intervals, the
analyzer sends a log to the information center.
• If the packet loss rate is less than the lower limit for consecutive five measurement intervals, the
analyzer sends an event clearing log to the information center.
You can configure the information center to determine whether to output logs and the output
destination. For more information about the information center, see "Configuring information center."
Procedure
1. Enter system view.
system-view
2. Enter analyzer view.
inqa analyzer
3. Create an analyzer instance and enter its view.
instance instance-id
4. Configure packet loss logging for the analyzer instance.
loss-measure alarm upper-limit upper-limit lower-limit lower-limit
By default, the packet loss logging is not configured for an analyzer instance and the analyzer
will not generate logs for threshold crossing events.
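The following commands are a minimal sketch (the instance ID and thresholds are example values). They configure the analyzer to log the event when the packet loss rate exceeds 6% for five consecutive intervals and to log the clearing event when it drops below 4% for five consecutive intervals:
[Device] inqa analyzer
[Device-inqa-analyzer] instance 1
[Device-inqa-analyzer-instance-1] loss-measure alarm upper-limit 6 lower-limit 4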

Display and maintenance commands for iNQA collector
Execute display commands in any view.

Task                                            Command
Display the collector configuration.            display inqa collector
Display the collector instance configuration.   display inqa collector instance { instance-id | all }

Display and maintenance commands for iNQA analyzer
Execute display commands in any view.

Task                                                      Command
Display the analyzer configuration.                       display inqa analyzer
Display the analyzer instance configuration.              display inqa analyzer instance { instance-id | all }
Display the AMS configuration in an analyzer instance.    display inqa analyzer instance instance-id ams { ams-id | all }
Display iNQA packet loss statistics.                      display inqa statistics loss instance instance-id [ ams ams-id ]

iNQA configuration examples


Example: Configuring an end-to-end iNQA packet loss measurement
Network configuration
As shown in Figure 40, Video phone 1 sends a video data flow to Video phone 2 through an IP
network. Video phone 2 is experiencing an erratic video display problem.
• Enable the collector functionality on Device 1 and Device 2 and enable the analyzer
functionality on Device 2.
• Define the flow from Device 1 to Device 2 as the forward flow.
• Set the packet loss upper limit and packet loss lower limit to 6% and 4%, respectively.
Figure 40 Network diagram
(Video phone 1 in Branch A connects to Device 1, and Video phone 2 in Branch B connects to Device 2. Device 1, 10.1.1.1, is Collector 1 with MP 100, in-point/inbound, on GigabitEthernet1/0/1. Device 2, 10.2.1.1, is Collector 2 and the analyzer with MP 200, out-point/outbound, on GigabitEthernet1/0/1. The end-to-end measurement covers the bidirectional target flows across the IP network.)

Prerequisites
1. Assign 10.1.1.1 and 10.2.1.1 to Collector 1 and Collector 2, respectively.
2. Configure OSPF in the IP network to make sure Collector 1 and the analyzer can reach each other.
(Details not shown.)
3. Configure NTP or PTP on Collector 1 and Collector 2 for clock synchronization. (Details not
shown.)
Procedure
1. Configure Collector 1:
# Specify 10.1.1.1 as the collector ID.

<Collector1> system-view
[Collector1] inqa collector
[Collector1-inqa-collector] collector id 10.1.1.1
# Bind the collector to the analyzer with ID 10.2.1.1.
[Collector1-inqa-collector] analyzer 10.2.1.1
# Specify ToS field bit 5 as the flag bit.
[Collector1-inqa-collector] flag loss-measure tos-bit 5
# Configure collector instance 1 to monitor the bidirectional flows between 10.1.1.0/24 and
10.2.1.0/24 entering the network through GigabitEthernet1/0/1.
[Collector1-inqa-collector] instance 1
[Collector1-inqa-collector-instance-1] flow bidirection source-ip 10.1.1.0 24
destination-ip 10.2.1.0 24
[Collector1-inqa-collector-instance-1] mp 100 in-point port-direction inbound
[Collector1-inqa-collector-instance-1] quit
[Collector1-inqa-collector] quit
[Collector1] interface gigabitethernet 1/0/1
[Collector1-GigabitEthernet1/0/1] inqa mp 100
[Collector1-GigabitEthernet1/0/1] quit
# Enable continual packet loss measurement.
[Collector1] inqa collector
[Collector1-inqa-collector] instance 1
[Collector1-inqa-collector-instance-1] loss-measure enable continual
[Collector1-inqa-collector-instance-1] return
<Collector1>
2. Configure Collector 2 and the analyzer:
# Specify 10.2.1.1 as the collector ID.
<AnalyzerColl2> system-view
[AnalyzerColl2] inqa collector
[AnalyzerColl2-inqa-collector] collector id 10.2.1.1
# Bind the collector to the analyzer with ID 10.2.1.1.
[AnalyzerColl2-inqa-collector] analyzer 10.2.1.1
# Specify ToS field bit 5 as the flag bit.
[AnalyzerColl2-inqa-collector] flag loss-measure tos-bit 5
# Configure collector instance 1 to monitor the bidirectional flows between 10.1.1.0/24 and
10.2.1.0/24 leaving the network through GigabitEthernet1/0/1.
[AnalyzerColl2-inqa-collector] instance 1
[AnalyzerColl2-inqa-collector-instance-1] flow bidirection source-ip 10.1.1.0 24
destination-ip 10.2.1.0 24
[AnalyzerColl2-inqa-collector-instance-1] mp 200 out-point port-direction outbound
[AnalyzerColl2-inqa-collector-instance-1] quit
[AnalyzerColl2-inqa-collector] quit
[AnalyzerColl2] interface gigabitethernet 1/0/1
[AnalyzerColl2-GigabitEthernet1/0/1] inqa mp 200
[AnalyzerColl2-GigabitEthernet1/0/1] quit
# Enable continual packet loss measurement.
[AnalyzerColl2] inqa collector
[AnalyzerColl2-inqa-collector] instance 1
[AnalyzerColl2-inqa-collector-instance-1] loss-measure enable continual

[AnalyzerColl2-inqa-collector-instance-1] quit
[AnalyzerColl2-inqa-collector] quit
# Specify 10.2.1.1 as the analyzer ID.
[AnalyzerColl2] inqa analyzer
[AnalyzerColl2-inqa-analyzer] analyzer id 10.2.1.1
# Bind analyzer instance 1 to Collector 1 and Collector 2.
[AnalyzerColl2-inqa-analyzer] instance 1
[AnalyzerColl2-inqa-analyzer-instance-1] collector 10.1.1.1
[AnalyzerColl2-inqa-analyzer-instance-1] collector 10.2.1.1
# Set the packet loss upper limit and packet loss lower limit to 6% and 4%, respectively.
[AnalyzerColl2-inqa-analyzer-instance-1] loss-measure alarm upper-limit 6
lower-limit 4
# Enable the measurement functionality of analyzer instance 1.
[AnalyzerColl2-inqa-analyzer-instance-1] measure enable
[AnalyzerColl2-inqa-analyzer-instance-1] return
<AnalyzerColl2>

Verifying the configuration


1. On Collector 1:
# Display the collector configuration.
<Collector1> display inqa collector
Collector ID : 10.1.1.1
Loss-measure flag : 5
Analyzer ID : 10.2.1.1
Analyzer UDP-port : 53312
VPN-instance-name : --
Current instance count : 1
# Display the configuration of collector instance 1.
<Collector1> display inqa collector instance 1
Instance ID : 1
Status : Enabled
Duration : --
description : --
Analyzer ID : --
Analyzer UDP-port : --
VPN-instance-name : --
Interval : 10 sec
Flow configuration:
flow bidirection source-ip 10.1.1.0 24 destination-ip 10.2.1.0 24
MP configuration:
mp 100 in-point inbound, GE1/0/1
2. On Collector 2 and the analyzer:
# Display the collector configuration.
<AnalyzerColl2> display inqa collector
Collector ID : 10.2.1.1
Loss-measure flag : 5
Analyzer ID : 10.2.1.1
Analyzer UDP-port : 53312

VPN-instance-name : --
Current instance count : 1
# Display the configuration of collector instance 1.
<AnalyzerColl2> display inqa collector instance 1
Instance ID : 1
Status : Enabled
Duration : --
description : --
Analyzer ID : --
Analyzer UDP-port : --
VPN-instance-name : --
Interval : 10 sec
Flow configuration:
flow bidirection source-ip 10.1.1.0 24 destination-ip 10.2.1.0 24
MP configuration:
mp 200 out-point outbound, GE1/0/1
# Display the analyzer configuration.
<AnalyzerColl2> display inqa analyzer
Analyzer ID : 10.2.1.1
Protocol UDP-port : 53312
Current instance count : 1
# Display the configuration of analyzer instance 1.
<AnalyzerColl2> display inqa analyzer instance 1
Instance ID : 1
Status : Enable
Description : --
Alarm upper-limit : 6.000000%
Alarm lower-limit : 4.000000%
Current AMS count : 0
Collectors : 10.1.1.1
10.2.1.1
# Display iNQA packet loss statistics of analyzer instance 1.
<AnalyzerColl2> display inqa statistics loss instance 1
Latest packet loss statistics for forward flow:
Period LostPkts PktLoss% LostBytes ByteLoss%
19122483 15 15.000000% 1500 15.000000%
19122482 15 15.000000% 1500 15.000000%
19122481 15 15.000000% 1500 15.000000%
19122480 15 15.000000% 1500 15.000000%
19122479 15 15.000000% 1500 15.000000%
19122478 15 15.000000% 1500 15.000000%
Latest packet loss statistics for backward flow:
Period LostPkts PktLoss% LostBytes ByteLoss%
19122483 15 15.000000% 1500 15.000000%
19122482 15 15.000000% 1500 15.000000%
19122481 15 15.000000% 1500 15.000000%
19122480 15 15.000000% 1500 15.000000%
19122479 15 15.000000% 1500 15.000000%

19122478 15 15.000000% 1500 15.000000%

Example: Configuring a point-to-point iNQA packet loss measurement
Network configuration
As shown in Figure 41, Video phone 1 sends a video data flow to Video phone 2 through an IP
network. Video phone 2 is experiencing an erratic video display problem.
• Enable the collector functionality on Device 1, Device 2, and Device 3, and enable the analyzer
functionality on Device 2.
• Define the flow from Device 1 to Device 3 as the forward flow.
• Set the packet loss upper limit and packet loss lower limit to 6% and 4%, respectively.
• Run the packet loss measurement for 15 minutes.
Figure 41 Network diagram
(Video phone 1 in Branch A connects to Device 1, and Video phone 2 in Branch B connects to Device 3. Device 1, 10.1.1.1, is Collector 1 with MP 100, in-point/inbound, on GigabitEthernet1/0/1. Device 2, 10.2.1.1, is Collector 2 and the analyzer with MP 200, mid-point/inbound, on GigabitEthernet1/0/1. Device 3, 10.3.1.1, is Collector 3 with MP 300, out-point/outbound, on GigabitEthernet1/0/1. IP network 1 lies between Device 1 and Device 2, and IP network 2 between Device 2 and Device 3. AMS 1 covers MP 100 to MP 200, AMS 2 covers MP 200 to MP 300, and the end-to-end measurement covers MP 100 to MP 300.)

Prerequisites
1. Assign 10.1.1.1, 10.2.1.1, and 10.3.1.1 to Collector 1, Collector 2, and Collector 3, respectively.
2. Configure OSPF in the IP network to make sure Collector 1, Collector 3, and the
analyzer can reach each other. (Details not shown.)
3. Configure NTP or PTP on Collector 1, Collector 2, and Collector 3 for clock synchronization.
(Details not shown.)
Procedure
1. Configure Collector 1:
# Specify 10.1.1.1 as the collector ID.
<Collector1> system-view
[Collector1] inqa collector
[Collector1-inqa-collector] collector id 10.1.1.1
# Bind the collector to the analyzer with ID 10.2.1.1.
[Collector1-inqa-collector] analyzer 10.2.1.1
# Specify ToS field bit 5 as the flag bit.
[Collector1-inqa-collector] flag loss-measure tos-bit 5
# Configure collector instance 1 to monitor the forward flow from 10.1.1.0/24 to 10.3.1.0/24
entering the network through GigabitEthernet1/0/1.
[Collector1-inqa-collector] instance 1
[Collector1-inqa-collector-instance-1] flow forward source-ip 10.1.1.0 24
destination-ip 10.3.1.0 24

[Collector1-inqa-collector-instance-1] mp 100 in-point port-direction inbound
[Collector1-inqa-collector-instance-1] quit
[Collector1-inqa-collector] quit
[Collector1] interface gigabitethernet 1/0/1
[Collector1-GigabitEthernet1/0/1] inqa mp 100
[Collector1-GigabitEthernet1/0/1] quit
# Run the packet loss measurement for 15 minutes for collector instance 1.
[Collector1] inqa collector
[Collector1-inqa-collector] instance 1
[Collector1-inqa-collector-instance-1] loss-measure enable duration 15
[Collector1-inqa-collector-instance-1] return
<Collector1>
2. Configure Collector 2 and the analyzer:
# Specify 10.2.1.1 as the collector ID.
<AnalyzerColl2> system-view
[AnalyzerColl2] inqa collector
[AnalyzerColl2-inqa-collector] collector id 10.2.1.1
# Bind the collector to the analyzer with ID 10.2.1.1.
[AnalyzerColl2-inqa-collector] analyzer 10.2.1.1
# Specify ToS field bit 5 as the flag bit.
[AnalyzerColl2-inqa-collector] flag loss-measure tos-bit 5
# Configure collector instance 1 to monitor the forward flow arriving at
GigabitEthernet1/0/1, sourced from 10.1.1.0/24 and destined for 10.3.1.0/24.
[AnalyzerColl2-inqa-collector] instance 1
[AnalyzerColl2-inqa-collector-instance-1] flow forward source-ip 10.1.1.0 24
destination-ip 10.3.1.0 24
[AnalyzerColl2-inqa-collector-instance-1] mp 200 mid-point port-direction inbound
[AnalyzerColl2-inqa-collector-instance-1] quit
[AnalyzerColl2-inqa-collector] quit
[AnalyzerColl2] interface gigabitethernet 1/0/1
[AnalyzerColl2-GigabitEthernet1/0/1] inqa mp 200
[AnalyzerColl2-GigabitEthernet1/0/1] quit
# Run the packet loss measurement for 15 minutes for collector instance 1.
[AnalyzerColl2] inqa collector
[AnalyzerColl2-inqa-collector] instance 1
[AnalyzerColl2-inqa-collector-instance-1] loss-measure enable mid-point duration 15
[AnalyzerColl2-inqa-collector-instance-1] quit
[AnalyzerColl2-inqa-collector] quit
# Specify 10.2.1.1 as the analyzer ID.
[AnalyzerColl2] inqa analyzer
[AnalyzerColl2-inqa-analyzer] analyzer id 10.2.1.1
# Bind analyzer instance 1 to Collector 1, Collector 2, and Collector 3.
[AnalyzerColl2-inqa-analyzer] instance 1
[AnalyzerColl2-inqa-analyzer-instance-1] collector 10.1.1.1
[AnalyzerColl2-inqa-analyzer-instance-1] collector 10.2.1.1
[AnalyzerColl2-inqa-analyzer-instance-1] collector 10.3.1.1
# Configure AMS 1 to measure the packet loss rate for the forward flow from MP 100 to MP 200.
[AnalyzerColl2-inqa-analyzer-instance-1] ams 1

[AnalyzerColl2-inqa-analyzer-instance-1-ams-1] flow forward
[AnalyzerColl2-inqa-analyzer-instance-1-ams-1] in-group collector 10.1.1.1 mp 100
[AnalyzerColl2-inqa-analyzer-instance-1-ams-1] out-group collector 10.2.1.1 mp 200
[AnalyzerColl2-inqa-analyzer-instance-1-ams-1] quit
# Configure AMS 2 to measure the packet loss rate for the forward flow from MP 200 to MP 300.
[AnalyzerColl2-inqa-analyzer-instance-1] ams 2
[AnalyzerColl2-inqa-analyzer-instance-1-ams-2] flow forward
[AnalyzerColl2-inqa-analyzer-instance-1-ams-2] in-group collector 10.2.1.1 mp 200
[AnalyzerColl2-inqa-analyzer-instance-1-ams-2] out-group collector 10.3.1.1 mp 300
[AnalyzerColl2-inqa-analyzer-instance-1-ams-2] quit
# Set the packet loss upper limit and packet loss lower limit to 6% and 4%, respectively.
[AnalyzerColl2-inqa-analyzer-instance-1] loss-measure alarm upper-limit 6
lower-limit 4
# Enable the measurement functionality of analyzer instance 1.
[AnalyzerColl2-inqa-analyzer-instance-1] measure enable
[AnalyzerColl2-inqa-analyzer-instance-1] return
<AnalyzerColl2>
3. Configure Collector 3:
# Specify 10.3.1.1 as the collector ID.
<Collector3> system-view
[Collector3] inqa collector
[Collector3-inqa-collector] collector id 10.3.1.1
# Bind the collector to the analyzer with ID 10.2.1.1.
[Collector3-inqa-collector] analyzer 10.2.1.1
# Specify ToS field bit 5 as the flag bit.
[Collector3-inqa-collector] flag loss-measure tos-bit 5
# Configure collector instance 1 to monitor the forward flow leaving the network at
GigabitEthernet1/0/1, sourced from 10.1.1.0/24 and destined for 10.3.1.0/24.
[Collector3-inqa-collector] instance 1
[Collector3-inqa-collector-instance-1] flow forward source-ip 10.1.1.0 24
destination-ip 10.3.1.0 24
[Collector3-inqa-collector-instance-1] mp 300 out-point port-direction outbound
[Collector3-inqa-collector-instance-1] quit
[Collector3-inqa-collector] quit
[Collector3] interface gigabitethernet 1/0/1
[Collector3-GigabitEthernet1/0/1] inqa mp 300
[Collector3-GigabitEthernet1/0/1] quit
# Run the packet loss measurement for 15 minutes for collector instance 1.
[Collector3] inqa collector
[Collector3-inqa-collector] instance 1
[Collector3-inqa-collector-instance-1] loss-measure enable duration 15
[Collector3-inqa-collector-instance-1] return
<Collector3>

Verifying the configuration


1. On Collector 1:
# Display the collector configuration.
<Collector1> display inqa collector

Collector ID : 10.1.1.1
Loss-measure flag : 5
Analyzer ID : 10.2.1.1
Analyzer UDP-port : 53312
VPN-instance-name : --
Current instance count : 1
# Display the configuration of collector instance 1.
<Collector1> display inqa collector instance 1
Instance ID : 1
Status : Enabled
Duration : 15 min (Non mid-point)
Remaining time : 14 min 52 sec
description : --
Analyzer ID : --
Analyzer UDP-port : --
VPN-instance-name : --
Interval : 10 sec
Flow configuration:
flow forward source-ip 10.1.1.0 24 destination-ip 10.3.1.0 24
MP configuration:
mp 100 in-point inbound, GE1/0/1
2. On Collector 2 and the analyzer:
# Display the collector configuration.
<AnalyzerColl2> display inqa collector
Collector ID : 10.2.1.1
Loss-measure flag : 5
Analyzer ID : 10.2.1.1
Analyzer UDP-port : 53312
VPN-instance-name : --
Current instance count : 1
# Display the configuration of collector instance 1.
<AnalyzerColl2> display inqa collector instance 1
Instance ID : 1
Status : Enabled
Duration : 15 min (Mid-point)
Remaining time : 14 min 50 sec
description : --
Analyzer ID : --
Analyzer UDP-port : --
VPN-instance-name : --
Interval : 10 sec
Flow configuration:
flow forward source-ip 10.1.1.0 24 destination-ip 10.3.1.0 24
MP configuration:
mp 200 mid-point inbound, GE1/0/1
# Display analyzer configuration.
<AnalyzerColl2> display inqa analyzer
Analyzer ID : 10.2.1.1

Protocol UDP-port : 53312
Current instance count : 1
# Display the configuration of analyzer instance 1.
<AnalyzerColl2> display inqa analyzer instance 1
Instance ID : 1
Status : Enabled
Description : --
Alarm upper-limit : 6.000000%
Alarm lower-limit : 4.000000%
Current AMS count : 2
Collectors : 10.1.1.1
10.2.1.1
10.3.1.1
# Display the configuration of all AMSs in analyzer instance 1.
<AnalyzerColl2> display inqa analyzer instance 1 ams all
AMS ID : 1
Flow direction : forward
In-group : collector 10.1.1.1 mp 100
Out-group : collector 10.2.1.1 mp 200

AMS ID : 2
Flow direction : forward
In-group : collector 10.2.1.1 mp 200
Out-group : collector 10.3.1.1 mp 300
# Display iNQA packet loss statistics of analyzer instance 1.
<AnalyzerColl2> display inqa statistics loss instance 1 ams 1
Latest packet loss statistics for forward flow:
Period LostPkts PktLoss% LostBytes ByteLoss%
19122483 15 15.000000% 1500 15.000000%
19122482 15 15.000000% 1500 15.000000%
19122481 15 15.000000% 1500 15.000000%
19122480 15 15.000000% 1500 15.000000%
19122479 15 15.000000% 1500 15.000000%
19122478 15 15.000000% 1500 15.000000%
3. On Collector 3:
# Display the collector configuration.
<Collector3> display inqa collector
Collector ID : 10.3.1.1
Loss-measure flag : 5
Analyzer ID : 10.2.1.1
Analyzer UDP-port : 53312
VPN-instance-name : --
Current instance count : 1
# Display the configuration of collector instance 1.
<Collector3> display inqa collector instance 1
Instance ID : 1
Status : Enabled
Duration : 15 min (Non mid-point)

Remaining time : 14 min 51 sec
description : --
Analyzer ID : --
Analyzer UDP-port : --
VPN-instance-name : --
Interval : 10 sec
Flow configuration:
flow forward source-ip 10.1.1.0 24 destination-ip 10.3.1.0 24
MP configuration:
mp 300 out-point outbound, GE1/0/1

Configuring NTP
About NTP
NTP is used to synchronize system clocks among distributed time servers and clients on a network.
NTP runs over UDP and uses UDP port 123.

NTP application scenarios


Various tasks, including network management, charging, auditing, and distributed computing,
depend on an accurate and synchronized system time across network devices. NTP is typically
used in large networks to dynamically synchronize time among network devices.
NTP guarantees higher clock accuracy than manual system clock setting. In a small network that
does not require high clock accuracy, you can keep time synchronized among devices by changing
their system clocks one by one.

NTP working mechanism


Figure 42 shows how NTP synchronizes the system time between two devices (Device A and Device
B, in this example). Assume that:
• Prior to the time synchronization, the time is set to 10:00:00 am for Device A and 11:00:00 am
for Device B.
• Device B is used as the NTP server. Device A is to be synchronized to Device B.
• It takes 1 second for an NTP message to travel from Device A to Device B, and from Device B to
Device A.
• It takes 1 second for Device B to process the NTP message.
Figure 42 Basic work flow

(Diagram: Device A sends an NTP message timestamped 10:00:00 am. Device B receives it at 11:00:01 am and sends a reply at 11:00:02 am. Device A receives the reply at 10:00:03 am local time.)

The synchronization process is as follows:


1. Device A sends Device B an NTP message, which is timestamped when it leaves Device A.
The time stamp is 10:00:00 am (T1).

2. When this NTP message arrives at Device B, Device B adds a timestamp showing the time
when the message arrived at Device B. The timestamp is 11:00:01 am (T2).
3. When the NTP message leaves Device B, Device B adds a timestamp showing the time when
the message left Device B. The timestamp is 11:00:02 am (T3).
4. When Device A receives the NTP message, the local time of Device A is 10:00:03 am (T4).
At this point, Device A can calculate the following parameters based on the timestamps:
• The roundtrip delay of the NTP message: Delay = (T4 – T1) – (T3 – T2) = 2 seconds.
• Time difference between Device A and Device B: Offset = [ (T2 – T1) + (T3 – T4) ] /2 = 1 hour.
Based on these parameters, Device A can be synchronized to Device B.
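Working through the arithmetic with the timestamps of this example:
Delay = (10:00:03 – 10:00:00) – (11:00:02 – 11:00:01) = 3 seconds – 1 second = 2 seconds.
Offset = [ (11:00:01 – 10:00:00) + (11:00:02 – 10:00:03) ] / 2 = (3601 seconds + 3599 seconds) / 2 = 3600 seconds = 1 hour.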
This is only a brief description of how NTP works. For more information, see the related
protocols and standards.

NTP architecture
NTP uses stratums 1 to 16 to define clock accuracy, as shown in Figure 43. A lower stratum value
represents higher accuracy. Clocks at stratums 1 through 15 are in synchronized state, and clocks at
stratum 16 are not synchronized.
Figure 43 NTP architecture

(Diagram: an authoritative clock feeds the primary servers at stratum 1, which feed secondary servers at stratum 2, tertiary servers at stratum 3, and quaternary servers at stratum 4. Between levels, devices interact as server and client, as symmetric peers, or as broadcast/multicast servers and clients.)

A stratum 1 NTP server gets its time from an authoritative time source, such as an atomic clock. It
provides time for other devices as the primary NTP server. A stratum 2 time server receives its time
from a stratum 1 time server, and so on.
To ensure time accuracy and availability, you can specify multiple NTP servers for a device. The
device selects an optimal NTP server as the clock source based on parameters such as stratum. The
clock that the device selects is called the reference source. For more information about clock
selection, see the related protocols and standards.
If the devices in a network cannot synchronize to an authoritative time source, you can perform the
following tasks:

• Select a device that has a relatively accurate clock from the network.
• Use the local clock of the device as the reference clock to synchronize other devices in the
network.

NTP association modes


NTP supports the following association modes:
• Client/server mode
• Symmetric active/passive mode
• Broadcast mode
• Multicast mode
You can select one or more association modes for time synchronization. Table 2 provides a detailed
description of the four association modes.
In this document, an "NTP server" or a "server" refers to a device that operates as an NTP server in
client/server mode. Time servers refer to all the devices that can provide time synchronization,
including NTP servers, NTP symmetric peers, broadcast servers, and multicast servers.
Table 2 NTP association modes

Mode: Client/server
• Working process: On the client, specify the IP address of the NTP server. A client sends a clock synchronization message to the NTP servers. Upon receiving the message, the servers automatically operate in server mode and send a reply. If the client can be synchronized to multiple time servers, it selects an optimal clock and synchronizes its local clock to the optimal reference source after receiving the replies from the servers.
• Principle: A client can synchronize to a server, but a server cannot synchronize to a client.
• Application scenario: As Figure 43 shows, this mode is intended for configurations where devices of a higher stratum synchronize to devices with a lower stratum.

Mode: Symmetric active/passive
• Working process: On the symmetric active peer, specify the IP address of the symmetric passive peer. A symmetric active peer periodically sends clock synchronization messages to a symmetric passive peer. The symmetric passive peer automatically operates in symmetric passive mode and sends a reply. If the symmetric active peer can be synchronized to multiple time servers, it selects an optimal clock and synchronizes its local clock to the optimal reference source after receiving the replies from the servers.
• Principle: A symmetric active peer and a symmetric passive peer can be synchronized to each other. If both of them are synchronized, the peer with a higher stratum is synchronized to the peer with a lower stratum.
• Application scenario: As Figure 43 shows, this mode is most often used between servers with the same stratum to operate as a backup for one another. If a server fails to communicate with all the servers of a lower stratum, the server can still synchronize to the servers of the same stratum.

Mode: Broadcast
• Working process: A server periodically sends clock synchronization messages to the broadcast address 255.255.255.255. Clients listen to the broadcast messages from the servers and synchronize to the server according to the broadcast messages. When a client receives the first broadcast message, the client and the server start to exchange messages to calculate the network delay between them. Then, only the broadcast server sends clock synchronization messages.
• Principle: A broadcast client can synchronize to a broadcast server, but a broadcast server cannot synchronize to a broadcast client.
• Application scenario: A broadcast server sends clock synchronization messages to synchronize clients in the same subnet. As Figure 43 shows, broadcast mode is intended for configurations involving one or a few servers and a potentially large client population. The broadcast mode has lower time accuracy than the client/server and symmetric active/passive modes because only the broadcast servers send clock synchronization messages.

Mode: Multicast
• Working process: A multicast server periodically sends clock synchronization messages to the user-configured multicast address. Clients listen to the multicast messages from servers and synchronize to the server according to the received messages.
• Principle: A multicast client can synchronize to a multicast server, but a multicast server cannot synchronize to a multicast client.
• Application scenario: A multicast server can provide time synchronization for clients in the same subnet or in different subnets. The multicast mode has lower time accuracy than the client/server and symmetric active/passive modes.

NTP security
To improve time synchronization security, NTP provides the access control and authentication
functions.
NTP access control
You can control NTP access by using an ACL. The access rights are in the following order, from the
least restrictive to the most restrictive:
• Peer—Allows time requests and NTP control queries (such as alarms, authentication status,
and time server information) and allows the local device to synchronize itself to a peer device.
• Server—Allows time requests and NTP control queries, but does not allow the local device to
synchronize itself to a peer device.
• Synchronization—Allows only time requests from a system whose address passes the access
list criteria.
• Query—Allows only NTP control queries from a peer device to the local device.
When the device receives an NTP request, it matches the request against the access rights in order
from the least restrictive to the most restrictive: peer, server, synchronization, and query.
• If no NTP access control is configured, the peer access right applies.
• If the IP address of the peer device matches a permit statement in an ACL, the access right is
granted to the peer device. If a deny statement or no ACL is matched, no access right is
granted.
• If no ACL is specified for an access right or the ACL specified for the access right is not created,
the access right is not granted.

• If none of the ACLs specified for the access rights is created, the peer access right applies.
• If none of the ACLs specified for the access rights contains rules, no access right is granted.
This feature provides minimal security for a system running NTP. A more secure method is NTP
authentication.
NTP authentication
Use this feature to authenticate the NTP messages for security purposes. If an NTP message
passes authentication, the device can receive it and get time synchronization information. If not, the
device discards the message. This function makes sure the device does not synchronize to an
unauthorized time server.
Figure 44 NTP authentication

(Diagram: the sender computes a digest from the message and the key identified by the key ID, then sends the message, key ID, and digest to the receiver. The receiver looks up the key by key ID, computes the digest for the received message, and compares it with the received digest.)

As shown in Figure 44, NTP authentication is performed as follows:


1. The sender uses the key identified by the key ID to calculate a digest for the NTP message
through the MD5/HMAC authentication algorithm. Then it sends the calculated digest together
with the NTP message and key ID to the receiver.
2. Upon receiving the message, the receiver performs the following actions:
a. Finds the key according to the key ID in the message.
b. Uses the key and the MD5/HMAC authentication algorithm to calculate the digest for the
message.
c. Compares the digest with the digest contained in the NTP message.
− If they are different, the receiver discards the message.
− If they are the same and an NTP association is not required to be established, the
receiver provides a response packet. For information about NTP associations, see
"Configuring the maximum number of dynamic associations."
− If they are the same and an NTP association is required to be established or has existed,
the local device determines whether the sender is allowed to use the authentication ID.
If the sender is allowed to use the authentication ID, the receiver accepts the message.
If the sender is not allowed to use the authentication ID, the receiver discards the
message.

Protocols and standards


• RFC 1305, Network Time Protocol (Version 3) Specification, Implementation and Analysis
• RFC 5905, Network Time Protocol Version 4: Protocol and Algorithms Specification

Restrictions and guidelines: NTP configuration


• You cannot configure both NTP and SNTP on the same device.
• NTP is supported only on the following Layer 3 interfaces:

 Layer 3 Ethernet interfaces.
 Layer 3 Ethernet subinterfaces.
 Layer 3 aggregate interfaces.
 Layer 3 aggregate subinterfaces.
 VLAN interfaces.
 Tunnel interfaces.
• Do not configure NTP on an aggregate member port.
• The NTP service and SNTP service are mutually exclusive. You can only enable either NTP
service or SNTP service at a time.
• To avoid frequent time changes or even synchronization failures, do not specify more than one
reference source on a network.
• To use NTP for time synchronization, you must use the clock protocol command to specify
NTP for obtaining the time. For more information about the clock protocol command, see
device management commands in Fundamentals Command Reference.

NTP tasks at a glance


To configure NTP, perform the following tasks:
1. Enabling the NTP service
2. Configuring NTP association mode
 Configuring NTP in client/server mode
 Configuring NTP in symmetric active/passive mode
 Configuring NTP in broadcast mode
 Configuring NTP in multicast mode
3. (Optional.) Configuring the local clock as the reference source
4. (Optional.) Configuring access control rights
5. (Optional.) Configuring NTP authentication
 Configuring NTP authentication in client/server mode
 Configuring NTP authentication in symmetric active/passive mode
 Configuring NTP authentication in broadcast mode
 Configuring NTP authentication in multicast mode
6. (Optional.) Controlling NTP message sending and receiving
 Specifying a source address for NTP messages
 Disabling an interface from receiving NTP messages
 Configuring the maximum number of dynamic associations
 Setting a DSCP value for NTP packets
7. (Optional.) Specifying the NTP time-offset thresholds for log and trap outputs

Enabling the NTP service


Restrictions and guidelines
NTP and SNTP are mutually exclusive. Before you enable NTP, make sure SNTP is disabled.
Procedure
1. Enter system view.

system-view
2. Enable the NTP service.
ntp-service enable
By default, the NTP service is disabled.

Configuring NTP association mode


Configuring NTP in client/server mode
Restrictions and guidelines
To configure NTP in client/server mode, specify an NTP server for the client.
For a client to synchronize to an NTP server, make sure the server is synchronized by other devices
or uses its local clock as the reference source.
If the stratum level of a server is higher than or equal to that of a client, the client will not synchronize
to that server.
You can specify multiple servers for a client by executing the ntp-service unicast-server or
ntp-service ipv6 unicast-server command multiple times.
Procedure
1. Enter system view.
system-view
2. Specify an NTP server for the device.
IPv4:
ntp-service unicast-server { server-name | ip-address } [ vpn-instance
vpn-instance-name ] [ authentication-keyid keyid | maxpoll
maxpoll-interval | minpoll minpoll-interval | priority | source
interface-type interface-number | version number ] *
IPv6:
ntp-service ipv6 unicast-server { server-name | ipv6-address }
[ vpn-instance vpn-instance-name ] [ authentication-keyid keyid |
maxpoll maxpoll-interval | minpoll minpoll-interval | priority |
source interface-type interface-number ] *
By default, no NTP server is specified.
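For example, the following commands specify an IPv4 and an IPv6 NTP server for a client. They assume that the NTP service is already enabled, that the clock protocol ntp command has been configured, and that 1.0.1.11 and 3000::34 are reachable server addresses used here only for illustration:
<Sysname> system-view
[Sysname] ntp-service unicast-server 1.0.1.11
[Sysname] ntp-service ipv6 unicast-server 3000::34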

Configuring NTP in symmetric active/passive mode


Restrictions and guidelines
To configure NTP in symmetric active/passive mode, specify a symmetric passive peer for the active
peer.
For a symmetric passive peer to process NTP messages from a symmetric active peer, execute the
ntp-service enable command on the symmetric passive peer to enable NTP.
For time synchronization between the symmetric active peer and the symmetric passive peer, make
sure either or both of them are in synchronized state.
You can specify multiple symmetric passive peers by executing the ntp-service
unicast-peer or ntp-service ipv6 unicast-peer command multiple times.

Procedure
1. Enter system view.
system-view
2. Specify a symmetric passive peer for the device.
IPv4:
ntp-service unicast-peer { peer-name | ip-address } [ vpn-instance
vpn-instance-name ] [ authentication-keyid keyid | maxpoll
maxpoll-interval | minpoll minpoll-interval | priority | source
interface-type interface-number | version number ] *
IPv6:
ntp-service ipv6 unicast-peer { peer-name | ipv6-address }
[ vpn-instance vpn-instance-name ] [ authentication-keyid keyid |
maxpoll maxpoll-interval | minpoll minpoll-interval | priority |
source interface-type interface-number ] *
By default, no symmetric passive peer is specified.
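For example, the following commands configure the local device as a symmetric active peer of 3.0.1.32 (an illustrative address). They assume that the NTP service is already enabled on both peers and that at least one of them is in synchronized state:
<Sysname> system-view
[Sysname] ntp-service unicast-peer 3.0.1.32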

Configuring NTP in broadcast mode


Restrictions and guidelines
To configure NTP in broadcast mode, you must configure an NTP broadcast client and an NTP
broadcast server.
For a broadcast client to synchronize to a broadcast server, make sure the broadcast server is
synchronized by other devices or uses its local clock as the reference source.
Configuring the broadcast client
1. Enter system view.
system-view
2. Enter interface view.
interface interface-type interface-number
3. Configure the device to operate in broadcast client mode.
ntp-service broadcast-client
By default, the device does not operate in any NTP association mode.
After you execute the command, the device receives NTP broadcast messages from the
specified interface.
Configuring the broadcast server
1. Enter system view.
system-view
2. Enter interface view.
interface interface-type interface-number
3. Configure the device to operate in NTP broadcast server mode.
ntp-service broadcast-server [ authentication-keyid keyid | version
number ] *
By default, the device does not operate in any NTP association mode.
After you execute the command, the device sends NTP broadcast messages from the specified
interface.
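For example, the following commands set up NTP broadcast mode on VLAN-interface 2, assuming the NTP service is already enabled on both devices. The interface and the device names ServerDevice and ClientDevice are illustrative:
# On the broadcast server:
[ServerDevice] interface vlan-interface 2
[ServerDevice-Vlan-interface2] ntp-service broadcast-server
# On each broadcast client:
[ClientDevice] interface vlan-interface 2
[ClientDevice-Vlan-interface2] ntp-service broadcast-client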

Configuring NTP in multicast mode
Restrictions and guidelines
To configure NTP in multicast mode, you must configure an NTP multicast client and an NTP
multicast server.
For a multicast client to synchronize to a multicast server, make sure the multicast server is
synchronized by other devices or uses its local clock as the reference source.
Configuring a multicast client
1. Enter system view.
system-view
2. Enter interface view.
interface interface-type interface-number
3. Configure the device to operate in multicast client mode.
IPv4:
ntp-service multicast-client [ ip-address ]
IPv6:
ntp-service ipv6 multicast-client ipv6-address
By default, the device does not operate in any NTP association mode.
After you execute the command, the device receives NTP multicast messages from the
specified interface.
Configuring the multicast server
1. Enter system view.
system-view
2. Enter interface view.
interface interface-type interface-number
3. Configure the device to operate in multicast server mode.
IPv4:
ntp-service multicast-server [ ip-address ] [ authentication-keyid
keyid | ttl ttl-number | version number ] *
IPv6:
ntp-service ipv6 multicast-server ipv6-address [ authentication-keyid
keyid | ttl ttl-number ] *
By default, the device does not operate in any NTP association mode.
After you execute the command, the device sends NTP multicast messages from the specified
interface.
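For example, the following commands set up NTP multicast mode on VLAN-interface 2 with the default multicast group address (224.0.1.1, the address used in the multicast configuration example later in this chapter). The NTP service is assumed to be enabled on both devices, and the interface and device names are illustrative:
# On the multicast server:
[ServerDevice] interface vlan-interface 2
[ServerDevice-Vlan-interface2] ntp-service multicast-server
# On each multicast client:
[ClientDevice] interface vlan-interface 2
[ClientDevice-Vlan-interface2] ntp-service multicast-client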

Configuring the local clock as the reference source
About this task
This task enables the device to use the local clock as the reference so that the device is
synchronized.

Restrictions and guidelines
Make sure the local clock can provide the time accuracy required for the network. After you configure
the local clock as the reference source, the local clock is synchronized, and can operate as a time
server to synchronize other devices in the network. If the local clock is incorrect, timing errors occur.
Devices differ in clock precision. As a best practice to avoid network flapping and clock
synchronization failure, configure only one reference clock on the same network segment and make
sure the clock has high precision.
Prerequisites
Before you configure this feature, adjust the local system time to ensure that it is accurate.
Procedure
1. Enter system view.
system-view
2. Configure the local clock as the reference source.
ntp-service refclock-master [ ip-address ] [ stratum ]
By default, the device does not use the local clock as the reference source.
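For example, the following commands configure the local clock as the reference source with stratum level 2 (the stratum value is illustrative and should reflect the accuracy of the local clock):
<Sysname> system-view
[Sysname] ntp-service refclock-master 2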

Configuring access control rights


Prerequisites
Before you configure the right for peer devices to access the NTP services on the local device,
create and configure ACLs associated with the access right. For information about configuring an
ACL, see ACL and QoS Configuration Guide.
Procedure
1. Enter system view.
system-view
2. Configure the right for peer devices to access the NTP services on the local device.
IPv4:
ntp-service access { peer | query | server | synchronization } acl
ipv4-acl-number
IPv6:
ntp-service ipv6 { peer | query | server | synchronization } acl
ipv6-acl-number
By default, the right for peer devices to access the NTP services on the local device is peer.
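For example, the following commands grant the peer access right only to the device at 1.0.1.11. The ACL number and address are illustrative; for details about ACL commands, see ACL and QoS Configuration Guide:
<Sysname> system-view
[Sysname] acl basic 2001
[Sysname-acl-ipv4-basic-2001] rule permit source 1.0.1.11 0
[Sysname-acl-ipv4-basic-2001] quit
[Sysname] ntp-service access peer acl 2001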

Configuring NTP authentication


Configuring NTP authentication in client/server mode
Restrictions and guidelines
To ensure a successful NTP authentication in client/server mode, configure the same authentication
key ID, algorithm, and key on the server and client. Make sure the peer device is allowed to use the
key ID for authentication on the local device.
NTP authentication results differ when different configurations are performed on client and server.
For more information, see Table 3. (N/A in the table means that whether the configuration is
performed or not does not make any difference.)

Table 3 NTP authentication results

Client                                                       Server
Enable NTP       Specify the       Trusted                   Enable NTP       Trusted
authentication   server and key    key                       authentication   key

Successful authentication
Yes              Yes               Yes                       Yes              Yes

Failed authentication
Yes              Yes               Yes                       Yes              No
Yes              Yes               Yes                       No               N/A
Yes              Yes               No                        N/A              N/A

Authentication not performed
Yes              No                N/A                       N/A              N/A
No               N/A               N/A                       N/A              N/A

Configuring NTP authentication for a client


1. Enter system view.
system-view
2. Enable NTP authentication.
ntp-service authentication enable
By default, NTP authentication is disabled.
3. Configure an NTP authentication key.
ntp-service authentication-keyid keyid authentication-mode
{ hmac-sha-1 | hmac-sha-256 | hmac-sha-384 | hmac-sha-512 | md5 }
{ cipher | simple } string [ acl ipv4-acl-number | ipv6 acl
ipv6-acl-number ] *
By default, no NTP authentication key exists.
4. Configure the key as a trusted key.
ntp-service reliable authentication-keyid keyid
By default, no authentication key is configured as a trusted key.
5. Associate the specified key with an NTP server.
IPv4:
ntp-service unicast-server { server-name | ip-address } [ vpn-instance
vpn-instance-name ] authentication-keyid keyid
IPv6:
ntp-service ipv6 unicast-server { server-name | ipv6-address }
[ vpn-instance vpn-instance-name ] authentication-keyid keyid
Configuring NTP authentication for a server
1. Enter system view.
system-view
2. Enable NTP authentication.
ntp-service authentication enable
By default, NTP authentication is disabled.
3. Configure an NTP authentication key.

ntp-service authentication-keyid keyid authentication-mode
{ hmac-sha-1 | hmac-sha-256 | hmac-sha-384 | hmac-sha-512 | md5 }
{ cipher | simple } string [ acl ipv4-acl-number | ipv6 acl
ipv6-acl-number ] *
By default, no NTP authentication key exists.
4. Configure the key as a trusted key.
ntp-service reliable authentication-keyid keyid
By default, no authentication key is configured as a trusted key.
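For example, the following commands enable NTP authentication between a client and its server. The key ID 42, the hmac-sha-256 algorithm, the key string NTPkey123, and the server address 1.0.1.11 are illustrative values; the same key ID, algorithm, and key must be configured on both devices:
# On the client:
<Client> system-view
[Client] ntp-service authentication enable
[Client] ntp-service authentication-keyid 42 authentication-mode hmac-sha-256 simple NTPkey123
[Client] ntp-service reliable authentication-keyid 42
[Client] ntp-service unicast-server 1.0.1.11 authentication-keyid 42
# On the server:
<Server> system-view
[Server] ntp-service authentication enable
[Server] ntp-service authentication-keyid 42 authentication-mode hmac-sha-256 simple NTPkey123
[Server] ntp-service reliable authentication-keyid 42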

Configuring NTP authentication in symmetric active/passive mode
Restrictions and guidelines
To ensure a successful NTP authentication in symmetric active/passive mode, configure the same
authentication key ID, algorithm, and key on the active peer and passive peer. Make sure the peer
device is allowed to use the key ID for authentication on the local device.
NTP authentication results differ when different configurations are performed on active peer and
passive peer. For more information, see Table 4. (N/A in the table means that whether the
configuration is performed or not does not make any difference.)
Table 4 NTP authentication results

Active peer                                                                  Passive peer
Enable NTP       Specify the      Trusted   Stratum level                    Enable NTP       Trusted
authentication   peer and key     key                                        authentication   key

Successful authentication
Yes              Yes              Yes       N/A                              Yes              Yes

Failed authentication
Yes              Yes              Yes       N/A                              Yes              No
Yes              Yes              Yes       N/A                              No               N/A
Yes              No               N/A       N/A                              Yes              N/A
No               N/A              N/A       N/A                              Yes              N/A
Yes              Yes              No        Larger than the passive peer     N/A              N/A
Yes              Yes              No        Smaller than the passive peer    Yes              N/A

Authentication not performed
Yes              No               N/A       N/A                              No               N/A
No               N/A              N/A       N/A                              No               N/A
Yes              Yes              No        Smaller than the passive peer    No               N/A

Configuring NTP authentication for an active peer


1. Enter system view.
system-view
2. Enable NTP authentication.

ntp-service authentication enable
By default, NTP authentication is disabled.
3. Configure an NTP authentication key.
ntp-service authentication-keyid keyid authentication-mode
{ hmac-sha-1 | hmac-sha-256 | hmac-sha-384 | hmac-sha-512 | md5 }
{ cipher | simple } string [ acl ipv4-acl-number | ipv6 acl
ipv6-acl-number ] *
By default, no NTP authentication key exists.
4. Configure the key as a trusted key.
ntp-service reliable authentication-keyid keyid
By default, no authentication key is configured as a trusted key.
5. Associate the specified key with a passive peer.
IPv4:
ntp-service unicast-peer { ip-address | peer-name } [ vpn-instance
vpn-instance-name ] authentication-keyid keyid
IPv6:
ntp-service ipv6 unicast-peer { ipv6-address | peer-name }
[ vpn-instance vpn-instance-name ] authentication-keyid keyid
Configuring NTP authentication for a passive peer
1. Enter system view.
system-view
2. Enable NTP authentication.
ntp-service authentication enable
By default, NTP authentication is disabled.
3. Configure an NTP authentication key.
ntp-service authentication-keyid keyid authentication-mode
{ hmac-sha-1 | hmac-sha-256 | hmac-sha-384 | hmac-sha-512 | md5 }
{ cipher | simple } string [ acl ipv4-acl-number | ipv6 acl
ipv6-acl-number ] *
By default, no NTP authentication key exists.
4. Configure the key as a trusted key.
ntp-service reliable authentication-keyid keyid
By default, no authentication key is configured as a trusted key.

Configuring NTP authentication in broadcast mode


Restrictions and guidelines
To ensure a successful NTP authentication in broadcast mode, configure the same authentication
key ID, algorithm, and key on the broadcast server and client. Make sure the peer device is allowed
to use the key ID for authentication on the local device.
NTP authentication results differ when different configurations are performed on broadcast client and
server. For more information, see Table 5. (N/A in the table means that whether the configuration is
performed or not does not make any difference.)

Table 5 NTP authentication results

Broadcast server                                             Broadcast client
Enable NTP       Specify the       Trusted                   Enable NTP       Trusted
authentication   server and key    key                       authentication   key

Successful authentication
Yes              Yes               Yes                       Yes              Yes

Failed authentication
Yes              Yes               Yes                       Yes              No
Yes              Yes               Yes                       No               N/A
Yes              Yes               No                        Yes              N/A
Yes              No                N/A                       Yes              N/A
No               N/A               N/A                       Yes              N/A

Authentication not performed
Yes              Yes               No                        No               N/A
Yes              No                N/A                       No               N/A
No               N/A               N/A                       No               N/A

Configuring NTP authentication for a broadcast client


1. Enter system view.
system-view
2. Enable NTP authentication.
ntp-service authentication enable
By default, NTP authentication is disabled.
3. Configure an NTP authentication key.
ntp-service authentication-keyid keyid authentication-mode
{ hmac-sha-1 | hmac-sha-256 | hmac-sha-384 | hmac-sha-512 | md5 }
{ cipher | simple } string [ acl ipv4-acl-number | ipv6 acl
ipv6-acl-number ] *
By default, no NTP authentication key exists.
4. Configure the key as a trusted key.
ntp-service reliable authentication-keyid keyid
By default, no authentication key is configured as a trusted key.
Configuring NTP authentication for a broadcast server
1. Enter system view.
system-view
2. Enable NTP authentication.
ntp-service authentication enable
By default, NTP authentication is disabled.
3. Configure an NTP authentication key.
ntp-service authentication-keyid keyid authentication-mode
{ hmac-sha-1 | hmac-sha-256 | hmac-sha-384 | hmac-sha-512 | md5 }
{ cipher | simple } string [ acl ipv4-acl-number | ipv6 acl
ipv6-acl-number ] *

By default, no NTP authentication key exists.
4. Configure the key as a trusted key.
ntp-service reliable authentication-keyid keyid
By default, no authentication key is configured as a trusted key.
5. Enter interface view.
interface interface-type interface-number
6. Associate the specified key with the broadcast server.
ntp-service broadcast-server authentication-keyid keyid
By default, the broadcast server is not associated with a key.
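For example, the following commands enable NTP authentication on a broadcast server. The key ID 88, the md5 algorithm, the key string BcastKey123, and VLAN-interface 2 are illustrative values; configure and trust the same key on every broadcast client:
<Server> system-view
[Server] ntp-service authentication enable
[Server] ntp-service authentication-keyid 88 authentication-mode md5 simple BcastKey123
[Server] ntp-service reliable authentication-keyid 88
[Server] interface vlan-interface 2
[Server-Vlan-interface2] ntp-service broadcast-server authentication-keyid 88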

Configuring NTP authentication in multicast mode


Restrictions and guidelines
To ensure a successful NTP authentication in multicast mode, configure the same authentication key
ID, algorithm, and key on the multicast server and client. Make sure the peer device is allowed to use
the key ID for authentication on the local device.
NTP authentication results differ when different configurations are performed on the multicast client and
server. For more information, see Table 6. (N/A in the table means that whether the configuration is
performed or not does not make any difference.)
Table 6 NTP authentication results

Multicast server                                             Multicast client
Enable NTP       Specify the       Trusted                   Enable NTP       Trusted
authentication   server and key    key                       authentication   key

Successful authentication
Yes              Yes               Yes                       Yes              Yes

Failed authentication
Yes              Yes               Yes                       Yes              No
Yes              Yes               Yes                       No               N/A
Yes              Yes               No                        Yes              N/A
Yes              No                N/A                       Yes              N/A
No               N/A               N/A                       Yes              N/A

Authentication not performed
Yes              Yes               No                        No               N/A
Yes              No                N/A                       No               N/A
No               N/A               N/A                       No               N/A

Configuring NTP authentication for a multicast client


1. Enter system view.
system-view
2. Enable NTP authentication.
ntp-service authentication enable
By default, NTP authentication is disabled.
3. Configure an NTP authentication key.

ntp-service authentication-keyid keyid authentication-mode
{ hmac-sha-1 | hmac-sha-256 | hmac-sha-384 | hmac-sha-512 | md5 }
{ cipher | simple } string [ acl ipv4-acl-number | ipv6 acl
ipv6-acl-number ] *
By default, no NTP authentication key exists.
4. Configure the key as a trusted key.
ntp-service reliable authentication-keyid keyid
By default, no authentication key is configured as a trusted key.
Configuring NTP authentication for a multicast server
1. Enter system view.
system-view
2. Enable NTP authentication.
ntp-service authentication enable
By default, NTP authentication is disabled.
3. Configure an NTP authentication key.
ntp-service authentication-keyid keyid authentication-mode
{ hmac-sha-1 | hmac-sha-256 | hmac-sha-384 | hmac-sha-512 | md5 }
{ cipher | simple } string [ acl ipv4-acl-number | ipv6 acl
ipv6-acl-number ] *
By default, no NTP authentication key exists.
4. Configure the key as a trusted key.
ntp-service reliable authentication-keyid keyid
By default, no authentication key is configured as a trusted key.
5. Enter interface view.
interface interface-type interface-number
6. Associate the specified key with a multicast server.
IPv4:
ntp-service multicast-server [ ip-address ] authentication-keyid keyid
IPv6:
ntp-service ipv6 multicast-server ipv6-multicast-address
authentication-keyid keyid
By default, no multicast server is associated with the specified key.

Controlling NTP message sending and receiving


Specifying a source address for NTP messages
About this task
You can specify a source address for NTP messages directly or by specifying a source interface. If
you specify a source interface for NTP messages, the device uses the IP address of the specified
interface as the source address to send NTP messages.
Restrictions and guidelines
To prevent interface status changes from causing NTP communication failures, specify an interface
that is always up as the source interface, a loopback interface for example.

When the device responds to an NTP request, the source IP address of the NTP response is always
the IP address of the interface that has received the NTP request.
If you have specified the source interface for NTP messages in the ntp-service
unicast-server/ntp-service ipv6 unicast-server or ntp-service unicast-peer/ntp-service ipv6
unicast-peer command, the IP address of the specified interface is used as the source IP address
for NTP messages.
If you have configured the ntp-service broadcast-server or ntp-service
multicast-server/ntp-service ipv6 multicast-server command in an interface view, the IP address
of the interface is used as the source IP address for broadcast or multicast NTP messages.
Procedure
1. Enter system view.
system-view
2. Specify the source address for NTP messages.
IPv4:
ntp-service source { interface-type interface-number | ipv4-address }
IPv6:
ntp-service ipv6 source interface-type interface-number
By default, no source address is specified for NTP messages.
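For example, the following commands use LoopBack 0 as the source interface for NTP messages, assuming LoopBack 0 has been created and assigned a reachable IP address (the interface is illustrative):
<Sysname> system-view
[Sysname] ntp-service source loopback 0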

Disabling an interface from receiving NTP messages


About this task
When NTP is enabled, all interfaces by default can receive NTP messages. For security purposes,
you can disable some of the interfaces from receiving NTP messages.
Procedure
1. Enter system view.
system-view
2. Enter interface view.
interface interface-type interface-number
3. Disable the interface from receiving NTP packets.
IPv4:
undo ntp-service inbound enable
IPv6:
undo ntp-service ipv6 inbound enable
By default, an interface receives NTP messages.
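For example, the following commands disable VLAN-interface 2 (an illustrative interface) from receiving NTP messages:
<Sysname> system-view
[Sysname] interface vlan-interface 2
[Sysname-Vlan-interface2] undo ntp-service inbound enable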

Configuring the maximum number of dynamic associations


About this task
Perform this task to restrict the number of dynamic associations to prevent dynamic associations
from occupying too many system resources.
NTP has the following types of associations:
• Static association—A manually created association.
• Dynamic association—Temporary association created by the system during NTP operation. A
dynamic association is removed if no messages are exchanged within about 12 minutes.

The following describes how an association is established in different association modes:
• Client/server mode—After you specify an NTP server, the system creates a static association
on the client. The server simply responds passively upon the receipt of a message, rather than
creating an association (static or dynamic).
• Symmetric active/passive mode—After you specify a symmetric passive peer on a
symmetric active peer, static associations are created on the symmetric active peer, and
dynamic associations are created on the symmetric passive peer.
• Broadcast or multicast mode—Static associations are created on the server, and dynamic
associations are created on the client.
Restrictions and guidelines
A single device can have a maximum of 128 concurrent associations, including static associations
and dynamic associations.
Procedure
1. Enter system view.
system-view
2. Configure the maximum number of dynamic sessions.
ntp-service max-dynamic-sessions number
By default, the maximum number of dynamic sessions is 100.
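For example, the following commands limit the device to 50 dynamic associations (the value is illustrative):
<Sysname> system-view
[Sysname] ntp-service max-dynamic-sessions 50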

Setting a DSCP value for NTP packets


About this task
The DSCP value determines the sending precedence of an NTP packet.
Procedure
1. Enter system view.
system-view
2. Set a DSCP value for NTP packets.
IPv4:
ntp-service dscp dscp-value
IPv6:
ntp-service ipv6 dscp dscp-value
The default DSCP value is 48 for IPv4 packets and 56 for IPv6 packets.
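For example, the following commands set the DSCP value for IPv4 NTP packets to 46 (the value is illustrative):
<Sysname> system-view
[Sysname] ntp-service dscp 46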

Specifying the NTP time-offset thresholds for log and trap outputs
About this task
By default, the system synchronizes the NTP client's time to the server and outputs a log and a trap
when the time offset exceeds 128 ms for multiple times.
After you set the NTP time-offset thresholds for log and trap outputs, the system synchronizes the
client's time to the server when the time offset exceeds 128 ms for multiple times, but outputs logs
and traps only when the time offset exceeds the specified thresholds, respectively.
Procedure
1. Enter system view.

system-view
2. Specify the NTP time-offset thresholds for log and trap outputs.
ntp-service time-offset-threshold { log log-threshold | trap
trap-threshold } *
By default, no NTP time-offset thresholds are set for log and trap outputs.
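For example, the following commands output a log when the time offset exceeds 300 ms and a trap when it exceeds 600 ms (the threshold values are illustrative and assumed to be in milliseconds):
<Sysname> system-view
[Sysname] ntp-service time-offset-threshold log 300 trap 600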

Display and maintenance commands for NTP


Execute display commands in any view.

Task                                                           Command
Display information about IPv6 NTP associations.               display ntp-service ipv6 sessions [ verbose ]
Display information about IPv4 NTP associations.               display ntp-service sessions [ verbose ]
Display information about NTP service status.                  display ntp-service status
Display brief information about the NTP servers from the       display ntp-service trace [ source
local device back to the primary NTP server.                   interface-type interface-number ]

NTP configuration examples


Example: Configuring NTP client/server association mode
Network configuration
As shown in Figure 45, perform the following tasks:
• Configure Device A's local clock as its reference source, with stratum level 2.
• Configure Device B to operate in client mode and specify Device A as the NTP server of Device
B.
Figure 45 Network diagram
(Diagram: Device A at 1.0.1.11/24 directly connected to Device B at 1.0.1.12/24.)

Procedure

1. Assign an IP address to each interface, and make sure Device A and Device B can reach each
other, as shown in Figure 45. (Details not shown.)
2. Configure Device A:
# Enable the NTP service.
<DeviceA> system-view
[DeviceA] ntp-service enable
# Specify the local clock as the reference source, with stratum level 2.
[DeviceA] ntp-service refclock-master 2
3. Configure Device B:
# Enable the NTP service.

<DeviceB> system-view
[DeviceB] ntp-service enable
# Specify NTP for obtaining the time.
[DeviceB] clock protocol ntp
# Specify Device A as the NTP server of Device B.
[DeviceB] ntp-service unicast-server 1.0.1.11

Verifying the configuration


# Verify that Device B has synchronized its time with Device A, and the clock stratum level of
Device B is 3.
[DeviceB] display ntp-service status
Clock status: synchronized
Clock stratum: 3
System peer: 1.0.1.11
Local mode: client
Reference clock ID: 1.0.1.11
Leap indicator: 00
Clock jitter: 0.000977 s
Stability: 0.000 pps
Clock precision: 2^-19
Root delay: 0.00383 ms
Root dispersion: 16.26572 ms
Reference time: d0c6033f.b9923965 Wed, Dec 29 2019 18:58:07.724
System poll interval: 64 s
# Verify that an IPv4 NTP association has been established between Device B and Device A.
[DeviceB] display ntp-service sessions
source reference stra reach poll now offset delay disper
********************************************************************************
[12345]1.0.1.11 127.127.1.0 2 1 64 15 -4.0 0.0038 16.262
Notes: 1 source(master), 2 source(peer), 3 selected, 4 candidate, 5 configured.
Total sessions: 1

Example: Configuring IPv6 NTP client/server association mode
Network configuration
As shown in Figure 46, perform the following tasks:
• Configure Device A's local clock as its reference source, with stratum level 2.
• Configure Device B to operate in client mode and specify Device A as the IPv6 NTP server of
Device B.
Figure 46 Network diagram
(Diagram: Device A, the NTP server, at 3000::34/64 directly connected to Device B, the NTP client, at 3000::39/64.)

Procedure

1. Assign an IP address to each interface, and make sure Device A and Device B can reach each
other, as shown in Figure 46. (Details not shown.)
2. Configure Device A:
# Enable the NTP service.
<DeviceA> system-view
[DeviceA] ntp-service enable
# Specify the local clock as the reference source, with stratum level 2.
[DeviceA] ntp-service refclock-master 2
3. Configure Device B:
# Enable the NTP service.
<DeviceB> system-view
[DeviceB] ntp-service enable
# Specify NTP for obtaining the time.
[DeviceB] clock protocol ntp
# Specify Device A as the IPv6 NTP server of Device B.
[DeviceB] ntp-service ipv6 unicast-server 3000::34

Verifying the configuration


# Verify that Device B has synchronized its time with Device A, and the clock stratum level of
Device B is 3.
[DeviceB] display ntp-service status
Clock status: synchronized
Clock stratum: 3
System peer: 3000::34
Local mode: client
Reference clock ID: 163.29.247.19
Leap indicator: 00
Clock jitter: 0.000977 s
Stability: 0.000 pps
Clock precision: 2^-19
Root delay: 0.02649 ms
Root dispersion: 12.24641 ms
Reference time: d0c60419.9952fb3e Wed, Dec 29 2019 19:01:45.598
System poll interval: 64 s
# Verify that an IPv6 NTP association has been established between Device B and Device A.
[DeviceB] display ntp-service ipv6 sessions
Notes: 1 source(master), 2 source(peer), 3 selected, 4 candidate, 5 configured.

Source: [12345]3000::34
Reference: 127.127.1.0 Clock stratum: 2
Reachabilities: 15 Poll interval: 64
Last receive time: 19 Offset: 0.0
Roundtrip delay: 0.0 Dispersion: 0.0

Total sessions: 1

Example: Configuring NTP symmetric active/passive association mode
Network configuration
As shown in Figure 47, perform the following tasks:
• Configure Device A's local clock as its reference source, with stratum level 2.
• Configure Device A to operate in symmetric active mode and specify Device B as the passive
peer of Device A.
Figure 47 Network diagram
(Diagram: Device A, the symmetric active peer, at 3.0.1.31/24 connected to Device B, the symmetric passive peer, at 3.0.1.32/24.)

Procedure

1. Assign an IP address to each interface, and make sure Device A and Device B can reach each
other, as shown in Figure 47. (Details not shown.)
2. Configure Device B:
# Enable the NTP service.
<DeviceB> system-view
[DeviceB] ntp-service enable
# Specify NTP for obtaining the time.
[DeviceB] clock protocol ntp
3. Configure Device A:
# Enable the NTP service.
<DeviceA> system-view
[DeviceA] ntp-service enable
# Specify NTP for obtaining the time.
[DeviceA] clock protocol ntp
# Specify the local clock as the reference source, with stratum level 2.
[DeviceA] ntp-service refclock-master 2
# Configure Device B as its symmetric passive peer.
[DeviceA] ntp-service unicast-peer 3.0.1.32

Verifying the configuration


# Verify that Device B has synchronized its time with Device A.
[DeviceB] display ntp-service status
Clock status: synchronized
Clock stratum: 3
System peer: 3.0.1.31
Local mode: sym_passive
Reference clock ID: 3.0.1.31
Leap indicator: 00
Clock jitter: 0.000916 s
Stability: 0.000 pps

Clock precision: 2^-19
Root delay: 0.00609 ms
Root dispersion: 1.95859 ms
Reference time: 83aec681.deb6d3e5 Wed, Jan 8 2019 14:33:11.081
System poll interval: 64 s
# Verify that an IPv4 NTP association has been established between Device B and Device A.
[DeviceB] display ntp-service sessions
source reference stra reach poll now offset delay disper
********************************************************************************
[12]3.0.1.31 127.127.1.0 2 62 64 34 0.4251 6.0882 1392.1
Notes: 1 source(master), 2 source(peer), 3 selected, 4 candidate, 5 configured.
Total sessions: 1

Example: Configuring IPv6 NTP symmetric active/passive association mode
Network configuration
As shown in Figure 48, perform the following tasks:
• Configure Device A's local clock as its reference source, with stratum level 2.
• Configure Device A to operate in symmetric active mode and specify Device B as the IPv6
passive peer of Device A.
Figure 48 Network diagram
(Diagram: Device A, the symmetric active peer, at 3000::35/64 connected to Device B, the symmetric passive peer, at 3000::36/64.)

Procedure

1. Assign an IP address to each interface, and make sure Device A and Device B can reach each
other, as shown in Figure 48. (Details not shown.)
2. Configure Device B:
# Enable the NTP service.
<DeviceB> system-view
[DeviceB] ntp-service enable
# Specify NTP for obtaining the time.
[DeviceB] clock protocol ntp
3. Configure Device A:
# Enable the NTP service.
<DeviceA> system-view
[DeviceA] ntp-service enable
# Specify NTP for obtaining the time.
[DeviceA] clock protocol ntp
# Specify the local clock as the reference source, with stratum level 2.
[DeviceA] ntp-service refclock-master 2

# Configure Device B as the IPv6 symmetric passive peer.
[DeviceA] ntp-service ipv6 unicast-peer 3000::36

Verifying the configuration


# Verify that Device B has synchronized its time with Device A.
[DeviceB] display ntp-service status
Clock status: synchronized
Clock stratum: 3
System peer: 3000::35
Local mode: sym_passive
Reference clock ID: 251.73.79.32
Leap indicator: 11
Clock jitter: 0.000977 s
Stability: 0.000 pps
Clock precision: 2^-19
Root delay: 0.01855 ms
Root dispersion: 9.23483 ms
Reference time: d0c6047c.97199f9f Wed, Dec 29 2019 19:03:24.590
System poll interval: 64 s
# Verify that an IPv6 NTP association has been established between Device B and Device A.
[DeviceB] display ntp-service ipv6 sessions
Notes: 1 source(master), 2 source(peer), 3 selected, 4 candidate, 5 configured.

Source: [1234]3000::35
Reference: 127.127.1.0 Clock stratum: 2
Reachabilities: 15 Poll interval: 64
Last receive time: 19 Offset: 0.0
Roundtrip delay: 0.0 Dispersion: 0.0

Total sessions: 1

Example: Configuring NTP broadcast association mode


Network configuration
As shown in Figure 49, configure Switch C as the NTP server for multiple devices on the same
network segment so that these devices synchronize the time with Switch C.
• Configure Switch C's local clock as its reference source, with stratum level 2.
• Configure Switch C to operate in broadcast server mode and send broadcast messages from
VLAN-interface 2.
• Configure Switch A and Switch B to operate in broadcast client mode, and listen to broadcast
messages on VLAN-interface 2.

Figure 49 Network diagram
(Diagram: Switch C, the NTP broadcast server, at 3.0.1.31/24 on VLAN-interface 2; Switch A, an NTP broadcast client, at 3.0.1.30/24 on VLAN-interface 2; Switch B, an NTP broadcast client, at 3.0.1.32/24 on VLAN-interface 2. All three switches are on the same network segment.)

Procedure

1. Assign an IP address to each interface, and make sure Switch A, Switch B, and Switch C can
reach each other, as shown in Figure 49. (Details not shown.)
2. Configure Switch C:
# Enable the NTP service.
<SwitchC> system-view
[SwitchC] ntp-service enable
# Specify NTP for obtaining the time.
[SwitchC] clock protocol ntp
# Specify the local clock as the reference source, with stratum level 2.
[SwitchC] ntp-service refclock-master 2
# Configure Switch C to operate in broadcast server mode and send broadcast messages from
VLAN-interface 2.
[SwitchC] interface vlan-interface 2
[SwitchC-Vlan-interface2] ntp-service broadcast-server
3. Configure Switch A:
# Enable the NTP service.
<SwitchA> system-view
[SwitchA] ntp-service enable
# Specify NTP for obtaining the time.
[SwitchA] clock protocol ntp
# Configure Switch A to operate in broadcast client mode and receive broadcast messages on
VLAN-interface 2.
[SwitchA] interface vlan-interface 2
[SwitchA-Vlan-interface2] ntp-service broadcast-client
4. Configure Switch B:
# Enable the NTP service.
<SwitchB> system-view
[SwitchB] ntp-service enable
# Specify NTP for obtaining the time.
[SwitchB] clock protocol ntp

# Configure Switch B to operate in broadcast client mode and receive broadcast messages on
VLAN-interface 2.
[SwitchB] interface vlan-interface 2
[SwitchB-Vlan-interface2] ntp-service broadcast-client

Verifying the configuration


The following procedure uses Switch A as an example to verify the configuration.
# Verify that Switch A has synchronized to Switch C, and the clock stratum level is 3 on Switch A and
2 on Switch C.
[SwitchA-Vlan-interface2] display ntp-service status
Clock status: synchronized
Clock stratum: 3
System peer: 3.0.1.31
Local mode: bclient
Reference clock ID: 3.0.1.31
Leap indicator: 00
Clock jitter: 0.044281 s
Stability: 0.000 pps
Clock precision: 2^-19
Root delay: 0.00229 ms
Root dispersion: 4.12572 ms
Reference time: d0d289fe.ec43c720 Sat, Jan 8 2019 7:00:14.922
System poll interval: 64 s

# Verify that an IPv4 NTP association has been established between Switch A and Switch C.
[SwitchA-Vlan-interface2] display ntp-service sessions
source reference stra reach poll now offset delay disper
********************************************************************************
[1245]3.0.1.31 127.127.1.0 2 1 64 519 -0.0 0.0022 4.1257
Notes: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured.
Total sessions: 1

Example: Configuring NTP multicast association mode


Network configuration
As shown in Figure 50, configure Switch C as the NTP server for multiple devices on different
network segments so that these devices synchronize the time with Switch C.
• Configure Switch C's local clock as its reference source, with stratum level 2.
• Configure Switch C to operate in multicast server mode and send multicast messages from
VLAN-interface 2.
• Configure Switch A and Switch D to operate in multicast client mode and receive multicast
messages on VLAN-interface 3 and VLAN-interface 2, respectively.

Figure 50 Network diagram
(Diagram: Switch C, the NTP multicast server, at 3.0.1.31/24 on VLAN-interface 2; Switch D, an NTP multicast client, at 3.0.1.32/24 on VLAN-interface 2 in the same subnet as Switch C; Switch B connects the two subnets with VLAN-interface 2 at 3.0.1.30/24 and VLAN-interface 3 at 1.0.1.10/24; Switch A, an NTP multicast client, at 1.0.1.11/24 on VLAN-interface 3.)

Procedure

1. Assign an IP address to each interface, and make sure the switches can reach each other, as
shown in Figure 50. (Details not shown.)
2. Configure Switch C:
# Enable the NTP service.
<SwitchC> system-view
[SwitchC] ntp-service enable
# Specify NTP for obtaining the time.
[SwitchC] clock protocol ntp
# Specify the local clock as the reference source, with stratum level 2.
[SwitchC] ntp-service refclock-master 2
# Configure Switch C to operate in multicast server mode and send multicast messages from
VLAN-interface 2.
[SwitchC] interface vlan-interface 2
[SwitchC-Vlan-interface2] ntp-service multicast-server
3. Configure Switch D:
# Enable the NTP service.
<SwitchD> system-view
[SwitchD] ntp-service enable
# Specify NTP for obtaining the time.
[SwitchD] clock protocol ntp
# Configure Switch D to operate in multicast client mode and receive multicast messages on
VLAN-interface 2.
[SwitchD] interface vlan-interface 2
[SwitchD-Vlan-interface2] ntp-service multicast-client
4. Verify the configuration:
# Verify that Switch D has synchronized to Switch C, and the clock stratum level is 3 on Switch
D and 2 on Switch C.
Because Switch D and Switch C are on the same subnet, Switch D can receive the multicast
messages from Switch C without having the multicast functions enabled.
[SwitchD-Vlan-interface2] display ntp-service status
Clock status: synchronized

Clock stratum: 3
System peer: 3.0.1.31
Local mode: bclient
Reference clock ID: 3.0.1.31
Leap indicator: 00
Clock jitter: 0.044281 s
Stability: 0.000 pps
Clock precision: 2^-19
Root delay: 0.00229 ms
Root dispersion: 4.12572 ms
Reference time: d0d289fe.ec43c720 Sat, Jan 8 2019 7:00:14.922
System poll interval: 64 s
# Verify that an IPv4 NTP association has been established between Switch D and Switch C.
[SwitchD-Vlan-interface2] display ntp-service sessions
source reference stra reach poll now offset delay disper
********************************************************************************
[1245]3.0.1.31 127.127.1.0 2 1 64 519 -0.0 0.0022 4.1257
Notes: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured.
Total sessions: 1
5. Configure Switch B:
Because Switch A and Switch C are on different subnets, you must enable the multicast
functions on Switch B before Switch A can receive multicast messages from Switch C.
# Enable IP multicast functions.
<SwitchB> system-view
[SwitchB] multicast routing
[SwitchB-mrib] quit
[SwitchB] interface vlan-interface 2
[SwitchB-Vlan-interface2] pim dm
[SwitchB-Vlan-interface2] quit
[SwitchB] vlan 3
[SwitchB-vlan3] port gigabitethernet 1/0/1
[SwitchB-vlan3] quit
[SwitchB] interface vlan-interface 3
[SwitchB-Vlan-interface3] igmp enable
[SwitchB-Vlan-interface3] igmp static-group 224.0.1.1
[SwitchB-Vlan-interface3] quit
[SwitchB] igmp-snooping
[SwitchB-igmp-snooping] quit
[SwitchB] interface gigabitethernet 1/0/1
[SwitchB-GigabitEthernet1/0/1] igmp-snooping static-group 224.0.1.1 vlan 3
6. Configure Switch A:
# Enable the NTP service.
<SwitchA> system-view
[SwitchA] ntp-service enable
# Specify NTP for obtaining the time.
[SwitchA] clock protocol ntp
# Configure Switch A to operate in multicast client mode and receive multicast messages on
VLAN-interface 3.

[SwitchA] interface vlan-interface 3
[SwitchA-Vlan-interface3] ntp-service multicast-client

Verifying the configuration


# Verify that Switch A has synchronized its time with Switch C, and the clock stratum level of
Switch A is 3.
[SwitchA-Vlan-interface3] display ntp-service status
Clock status: synchronized
Clock stratum: 3
System peer: 3.0.1.31
Local mode: bclient
Reference clock ID: 3.0.1.31
Leap indicator: 00
Clock jitter: 0.165741 s
Stability: 0.000 pps
Clock precision: 2^-19
Root delay: 0.00534 ms
Root dispersion: 4.51282 ms
Reference time: d0c61289.10b1193f Wed, Dec 29 2019 20:03:21.065
System poll interval: 64 s
# Verify that an IPv4 NTP association has been established between Switch A and Switch C.
[SwitchA-Vlan-interface3] display ntp-service sessions
source reference stra reach poll now offset delay disper
********************************************************************************
[1234]3.0.1.31 127.127.1.0 2 247 64 381 -0.0 0.0053 4.5128
Notes: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured.
Total sessions: 1

Example: Configuring IPv6 NTP multicast association mode


Network configuration
As shown in Figure 51, configure Switch C as the NTP server for multiple devices on different
network segments so that these devices synchronize the time with Switch C.
• Configure Switch C's local clock as its reference source, with stratum level 2.
• Configure Switch C to operate in IPv6 multicast server mode and send IPv6 multicast
messages from VLAN-interface 2.
• Configure Switch A and Switch D to operate in IPv6 multicast client mode and receive IPv6
multicast messages on VLAN-interface 3 and VLAN-interface 2, respectively.

Figure 51 Network diagram
(Topology: Switch C, the NTP multicast server, uses VLAN-interface 2 at 3000::2/64. Switch B connects to that segment through VLAN-interface 2 at 3000::1/64 and to Switch A through VLAN-interface 3 at 2000::2/64. Switch A, an NTP multicast client, uses VLAN-interface 3 at 2000::1/64. Switch D, an NTP multicast client, uses VLAN-interface 2 at 3000::3/64 on the same segment as Switch C.)

Procedure

1. Assign an IP address to each interface, and make sure the switches can reach each other, as
shown in Figure 51. (Details not shown.)
2. Configure Switch C:
# Enable the NTP service.
<SwitchC> system-view
[SwitchC] ntp-service enable
# Specify NTP for obtaining the time.
[SwitchC] clock protocol ntp
# Specify the local clock as the reference source, with stratum level 2.
[SwitchC] ntp-service refclock-master 2
# Configure Switch C to operate in IPv6 multicast server mode and send multicast messages
from VLAN-interface 2.
[SwitchC] interface vlan-interface 2
[SwitchC-Vlan-interface2] ntp-service ipv6 multicast-server ff24::1
3. Configure Switch D:
# Enable the NTP service.
<SwitchD> system-view
[SwitchD] ntp-service enable
# Specify NTP for obtaining the time.
[SwitchD] clock protocol ntp
# Configure Switch D to operate in IPv6 multicast client mode and receive multicast messages
on VLAN-interface 2.
[SwitchD] interface vlan-interface 2
[SwitchD-Vlan-interface2] ntp-service ipv6 multicast-client ff24::1
4. Verify the configuration:
# Verify that Switch D has synchronized its time with Switch C, and the clock stratum level of
Switch D is 3.
Because Switch D and Switch C are on the same subnet, Switch D can receive the IPv6 multicast
messages from Switch C without having the IPv6 multicast functions enabled.
[SwitchD-Vlan-interface2] display ntp-service status
Clock status: synchronized

Clock stratum: 3
System peer: 3000::2
Local mode: bclient
Reference clock ID: 165.84.121.65
Leap indicator: 00
Clock jitter: 0.000977 s
Stability: 0.000 pps
Clock precision: 2^-19
Root delay: 0.00000 ms
Root dispersion: 8.00578 ms
Reference time: d0c60680.9754fb17 Wed, Dec 29 2019 19:12:00.591
System poll interval: 64 s
# Verify that an IPv6 NTP association has been established between Switch D and Switch C.
[SwitchD-Vlan-interface2] display ntp-service ipv6 sessions
Notes: 1 source(master), 2 source(peer), 3 selected, 4 candidate, 5 configured.

Source: [1234]3000::2
Reference: 127.127.1.0 Clock stratum: 2
Reachabilities: 111 Poll interval: 64
Last receive time: 23 Offset: -0.0
Roundtrip delay: 0.0 Dispersion: 0.0

Total sessions: 1
5. Configure Switch B:
Because Switch A and Switch C are on different subnets, you must enable the IPv6 multicast
functions on Switch B before Switch A can receive IPv6 multicast messages from Switch C.
# Enable IPv6 multicast functions.
<SwitchB> system-view
[SwitchB] ipv6 multicast routing
[SwitchB-mrib6] quit
[SwitchB] interface vlan-interface 2
[SwitchB-Vlan-interface2] ipv6 pim dm
[SwitchB-Vlan-interface2] quit
[SwitchB] vlan 3
[SwitchB-vlan3] port gigabitethernet 1/0/1
[SwitchB-vlan3] quit
[SwitchB] interface vlan-interface 3
[SwitchB-Vlan-interface3] mld enable
[SwitchB-Vlan-interface3] mld static-group ff24::1
[SwitchB-Vlan-interface3] quit
[SwitchB] mld-snooping
[SwitchB-mld-snooping] quit
[SwitchB] interface gigabitethernet 1/0/1
[SwitchB-GigabitEthernet1/0/1] mld-snooping static-group ff24::1 vlan 3
6. Configure Switch A:
# Enable the NTP service.
<SwitchA> system-view
[SwitchA] ntp-service enable

# Specify NTP for obtaining the time.
[SwitchA] clock protocol ntp
# Configure Switch A to operate in IPv6 multicast client mode and receive IPv6 multicast
messages on VLAN-interface 3.
[SwitchA] interface vlan-interface 3
[SwitchA-Vlan-interface3] ntp-service ipv6 multicast-client ff24::1

Verifying the configuration


# Verify that Switch A has synchronized to Switch C, and the clock stratum level is 3 on Switch A and
2 on Switch C.
[SwitchA-Vlan-interface3] display ntp-service status
Clock status: synchronized
Clock stratum: 3
System peer: 3000::2
Local mode: bclient
Reference clock ID: 165.84.121.65
Leap indicator: 00
Clock jitter: 0.165741 s
Stability: 0.000 pps
Clock precision: 2^-19
Root delay: 0.00534 ms
Root dispersion: 4.51282 ms
Reference time: d0c61289.10b1193f Wed, Dec 29 2019 20:03:21.065
System poll interval: 64 s

# Verify that an IPv6 NTP association has been established between Switch A and Switch C.
[SwitchA-Vlan-interface3] display ntp-service ipv6 sessions
Notes: 1 source(master), 2 source(peer), 3 selected, 4 candidate, 5 configured.

Source: [124]3000::2
Reference: 127.127.1.0 Clock stratum: 2
Reachabilities: 2 Poll interval: 64
Last receive time: 71 Offset: -0.0
Roundtrip delay: 0.0 Dispersion: 0.0

Total sessions: 1

Example: Configuring NTP authentication in client/server association mode

Network configuration
As shown in Figure 52, perform the following tasks:
• Configure Device A's local clock as its reference source, with stratum level 2.
• Configure Device B to operate in client mode and specify Device A as the NTP server of Device
B.
• Configure NTP authentication on both Device A and Device B.

Figure 52 Network diagram
(Device A, the NTP server, at 1.0.1.11/24; Device B, the NTP client, at 1.0.1.12/24.)

Procedure

1. Assign an IP address to each interface, and make sure Device A and Device B can reach each
other, as shown in Figure 52. (Details not shown.)
2. Configure Device A:
# Enable the NTP service.
<DeviceA> system-view
[DeviceA] ntp-service enable
# Specify the local clock as the reference source, with stratum level 2.
[DeviceA] ntp-service refclock-master 2
3. Configure Device B:
# Enable the NTP service.
<DeviceB> system-view
[DeviceB] ntp-service enable
# Specify NTP for obtaining the time.
[DeviceB] clock protocol ntp
# Enable NTP authentication on Device B.
[DeviceB] ntp-service authentication enable
# Create a plaintext authentication key, with key ID 42 and key value aNiceKey.
[DeviceB] ntp-service authentication-keyid 42 authentication-mode md5 simple
aNiceKey
# Specify the key as a trusted key.
[DeviceB] ntp-service reliable authentication-keyid 42
# Specify Device A as the NTP server of Device B, and associate the server with key 42.
[DeviceB] ntp-service unicast-server 1.0.1.11 authentication-keyid 42
To enable Device B to synchronize its clock with Device A, enable NTP authentication on
Device A.
4. Configure NTP authentication on Device A:
# Enable NTP authentication.
[DeviceA] ntp-service authentication enable
# Create a plaintext authentication key, with key ID 42 and key value aNiceKey.
[DeviceA] ntp-service authentication-keyid 42 authentication-mode md5 simple
aNiceKey
# Specify the key as a trusted key.
[DeviceA] ntp-service reliable authentication-keyid 42

Verifying the configuration


# Verify that Device B has synchronized its time with Device A, and the clock stratum level of Device
B is 3.
[DeviceB] display ntp-service status
Clock status: synchronized
Clock stratum: 3

System peer: 1.0.1.11
Local mode: client
Reference clock ID: 1.0.1.11
Leap indicator: 00
Clock jitter: 0.005096 s
Stability: 0.000 pps
Clock precision: 2^-19
Root delay: 0.00655 ms
Root dispersion: 1.15869 ms
Reference time: d0c62687.ab1bba7d Wed, Dec 29 2019 21:28:39.668
System poll interval: 64 s

# Verify that an IPv4 NTP association has been established between Device B and Device A.
[DeviceB] display ntp-service sessions
source reference stra reach poll now offset delay disper
********************************************************************************
[1245]1.0.1.11 127.127.1.0 2 1 64 519 -0.0 0.0065 0.0
Notes: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured.
Total sessions: 1

Example: Configuring NTP authentication in broadcast association mode

Network configuration
As shown in Figure 53, configure Switch C as the NTP server for multiple devices on the same
network segment so that these devices synchronize the time with Switch C. Configure Switch A and
Switch B to authenticate the NTP server.
• Configure Switch C's local clock as its reference source, with stratum level 3.
• Configure Switch C to operate in broadcast server mode and send broadcast messages from
VLAN-interface 2.
• Configure Switch A and Switch B to operate in broadcast client mode and receive broadcast
messages on VLAN-interface 2.
• Enable NTP authentication on Switch A, Switch B, and Switch C.
Figure 53 Network diagram
(Switch C, the NTP broadcast server, at VLAN-interface 2, 3.0.1.31/24; Switch A, an NTP broadcast client, at VLAN-interface 2, 3.0.1.30/24; Switch B, an NTP broadcast client, at VLAN-interface 2, 3.0.1.32/24.)

Procedure

1. Assign an IP address to each interface, and make sure Switch A, Switch B, and Switch C can
reach each other, as shown in Figure 53. (Details not shown.)
2. Configure Switch A:
# Enable the NTP service.
<SwitchA> system-view
[SwitchA] ntp-service enable
# Specify NTP for obtaining the time.
[SwitchA] clock protocol ntp
# Enable NTP authentication on Switch A. Create a plaintext NTP authentication key, with key
ID of 88 and key value of 123456. Specify it as a trusted key.
[SwitchA] ntp-service authentication enable
[SwitchA] ntp-service authentication-keyid 88 authentication-mode md5 simple 123456
[SwitchA] ntp-service reliable authentication-keyid 88
# Configure Switch A to operate in NTP broadcast client mode and receive NTP broadcast
messages on VLAN-interface 2.
[SwitchA] interface vlan-interface 2
[SwitchA-Vlan-interface2] ntp-service broadcast-client
3. Configure Switch B:
# Enable the NTP service.
<SwitchB> system-view
[SwitchB] ntp-service enable
# Specify NTP for obtaining the time.
[SwitchB] clock protocol ntp
# Enable NTP authentication on Switch B. Create a plaintext NTP authentication key, with key
ID of 88 and key value of 123456. Specify it as a trusted key.
[SwitchB] ntp-service authentication enable
[SwitchB] ntp-service authentication-keyid 88 authentication-mode md5 simple 123456
[SwitchB] ntp-service reliable authentication-keyid 88
# Configure Switch B to operate in broadcast client mode and receive NTP broadcast
messages on VLAN-interface 2.
[SwitchB] interface vlan-interface 2
[SwitchB-Vlan-interface2] ntp-service broadcast-client
4. Configure Switch C:
# Enable the NTP service.
<SwitchC> system-view
[SwitchC] ntp-service enable
# Specify NTP for obtaining the time.
[SwitchC] clock protocol ntp
# Specify the local clock as the reference source, with stratum level 3.
[SwitchC] ntp-service refclock-master 3
# Configure Switch C to operate in NTP broadcast server mode and use VLAN-interface 2 to
send NTP broadcast packets.
[SwitchC] interface vlan-interface 2
[SwitchC-Vlan-interface2] ntp-service broadcast-server
[SwitchC-Vlan-interface2] quit
5. Verify the configuration:

NTP authentication is enabled on Switch A and Switch B, but not on Switch C, so Switch A and
Switch B cannot synchronize their local clocks to Switch C.
[SwitchB-Vlan-interface2] display ntp-service status
Clock status: unsynchronized
Clock stratum: 16
Reference clock ID: none
6. Enable NTP authentication on Switch C:
# Enable NTP authentication on Switch C. Create a plaintext NTP authentication key, with key
ID of 88 and key value of 123456. Specify it as a trusted key.
[SwitchC] ntp-service authentication enable
[SwitchC] ntp-service authentication-keyid 88 authentication-mode md5 simple 123456
[SwitchC] ntp-service reliable authentication-keyid 88
# Specify Switch C as an NTP broadcast server, and associate key 88 with Switch C.
[SwitchC] interface vlan-interface 2
[SwitchC-Vlan-interface2] ntp-service broadcast-server authentication-keyid 88

Verifying the configuration


# Verify that Switch B has synchronized its time with Switch C, and the clock stratum level of
Switch B is 4.
[SwitchB-Vlan-interface2] display ntp-service status
Clock status: synchronized
Clock stratum: 4
System peer: 3.0.1.31
Local mode: bclient
Reference clock ID: 3.0.1.31
Leap indicator: 00
Clock jitter: 0.006683 s
Stability: 0.000 pps
Clock precision: 2^-19
Root delay: 0.00127 ms
Root dispersion: 2.89877 ms
Reference time: d0d287a7.3119666f Sat, Jan 8 2019 6:50:15.191
System poll interval: 64 s
# Verify that an IPv4 NTP association has been established between Switch B and Switch C.
[SwitchB-Vlan-interface2] display ntp-service sessions
source reference stra reach poll now offset delay disper
********************************************************************************
[1245]3.0.1.31 127.127.1.0 3 3 64 68 -0.0 0.0000 0.0
Notes: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured.
Total sessions: 1

Configuring SNTP
About SNTP
SNTP is a simplified, client-only version of NTP specified in RFC 4330. It uses the same packet
format and packet exchange procedure as NTP, but provides faster synchronization at the price of
time accuracy.

SNTP working mode


SNTP supports only the client/server mode. An SNTP-enabled device can receive time from NTP
servers, but cannot provide time services to other devices.
If you specify multiple NTP servers for an SNTP client, the server with the best stratum is selected. If
multiple servers are at the same stratum, the NTP server whose time packet is first received is
selected.

Protocols and standards


RFC 4330, Simple Network Time Protocol (SNTP) Version 4 for IPv4, IPv6 and OSI

Restrictions and guidelines: SNTP configuration


When you configure SNTP, follow these restrictions and guidelines:
• You cannot configure both NTP and SNTP on the same device.
• To use NTP for time synchronization, you must use the clock protocol command to specify
NTP for obtaining the time. For more information about the clock protocol command, see
device management commands in Fundamentals Configuration Guide.

SNTP tasks at a glance


To configure SNTP, perform the following tasks:
1. Enabling the SNTP service
2. Specifying an NTP server for the device
3. (Optional.) Configuring SNTP authentication
4. (Optional.) Specifying the SNTP time-offset thresholds for log and trap outputs

Enabling the SNTP service


Restrictions and guidelines
The NTP service and SNTP service are mutually exclusive. Before you enable SNTP, make sure
NTP is disabled.
Procedure
1. Enter system view.
system-view

2. Enable the SNTP service.
sntp enable
By default, the SNTP service is disabled.

Specifying an NTP server for the device


Restrictions and guidelines
To use an NTP server as the time source, make sure its clock has been synchronized. If the stratum
level of the NTP server is greater than or equal to that of the client, the client does not synchronize
with the NTP server.
Procedure
1. Enter system view.
system-view
2. Specify an NTP server for the device.
IPv4:
sntp unicast-server { server-name | ip-address } [ vpn-instance
vpn-instance-name ] [ authentication-keyid keyid | source
interface-type interface-number | version number ] *
IPv6:
sntp ipv6 unicast-server { server-name | ipv6-address } [ vpn-instance
vpn-instance-name ] [ authentication-keyid keyid | source
interface-type interface-number ] *
By default, no NTP server is specified for the device.
You can specify multiple NTP servers for the client by repeating this step.
To perform authentication, you need to specify the authentication-keyid keyid option.
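For example, the following commands enable SNTP and specify an IPv4 NTP server for the client. The device name Sysname and the server address 1.0.1.11 are illustrative only; substitute values appropriate for your network.
# Enable the SNTP service, specify NTP for obtaining the time, and specify the NTP server.
<Sysname> system-view
[Sysname] sntp enable
[Sysname] clock protocol ntp
[Sysname] sntp unicast-server 1.0.1.11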

Configuring SNTP authentication


About this task
SNTP authentication ensures that an SNTP client is synchronized only to an authenticated
trustworthy NTP server.
Restrictions and guidelines
Enable authentication on both the NTP server and the SNTP client.
Use the same authentication key ID, algorithm, and key on the NTP server and SNTP client. Specify
the key as a trusted key on both the NTP server and the SNTP client. For information about
configuring NTP authentication on an NTP server, see "Configuring NTP."
On the SNTP client, associate the specified key with the NTP server. Make sure the server is allowed
to use the key ID for authentication on the client.
With authentication disabled, the SNTP client can synchronize with the NTP server regardless of
whether the NTP server has authentication enabled.
Procedure
1. Enter system view.
system-view
2. Enable SNTP authentication.
sntp authentication enable

By default, SNTP authentication is disabled.
3. Configure an SNTP authentication key.
sntp authentication-keyid keyid authentication-mode { hmac-sha-1 |
hmac-sha-256 | hmac-sha-384 | hmac-sha-512 | md5 } { cipher | simple } string
[ acl ipv4-acl-number | ipv6 acl ipv6-acl-number ] *
By default, no SNTP authentication key exists.
4. Specify the key as a trusted key.
sntp reliable authentication-keyid keyid
By default, no trusted key is specified.
5. Associate the SNTP authentication key with an NTP server.
IPv4:
sntp unicast-server { server-name | ip-address } [ vpn-instance
vpn-instance-name ] authentication-keyid keyid
IPv6:
sntp ipv6 unicast-server { server-name | ipv6-address } [ vpn-instance
vpn-instance-name ] authentication-keyid keyid
By default, no NTP server is specified.

Specifying the SNTP time-offset thresholds for log and trap outputs

About this task
By default, the system synchronizes the SNTP client's time to the server and outputs a log and a trap
when the time offset exceeds 128 ms multiple times in succession.
After you set the SNTP time-offset thresholds for log and trap outputs, the system still synchronizes the
client's time to the server when the time offset exceeds 128 ms multiple times in succession, but it outputs
a log or a trap only when the time offset exceeds the corresponding threshold.
Procedure
1. Enter system view.
system-view
2. Specify the SNTP time-offset thresholds for log and trap outputs.
sntp time-offset-threshold { log log-threshold | trap trap-threshold }
*
By default, no SNTP time-offset thresholds are set for log and trap outputs.
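As an illustration only, the following commands set the log threshold to 500 and the trap threshold to 1000. The values, and the assumption that they are entered in milliseconds, are examples to be checked against the ranges supported by your software version.
# Set the SNTP time-offset thresholds for log and trap outputs (example values).
<Sysname> system-view
[Sysname] sntp time-offset-threshold log 500 trap 1000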

Display and maintenance commands for SNTP


Execute display commands in any view.

Task Command
Display information about all IPv6 SNTP associations. display sntp ipv6 sessions
Display information about all IPv4 SNTP associations. display sntp sessions

SNTP configuration examples
Example: Configuring SNTP
Network configuration
As shown in Figure 54, perform the following tasks:
• Configure Device A's local clock as its reference source, with stratum level 2.
• Configure Device B to operate in SNTP client mode, and specify Device A as the NTP server.
• Configure NTP authentication on Device A and SNTP authentication on Device B.
Figure 54 Network diagram
(Device A, the NTP server, at 1.0.1.11/24; Device B, the SNTP client, at 1.0.1.12/24.)

Procedure

1. Assign an IP address to each interface, and make sure Device A and Device B can reach each
other, as shown in Figure 54. (Details not shown.)
2. Configure Device A:
# Enable the NTP service.
<DeviceA> system-view
[DeviceA] ntp-service enable
# Specify NTP for obtaining the time.
[DeviceA] clock protocol ntp
# Configure the local clock as the reference source, with stratum level 2.
[DeviceA] ntp-service refclock-master 2
# Enable NTP authentication on Device A.
[DeviceA] ntp-service authentication enable
# Configure a plaintext NTP authentication key, with key ID of 10 and key value of aNiceKey.
[DeviceA] ntp-service authentication-keyid 10 authentication-mode md5 simple
aNiceKey
# Specify the key as a trusted key.
[DeviceA] ntp-service reliable authentication-keyid 10
3. Configure Device B:
# Enable the SNTP service.
<DeviceB> system-view
[DeviceB] sntp enable
# Specify NTP for obtaining the time.
[DeviceB] clock protocol ntp
# Enable SNTP authentication on Device B.
[DeviceB] sntp authentication enable
# Configure a plaintext authentication key, with key ID of 10 and key value of aNiceKey.
[DeviceB] sntp authentication-keyid 10 authentication-mode md5 simple aNiceKey
# Specify the key as a trusted key.

[DeviceB] sntp reliable authentication-keyid 10
# Specify Device A as the NTP server of Device B, and associate the server with key 10.
[DeviceB] sntp unicast-server 1.0.1.11 authentication-keyid 10

Verifying the configuration


# Verify that an SNTP association has been established between Device B and Device A, and
Device B has synchronized its time with Device A.
[DeviceB] display sntp sessions
NTP server Stratum Version Last receive time
1.0.1.11 2 4 Tue, May 17 2019 9:11:20.833 (Synced)

Configuring PoE
About PoE
Power over Ethernet (PoE) enables a network device to supply power to terminals over twisted pair
cables.

PoE system
As shown in Figure 55, a PoE system includes the following elements:
• PoE power supply—A PoE power supply provides power for the entire PoE system.
• PSE—A power sourcing equipment (PSE) supplies power to PDs. PSE devices are classified
into single-PSE devices and multiple-PSE devices.
- A single-PSE device has only one piece of PSE firmware.
- A multiple-PSE device has multiple PSEs. A multiple-PSE device uses PSE IDs to identify
different PSEs. To view the mapping between the ID and slot number of a PSE, execute the
display poe device command.
• PI—A power interface (PI) is a PoE-capable Ethernet interface on a PSE.
• PD—A powered device (PD) receives power from a PSE. PDs include IP telephones, APs,
portable chargers, POS terminals, and Web cameras. You can also connect a PD to a
redundant power source for reliability.
Figure 55 PoE system diagram
(The PoE power supply powers the PSE. The PSE supplies power through PI A, PI B, and PI C to PD A, PD B, and PD C, respectively.)

AI-driven PoE
AI-driven PoE innovatively integrates AI technologies into PoE switches and offers the following
benefits:
• Adaptive power supply management
AI-driven PoE can adaptively adjust the power supply parameters to fit the power needs of various
scenarios, such as multiple types of PDs, and can address issues such as no power supply to a PD
and power failures, minimizing human intervention.
• Priority-based power management

When the power demanded by PDs exceeds the power that can be supplied by the PoE switch,
the system supplies power to PDs based on the PI priorities to ensure power supply to critical
services and reduce power supply to lower-priority PIs.
• Smart power module management
For a PoE switch with a dual power module architecture, AI-driven PoE can automatically
calculate and regulate power output of each power module based on the type and quality of the
power modules, maximizing the power usage of each power module. When a power module is
removed, AI-driven PoE recalculates to preferentially guarantee power supply to PDs that are
being powered.
• Alarms and logs
AI-driven PoE constantly monitors the PoE power supply status. If an exception occurs, it
automatically analyzes, recovers, or even resets the PoE function, and outputs alarms and logs
for reference.
• High PoE security
When an exception such as short circuit or circuit break occurs, AI-driven PoE immediately
starts self-protection to protect the PoE switch from being damaged or burned.
• Fast reboot
When the PoE switch restarts after a power failure, AI-driven PoE starts supplying power to PDs
before the switch completes startup, shortening the time PDs take to recover from the power
failure.
• Perpetual PoE
During a hot reboot of the PoE switch from the reboot command, AI-driven PoE continuously
monitors the PD states and ensures continued power supply to PDs, maintaining terminal
service stability.
• Remote PoE
AI-driven PoE adaptively adjusts the network bandwidth based on the transmission distance
between a PSE and PD. When the transmission distance exceeds 100 m (328.08 ft), the
system automatically reduces the port rate to reduce the line loss and ensure the signal and
power transmission quality. With AI-driven PoE enabled, a PSE can transmit power to a PD of
200 m (656.17 ft) to 250 m (820.21 ft) away.

Protocols and standards


• 802.3af-2003, IEEE Standard for Information Technology - Telecommunications and
Information Exchange Between Systems - Local and Metropolitan Area Networks - Specific
Requirements - Part 3: Carrier Sense Multiple Access with Collision Detection (CSMA/CD)
Access Method and Physical Layer Specifications - Data Terminal Equipment (DTE) Power Via
Media Dependent Interface (MDI)
• 802.3at-2009, IEEE Standard for Information technology-- Local and metropolitan area
networks-- Specific requirements-- Part 3: CSMA/CD Access Method and Physical Layer
Specifications Amendment 3: Data Terminal Equipment (DTE) Power via the Media Dependent
Interface (MDI) Enhancements

Restrictions: Hardware compatibility with PoE


Only the PoE models support the PoE feature.

Restrictions and guidelines: PoE configuration


You can configure a PI through either of the following ways:

144
• Configure the settings directly on the PI.
• Configure a PoE profile and apply it to the PI. If you apply a PoE profile to multiple PIs, these PIs
have the same PoE features. If you connect a PD to another PI, you can apply the PoE profile of
the original PI to the new PI. This method relieves the task of configuring PoE on the new PI.
You can only use one way to configure a parameter for a PI. To use the other way to reconfigure a
parameter, you must first remove the original configuration.
You must use the same configuration method for the poe max-power max-power and poe
priority { critical | high | low } commands.

PoE configuration tasks at a glance


To configure PoE, perform the following tasks:
1. Enabling PoE for a PI
2. (Optional.) Enabling AI-driven PoE
3. (Optional.) Configuring PoE delay
4. (Optional.) Disabling PoE on shutdown interfaces
5. (Optional.) Enabling nonstandard PD detection
6. (Optional.) Configuring the PoE power
- Configuring the maximum PSE power
- Configuring the maximum PI power
7. (Optional.) Configuring the PI priority policy
8. (Optional.) Configuring PSE power monitoring
9. (Optional.) Configuring a PI by using a PoE profile
To use a PoE profile to enable PoE and configure the priority and maximum power for a PI, see
"Configuring a PI by using a PoE profile."
10. (Optional.) Upgrading PSE firmware in service
11. (Optional.) Enabling forced power supply

Prerequisites for configuring PoE


Before you configure PoE, make sure the PoE power supply and PSEs are operating correctly.

Enabling PoE for a PI


About this task
After you enable PoE for a PI, the PI supplies power to the connected PD if the PI will not result in
PSE power overload. PSE overload occurs when the sum of the power consumption of all PIs
exceeds the maximum power of the PSE.
If the PI will result in PSE power overload, the following restrictions apply:
• If the PI priority policy is not enabled, the PI does not supply power to the connected PD.
• If the PI priority policy is enabled, whether the PDs can be powered depends on the priority of
the PI.
For more information about the PI priority policy, see "Configuring the PI priority policy."
Power can be transmitted over a twisted pair cable in the following modes:

• Signal pair mode—Signal pairs (on pins 1, 2, 3, and 6) of the twisted pair cable are used for
power transmission.
• Spare pair mode—Spare pairs (on pins 4, 5, 7, and 8) of the twisted pair cable are used for
power transmission.
Restrictions and guidelines
A PI can supply power to a PD only when the PI and PD use the same power transmission mode.
Procedure
1. Enter system view.
system-view
2. Enter PI view.
interface interface-type interface-number
3. (Optional.) Configure a description for the PD connected to the PI.
poe pd-description text
By default, no description is configured for the PD connected to the PI.
4. Enable PoE for the PI.
poe enable
By default, PoE is disabled on a PI.
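For example, the following commands enable PoE on GigabitEthernet 1/0/1 and describe the attached PD. The interface and the description text are illustrative.
# Enable PoE on GigabitEthernet 1/0/1 and describe the connected PD.
<Sysname> system-view
[Sysname] interface gigabitethernet 1/0/1
[Sysname-GigabitEthernet1/0/1] poe pd-description IP-phone-A
[Sysname-GigabitEthernet1/0/1] poe enable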

Enabling AI-driven PoE


Restrictions and guidelines
AI-driven PoE takes effect only on PSEs that run firmware V147 or later.
• Firmware earlier than V147—You must use the poe update command to upgrade the PSE
firmware and then enable AI-driven PoE on it.
• Firmware V147 or later—You do not need to re-enable AI-driven PoE after upgrading the
firmware.
Procedure
1. Enter system view.
system-view
2. Enable AI-driven PoE.
poe ai enable
By default, AI-driven PoE is disabled.
AI-driven PoE automatically adjusts the power supply parameters to fit the power needs. If you
disable AI-driven PoE, the system reverts the parameters to the settings before the adjustment.

Configuring PoE delay


About this task
The device executes the poe enable command on an interface when any one of the following
conditions is met:
• The poe enable command is configured on the interface.
• The device reboots with the poe enable command in the configuration file.
• The interface comes up and the PoE module resumes PoE power supply to the interface.

By default, the PoE module supplies power to an interface immediately after the poe enable
command is executed. This task creates a PoE delay timer after the poe enable command is
executed and allows the PoE module to supply power to the PI only after the timer expires.
Restrictions and guidelines
The undo poe enable command is executed immediately upon configuration and is not affected
by this task.
Procedure
1. Enter system view.
system-view
2. Enable PoE delay.
poe power-delay time
By default, PoE delay is disabled.

Disabling PoE on shutdown interfaces


About this task
By default, the device continues power supply to an interface after the interface is shut down by the
shutdown command or by an upper layer module such as monitor link. As a result, the PD
connected to the shutdown interface operates continuously but fails to access the network.
This task disables the PoE module from supplying power to an interface after the interface is shut
down. After the interface comes up or you disable this task, the PoE module resumes power supply
to the interface.
Restrictions and guidelines
The command does not power off an interface that has been shut down but is supplying power to a
PD.
Procedure
1. Enter system view.
system-view
2. Disable PoE on shutdown interfaces.
poe track-shutdown
By default, PoE is not disabled on shutdown interfaces.

Enabling nonstandard PD detection


About this task
PDs are classified into standard PDs and nonstandard PDs. Standard PDs are compliant with IEEE
802.3af and IEEE 802.3at. A PSE supplies power to a nonstandard PD only after nonstandard PD
detection is enabled.
The device supports PSE-based and PI-based nonstandard PD detection. Enabling nonstandard PD
detection for a PSE enables this feature for all PIs on the PSE. As a best practice for disabling
nonstandard PD detection for all PIs successfully in one operation, disable this feature in both
system view and interface view.
Procedure
1. Enter system view.
system-view

2. Enable nonstandard PD detection. Choose one option as needed.
- Enable nonstandard PD detection for the PSE.
poe legacy enable pse pse-id
By default, nonstandard PD detection is disabled for a PSE.
- Execute the following commands in sequence to enable nonstandard PD detection for a PI:
interface interface-type interface-number
poe legacy enable
By default, nonstandard PD detection is disabled for a PI.
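For example, the following commands enable nonstandard PD detection for PSE 1 and, separately, for the PI GigabitEthernet 1/0/2. The PSE ID and interface are illustrative; use the display poe device command to identify the PSE IDs on your device.
# Enable nonstandard PD detection for PSE 1.
<Sysname> system-view
[Sysname] poe legacy enable pse 1
# Enable nonstandard PD detection for a single PI.
[Sysname] interface gigabitethernet 1/0/2
[Sysname-GigabitEthernet1/0/2] poe legacy enable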

Configuring the PoE power


Configuring the maximum PSE power
About this task
The maximum power of a PSE is the maximum power that the PSE can provide to all its attached
PDs.
Restrictions and guidelines
To avoid PoE power overload, make sure the total power of all PSEs is less than the maximum PoE
power.
The maximum power of the PSE must be greater than or equal to the total maximum power of all its
PIs of critical priority.
Procedure
1. Enter system view.
system-view
2. Configure the maximum power for a PSE.
poe pse pse-id max-power max-power
The default maximum power of a PSE varies by PoE power supply.

Configuring the maximum PI power


About this task
The maximum PI power is the maximum power that a PI can provide to the connected PD. If the PD
requires more power than the maximum PI power, the PI does not supply power to the PD.
Procedure
1. Enter system view.
system-view
2. Enter PI view.
interface interface-type interface-number
3. Configure the maximum power for the PI.
poe max-power max-power
By default, the maximum PI power is 30000 milliwatts.
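For example, the following commands limit the power supplied on GigabitEthernet 1/0/5 to 9000 milliwatts. The interface and value are illustrative.
# Set the maximum power of GigabitEthernet 1/0/5 to 9000 milliwatts.
<Sysname> system-view
[Sysname] interface gigabitethernet 1/0/5
[Sysname-GigabitEthernet1/0/5] poe max-power 9000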

Configuring the PI priority policy
About this task
The PI priority policy enables the PSE to perform priority-based power allocation to PIs when PSE
power overload occurs. The priority levels for PIs are critical, high, and low in descending order.
When PSE power overload occurs, the PSE supplies power to PDs as follows:
• If the PI priority policy is disabled, the PSE supplies power to PDs depending on whether you
have configured the maximum PSE power.
- If you have configured the maximum PSE power, the PSE does not supply power to the
newly-added or existing PD that causes PSE power overload.
- If you have not configured the maximum PSE power, the PoE self-protection mechanism is
triggered. The PSE stops supplying power to all PDs.
• If the PI priority policy is enabled, the PSE supplies power to PDs as follows:
- If a PD being powered causes PSE power overload, the PSE stops supplying power to the
PD.
- If a newly-added PD causes PSE power overload, the PSE supplies power to PDs in
descending priority order of the PIs to which they are connected. If the newly-added PD and a PD
being powered have the same priority, the PD being powered takes precedence. If multiple
PIs being powered have the same priority, the PIs with smaller IDs take precedence.
Restrictions and guidelines
Before you configure a PI with critical priority, make sure the maximum PSE power minus the total
maximum power of the existing critical-priority PIs is greater than the maximum power of the PI.
Configuration for a PI whose power is preempted remains unchanged.
Procedure
1. Enter system view.
system-view
2. Enable the PI priority policy.
poe pd-policy priority
By default, the PI priority policy is disabled.
3. Enter PI view.
interface interface-type interface-number
4. (Optional.) Configure a priority for the PI.
poe priority { critical | high | low }
By default, the priority for a PI is low.
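For example, the following commands enable the PI priority policy and assign critical priority to GigabitEthernet 1/0/1. The interface is illustrative.
# Enable the PI priority policy and set the priority of GigabitEthernet 1/0/1 to critical.
<Sysname> system-view
[Sysname] poe pd-policy priority
[Sysname] interface gigabitethernet 1/0/1
[Sysname-GigabitEthernet1/0/1] poe priority critical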

Configuring PSE power monitoring


About this task
The system monitors PSE power utilization and sends notification messages when PSE power
utilization exceeds or drops below the threshold. If PSE power utilization crosses the threshold
multiple times in succession, the system sends notification messages only for the first crossing. For
more information about the notification message, see "Configuring SNMP."
Procedure
1. Enter system view.

system-view
2. Configure a power alarm threshold for a PSE.
poe utilization-threshold value pse pse-id
By default, the power alarm threshold for a PSE is 80%.
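For example, the following commands set the power alarm threshold to 90% for PSE 1. The PSE ID and threshold value are illustrative.
# Set the power alarm threshold for PSE 1 to 90%.
<Sysname> system-view
[Sysname] poe utilization-threshold 90 pse 1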

Configuring a PI by using a PoE profile


Restrictions and guidelines
To modify a PoE profile applied on a PI, first remove the PoE profile from the PI.
You can configure a PI either on the PI or by using a PoE profile. The poe max-power max-power
and poe priority { critical | high | low } commands must be configured using the same
method.

Configuring a PoE profile


1. Enter system view.
system-view
2. Create a PoE profile and enter its view.
poe-profile profile-name [ index ]
By default, no PoE profiles exist.
3. Enable PoE.
poe enable
By default, PoE is disabled.
4. (Optional.) Configure the maximum PI power.
poe max-power max-power
By default, the maximum PI power is 30000 milliwatts.
5. (Optional.) Configure a PI priority.
poe priority { critical | high | low }
The default priority is low.
This command takes effect only after the PI priority policy is enabled.

Applying a PoE profile


Restrictions and guidelines
You can apply a PoE profile to multiple PIs in system view or a single PI in PI view. If you perform the
operation in both views for the same PI, the most recent operation takes effect.
You can apply only one PoE profile to a PI.
Applying a PoE profile in system view
1. Enter system view.
system-view
2. Apply a PoE profile to PIs.
apply poe-profile { index index | name profile-name } interface
interface-range

By default, a PoE profile is not applied to a PI.
Applying a PoE profile in PI view
1. Enter system view.
system-view
2. Enter PI view.
interface interface-type interface-number
3. Apply the PoE profile to the interface.
apply poe-profile { index index | name profile-name }
By default, a PoE profile is not applied to a PI.
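As a brief illustration (the profile name, values, and interface are examples only, and the exact PoE profile view prompt might differ on your device), the following commands create a PoE profile that enables PoE with high priority and a 15000-milliwatt power limit, and then apply it to a PI:
# Create PoE profile office-pd and configure it.
<Sysname> system-view
[Sysname] poe-profile office-pd
[Sysname-poe-profile-office-pd-1] poe enable
[Sysname-poe-profile-office-pd-1] poe max-power 15000
[Sysname-poe-profile-office-pd-1] poe priority high
[Sysname-poe-profile-office-pd-1] quit
# Apply the profile to GigabitEthernet 1/0/1 in system view.
[Sysname] apply poe-profile name office-pd interface gigabitethernet 1/0/1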

Upgrading PSE firmware in service


About this task
You can upgrade the PSE firmware in service in the following modes:
• Refresh mode—Updates the PSE firmware without deleting it. You can use the refresh mode
in most cases.
• Full mode—Deletes the current PSE firmware and reloads a new one. Use the full mode if the
PSE firmware is damaged and you cannot execute any PoE commands.
Restrictions and guidelines
If the PSE firmware upgrade fails because of interruption such as a device reboot, you can restart the
device and upgrade it in full mode again. After the upgrade, restart the device manually for the new
PSE firmware to take effect.
Procedure
1. Enter system view.
system-view
2. Upgrade the PSE firmware in service.
poe update { full | refresh } filename [ pse pse-id ]
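For example, assuming the new firmware file has already been transferred to the device, a refresh-mode upgrade of PSE 1 might look as follows. The file name and PSE ID are placeholders only.
# Upgrade the firmware of PSE 1 in refresh mode (file name is a placeholder).
<Sysname> system-view
[Sysname] poe update refresh pse_firmware.bin pse 1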

Enabling forced power supply


About this task
Before supplying power to a PD, the device performs a detection of the PD. It supplies power to the
PD only after the PD passes the detection. If the PD fails the detection but the power provided by the
device meets the PD specifications, you can configure this task to enable forced power supply to the
PD.
Restrictions and guidelines
This task enables the device to supply power to a PD directly without performing a detection of the
PD. To avoid damaging the PD, make the power provided by the device meets the PD specifications
before configuring this task.
Procedure
1. Enter system view.
system-view
2. Enter PI view.
interface interface-type interface-number

3. Enable forced power supply.
poe force-power
By default, forced power supply is disabled.

Display and maintenance commands for PoE


Execute display commands in any view.

• Display general PSE information: display poe device [ slot slot-number ]
• Display the power supplying information for the specified PI: display poe interface [ interface-type interface-number ]
• Display power information for PIs: display poe interface power [ interface-type interface-number ]
• Display detailed PSE information: display poe pse [ pse-id ]
• Display the power supplying information for all PIs on a PSE: display poe pse pse-id interface
• Display power information for all PIs on a PSE: display poe pse pse-id interface power
• Display all information about the PoE profile: display poe-profile [ index index | name profile-name ]
• Display all information about the PoE profile applied to the specified PI: display poe-profile interface interface-type interface-number

PoE configuration examples


Example: Configuring PoE
Network configuration
As shown in Figure 56, configure PoE to meet the following requirements:
• The device supplies power to IP phones and APs.
• IP phones take precedence to receive power when power overload occurs.
• Power supplied to AP B does not exceed 9000 milliwatts.
Figure 56 Network diagram
(The PSE connects to IP phone A, IP phone B, and IP phone C through GigabitEthernet 1/0/1 through GigabitEthernet 1/0/3, to AP A through GigabitEthernet 1/0/4, and to AP B through GigabitEthernet 1/0/5.)

Procedure
# Enable the PI priority policy.
<PSE> system-view
[PSE] poe pd-policy priority

# Enable PoE on GigabitEthernet 1/0/1, GigabitEthernet 1/0/2, and GigabitEthernet 1/0/3, and
configure their power supply priority as critical.
[PSE] interface gigabitethernet 1/0/1
[PSE-GigabitEthernet1/0/1] poe enable
[PSE-GigabitEthernet1/0/1] poe priority critical
[PSE-GigabitEthernet1/0/1] quit
[PSE] interface gigabitethernet 1/0/2
[PSE-GigabitEthernet1/0/2] poe enable
[PSE-GigabitEthernet1/0/2] poe priority critical
[PSE-GigabitEthernet1/0/2] quit
[PSE] interface gigabitethernet 1/0/3
[PSE-GigabitEthernet1/0/3] poe enable
[PSE-GigabitEthernet1/0/3] poe priority critical
[PSE-GigabitEthernet1/0/3] quit

# Enable PoE on GigabitEthernet 1/0/4 and GigabitEthernet 1/0/5, and set the maximum power of
GigabitEthernet 1/0/5 to 9000 milliwatts.
[PSE] interface gigabitethernet 1/0/4
[PSE-GigabitEthernet1/0/4] poe enable
[PSE-GigabitEthernet1/0/4] quit
[PSE] interface gigabitethernet 1/0/5
[PSE-GigabitEthernet1/0/5] poe enable
[PSE-GigabitEthernet1/0/5] poe max-power 9000

Verifying the configuration


# Verify that the device is supplying power correctly to the IP phones and APs, and the IP phones and
APs are operating correctly.

Troubleshooting PoE
Failure to set the priority of a PI to critical
Symptom
Power supply priority configuration for a PI failed.
Solution
To resolve the issue:
1. Identify whether the remaining guaranteed power of the PSE is lower than the maximum power
of the PI. If it is, increase the maximum PSE power or reduce the maximum power of the PI.
2. Identify whether the priority has been configured through other methods. If the priority has been
configured, remove the configuration.
3. If the issue persists, contact Hewlett Packard Enterprise Support.

Failure to apply a PoE profile to a PI
Symptom
PoE profile application for a PI failed.
Solution
To resolve the issue:
1. Identify whether some settings in the PoE profile have been configured. If they have been
configured, remove the configuration.
2. Identify whether the settings in the PoE profile meet the requirements of the PI. If they do not,
modify the settings in the PoE profile.
3. Identify whether another PoE profile is already applied to the PI. If it is, remove the application.
4. If the issue persists, contact Hewlett Packard Enterprise Support.

Configuring SNMP
About SNMP
Simple Network Management Protocol (SNMP) is used for a management station to access and
operate the devices on a network, regardless of their vendors, physical characteristics, and
interconnect technologies.
SNMP enables network administrators to read and set the variables on managed devices for state
monitoring, troubleshooting, statistics collection, and other management purposes.

SNMP framework
The SNMP framework contains the following elements:
• SNMP manager—Works on an NMS to monitor and manage the SNMP-capable devices in the
network. It can get and set values of MIB objects on an agent.
• SNMP agent—Works on a managed device to receive and handle requests from the NMS, and
sends notifications to the NMS when events, such as an interface state change, occur.
• Management Information Base (MIB)—Specifies the variables (for example, interface status
and CPU usage) maintained by the SNMP agent for the SNMP manager to read and set.
Figure 57 Relationship between NMS, agent, and MIB
(The NMS performs Get/Set operations on the agent and receives traps and informs from the agent. The agent maintains the MIB.)

MIB and view-based MIB access control


A MIB stores variables called "nodes" or "objects" in a tree hierarchy and identifies each node with a
unique OID. An OID is a dotted numeric string that uniquely identifies the path from the root node to
a leaf node. For example, object B in Figure 58 is uniquely identified by the OID {1.2.1.1}.
Figure 58 MIB tree
(A tree of numbered nodes under the root node. For example, object B is on the branch identified by the OID {1.2.1.1}.)

A MIB view represents a set of MIB objects (or MIB object hierarchies) with certain access privileges
and is identified by a view name. The MIB objects included in the MIB view are accessible while
those excluded from the MIB view are inaccessible.
A MIB view can have multiple view records each identified by a view-name oid-tree pair.
You control access to the MIB by assigning MIB views to SNMP groups or communities.

SNMP operations
SNMP provides the following basic operations:
• Get—NMS retrieves the value of an object node in an agent MIB.
• Set—NMS modifies the value of an object node in an agent MIB.
• Notification—SNMP notifications include traps and informs. The SNMP agent sends traps or
informs to report events to the NMS. The difference between these two types of notification is
that informs require acknowledgment but traps do not. Informs are more reliable but are also
resource-consuming. Traps are available in SNMPv1, SNMPv2c, and SNMPv3. Informs are
available only in SNMPv2c and SNMPv3.

Protocol versions
The device supports SNMPv1, SNMPv2c, and SNMPv3 in non-FIPS mode and supports only
SNMPv3 in FIPS mode. An NMS and an SNMP agent must use the same SNMP version to
communicate with each other.
• SNMPv1—Uses community names for authentication. To access an SNMP agent, an NMS
must use the same community name as set on the SNMP agent. If the community name used
by the NMS differs from the community name set on the agent, the NMS cannot establish an
SNMP session to access the agent or receive traps from the agent.
• SNMPv2c—Uses community names for authentication. SNMPv2c is compatible with SNMPv1,
but supports more operation types, data types, and error codes.
• SNMPv3—Uses a user-based security model (USM) to secure SNMP communication. You can
configure authentication and privacy mechanisms to authenticate and encrypt SNMP packets
for integrity, authenticity, and confidentiality.

Access control modes


SNMP uses the following modes to control access to MIB objects:
• View-based Access Control Model—VACM mode controls access to MIB objects by
assigning MIB views to SNMP communities or users.
• Role based access control—RBAC mode controls access to MIB objects by assigning user
roles to SNMP communities or users.
- SNMP communities or users with predefined user role network-admin or level-15 have read
and write access to all MIB objects.
- SNMP communities or users with predefined user role network-operator have read-only
access to all MIB objects.
- SNMP communities or users with a user-defined user role have access rights to MIB objects
as specified by the rule command.
RBAC mode controls access on a per MIB object basis, and VACM mode controls access on a MIB
view basis. As a best practice to enhance MIB security, use the RBAC mode.
If you create the same SNMP community or user with both modes multiple times, the most recent
configuration takes effect. For more information about RBAC, see Fundamentals Command
Reference.

FIPS compliance
The device supports the FIPS mode that complies with NIST FIPS 140-2 requirements. Support for
features, commands, and parameters might differ in FIPS mode and non-FIPS mode. For more
information about FIPS mode, see Security Configuration Guide.

SNMP tasks at a glance
To configure SNMP, perform the following tasks:
1. (Optional.) Setting the interval at which the SNMP module examines the system configuration
for changes
2. Enabling the SNMP agent
3. Enabling SNMP versions
4. Configuring SNMP basic parameters
- (Optional.) Configuring SNMP common parameters
- Configuring an SNMPv1 or SNMPv2c community
- Configuring an SNMPv3 group and user
5. (Optional.) Configuring SNMP notifications
6. (Optional.) Configuring SNMP logging
7. (Optional.) Setting the interval at which the SNMP module examines the system configuration
for changes

Enabling the SNMP agent


Restrictions and guidelines
The SNMP agent is enabled when you use any command that begins with snmp-agent except for
the snmp-agent calculate-password command.
The SNMP agent will fail to be enabled when the port that the agent will listen on is used by another
service. You can use the snmp-agent port command to specify a listening port. To view the UDP
port use information, execute the display udp verbose command. For more information about
the display udp verbose command, see IP performance optimization commands in Layer 3—IP
Services Configuration Guide.
If you disable the SNMP agent, the SNMP settings do not take effect. The display
current-configuration command does not display the SNMP settings and the SNMP
settings will not be saved in the configuration file. For the SNMP settings to take effect, enable the
SNMP agent.
Procedure
1. Enter system view.
system-view
2. Enable the SNMP agent.
snmp-agent
By default, the SNMP agent is disabled.

Enabling SNMP versions


Restrictions and guidelines
The device supports SNMPv1, SNMPv2c, and SNMPv3 in non-FIPS mode and supports only
SNMPv3 in FIPS mode. An NMS and an SNMP agent must use the same SNMP version to
communicate with each other.
The community name and data carried in SNMPv1 and SNMPv2c messages are in plaintext form,
vulnerable to security risks. As a best practice, use SNMPv3.

To use SNMP notifications in IPv6, enable SNMPv2c or SNMPv3.
Procedure
1. Enter system view.
system-view
2. Enable SNMP versions.
In non-FIPS mode:
snmp-agent sys-info version { all | { v1 | v2c | v3 } * }
In FIPS mode:
snmp-agent sys-info version { all | v3 }
By default, SNMPv3 is enabled.
If you execute the command multiple times with different options, all the configurations take
effect, but only one SNMP version is used by the agent and NMS for communication.
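For example, the following commands enable the SNMP agent and configure it to use SNMPv3 only. The device name is illustrative.
# Enable the SNMP agent and enable SNMPv3.
<Sysname> system-view
[Sysname] snmp-agent
[Sysname] snmp-agent sys-info version v3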

Configuring SNMP common parameters


Restrictions and guidelines
An SNMP engine ID uniquely identifies a device in an SNMP managed network. Make sure the local
SNMP engine ID is unique within your SNMP managed network to avoid communication problems.
By default, the device is assigned a unique SNMP engine ID.
If you have configured SNMPv3 users, change the local SNMP engine ID only when necessary. The
change can void the SNMPv3 usernames and encrypted keys you have configured.
The SNMP agent will fail to be enabled when the port that the agent will listen on is used by another
service. You can use the snmp-agent port command to change the SNMP listening port. As a
best practice, execute the display udp verbose command to view the UDP port use information
before specifying a new SNMP listening port. For more information about the display udp
verbose command, see IP performance optimization commands in Layer 3—IP Services
Configuration Guide.
Procedure
1. Enter system view.
system-view
2. Specify an SNMP listening port.
snmp-agent port port-number
By default, the SNMP listening port is UDP port 161.
3. Set a local SNMP engine ID.
snmp-agent local-engineid engineid
By default, the local SNMP engine ID is the company ID plus the device ID.
4. Set an engine ID for a remote SNMP entity.
snmp-agent remote { ipv4-address | ipv6 ipv6-address } [ vpn-instance
vpn-instance-name ] engineid engineid
By default, no remote entity engine IDs exist.
This step is required for the device to send SNMPv3 notifications to a host, typically NMS.
5. Create or update a MIB view.
snmp-agent mib-view { excluded | included } view-name oid-tree [ mask
mask-value ]

By default, the MIB view ViewDefault is predefined. In this view, all the MIB objects in the iso
subtree are accessible except the snmpUsmMIB, snmpVacmMIB, and snmpModules.18
subtrees.
Each view-name oid-tree pair represents a view record. If you specify the same record with
different MIB sub-tree masks multiple times, the most recent configuration takes effect.
6. Configure the system management information.
- Configure the system contact.
snmp-agent sys-info contact sys-contact
By default, the system contact is Hewlett Packard Enterprise Company.
- Configure the system location.
snmp-agent sys-info location sys-location
By default, no system location is configured.
7. Create an SNMP context.
snmp-agent context context-name
By default, no SNMP contexts exist.
8. Configure the maximum SNMP packet size (in bytes) that the SNMP agent can handle.
snmp-agent packet max-size byte-count
By default, an SNMP agent can process SNMP packets with a maximum size of 1500 bytes.
9. Set the DSCP value for SNMP responses.
snmp-agent packet response dscp dscp-value
By default, the DSCP value for SNMP responses is 0.
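As a brief illustration, the following commands set the system contact and location and create a MIB view that includes the interfaces subtree (OID 1.3.6.1.2.1.2). The contact, location, and view name are examples only.
# Configure the system contact and location, and create MIB view ifview.
<Sysname> system-view
[Sysname] snmp-agent sys-info contact Admin-NOC
[Sysname] snmp-agent sys-info location Building3-Floor2
[Sysname] snmp-agent mib-view included ifview 1.3.6.1.2.1.2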

Configuring an SNMPv1 or SNMPv2c community


About configuring an SNMPv1 or SNMPv2c community
You can create an SNMPv1 or SNMPv2c community by using a community name or by creating an
SNMPv1 or SNMPv2c user. After you create an SNMPv1 or SNMPv2c user, the system
automatically creates a community by using the username as the community name.

Restrictions and guidelines for configuring an SNMPv1 or SNMPv2c community

SNMPv1 and SNMPv2c settings are not supported in FIPS mode.
Make sure the NMS and agent use the same SNMP community name.
Only users with the network-admin or level-15 user role can create SNMPv1 or SNMPv2c
communities, users, or groups. Users with other user roles cannot create SNMPv1 or SNMPv2c
communities, users, or groups even if these roles are granted access to related commands or
commands of the SNMPv1 or SNMPv2c feature.

Configuring an SNMPv1/v2c community by a community name

1. Enter system view.
system-view

2. Create an SNMPv1/v2c community. Choose one option as needed.
- In VACM mode:
snmp-agent community { read | write } [ simple | cipher ]
community-name [ mib-view view-name ] [ acl { ipv4-acl-number | name
ipv4-acl-name } | acl ipv6 { ipv6-acl-number | name ipv6-acl-name } ]
*
- In RBAC mode:
snmp-agent community [ simple | cipher ] community-name user-role
role-name [ acl { ipv4-acl-number | name ipv4-acl-name } | acl ipv6
{ ipv6-acl-number | name ipv6-acl-name } ] *
3. (Optional.) Map the SNMP community name to an SNMP context.
snmp-agent community-map community-name context context-name
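For example, the following command creates a read-only community in VACM mode. The community name readcom is an example only; as noted earlier, prefer SNMPv3 where possible.
# Create the read-only community readcom.
<Sysname> system-view
[Sysname] snmp-agent community read readcom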

Configuring an SNMPv1/v2c community by creating an SNMPv1/v2c user
1. Enter system view.
system-view
2. Create an SNMPv1/v2c group.
snmp-agent group { v1 | v2c } group-name [ notify-view view-name |
read-view view-name | write-view view-name ] * [ acl { ipv4-acl-number |
name ipv4-acl-name } | acl ipv6 { ipv6-acl-number | name
ipv6-acl-name } ] *
3. Add an SNMPv1/v2c user to the group.
snmp-agent usm-user { v1 | v2c } user-name group-name [ acl
{ ipv4-acl-number | name ipv4-acl-name } | acl ipv6 { ipv6-acl-number
| name ipv6-acl-name } ] *
The system automatically creates an SNMP community by using the username as the
community name.
4. (Optional.) Map the SNMP community name to an SNMP context.
snmp-agent community-map community-name context context-name
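A minimal sketch of this method is shown below. The group name v2cgroup and username v2cuser are example values; the community that the system automatically creates is named v2cuser:
<Sysname> system-view
# Create SNMPv2c group v2cgroup.
[Sysname] snmp-agent group v2c v2cgroup
# Add user v2cuser to the group. The system automatically creates community v2cuser.
[Sysname] snmp-agent usm-user v2c v2cuser v2cgroup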

Configuring an SNMPv3 group and user


Restrictions and guidelines for configuring an SNMPv3 group and user
Only users with the network-admin or level-15 user role can create SNMPv3 users or groups. Users
with other user roles cannot create SNMPv3 users or groups even if these roles are granted access
to related commands or commands of the SNMPv3 feature.
SNMPv3 users are managed in groups. All SNMPv3 users in a group share the same security model,
but can use different authentication and encryption algorithms and keys. Table 7 describes the basic
configuration requirements for different security models.

Table 7 Basic configuration requirements for different security models

Security model: Authentication with privacy
• Keyword for the group: privacy
• Parameters for the user: Authentication and encryption algorithms and keys
• Remarks: For an NMS to access the agent, make sure the NMS and agent use the same authentication and encryption keys.

Security model: Authentication without privacy
• Keyword for the group: authentication
• Parameters for the user: Authentication algorithm and key
• Remarks: For an NMS to access the agent, make sure the NMS and agent use the same authentication key.

Security model: No authentication, no privacy
• Keyword for the group: N/A
• Parameters for the user: N/A
• Remarks: The authentication and encryption keys, if configured, do not take effect.

Configuring an SNMPv3 group and user in non-FIPS mode


1. Enter system view.
system-view
2. Create an SNMPv3 group.
snmp-agent group v3 group-name [ authentication | privacy ]
[ notify-view view-name | read-view view-name | write-view view-name ] *
[ acl { ipv4-acl-number | name ipv4-acl-name } | acl ipv6
{ ipv6-acl-number | name ipv6-acl-name } ] *
3. (Optional.) Calculate the encrypted form for the key in plaintext form.
snmp-agent calculate-password plain-password mode { 3desmd5 | 3dessha |
aes192md5 | aes192sha | aes256md5 | aes256sha | md5 | sha }
{ local-engineid | specified-engineid engineid }
4. Create an SNMPv3 user. Choose one option as needed.
 In VACM mode:
snmp-agent usm-user v3 user-name group-name [ remote { ipv4-address |
ipv6 ipv6-address } [ vpn-instance vpn-instance-name ] ] [ { cipher |
simple } authentication-mode { md5 | sha } auth-password
[ privacy-mode { 3des | aes128 | aes192 | aes256 | des56 }
priv-password ] ] [ acl { ipv4-acl-number | name ipv4-acl-name } | acl
ipv6 { ipv6-acl-number | name ipv6-acl-name } ] *
 In RBAC mode:
snmp-agent usm-user v3 user-name user-role role-name [ remote
{ ipv4-address | ipv6 ipv6-address } [ vpn-instance
vpn-instance-name ] ] [ { cipher | simple } authentication-mode { md5 |
sha } auth-password [ privacy-mode { 3des | aes128 | aes192 | aes256 |
des56 } priv-password ] ] [ acl { ipv4-acl-number | name
ipv4-acl-name } | acl ipv6 { ipv6-acl-number | name ipv6-acl-name } ]
*
To send notifications to an SNMPv3 NMS, you must specify the remote keyword.
5. (Optional.) Assign a user role to the SNMPv3 user created in RBAC mode.
snmp-agent usm-user v3 user-name user-role role-name
By default, an SNMPv3 user has the user role assigned to it at its creation.
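For illustration, the following VACM-mode sketch creates a group that requires authentication and privacy and adds a user to it. The group name, username, and keys are example values (the keys reuse the sample keys from the configuration examples later in this chapter):
<Sysname> system-view
# Create SNMPv3 group v3group with authentication and privacy.
[Sysname] snmp-agent group v3 v3group privacy
# Create SNMPv3 user v3user in group v3group, using SHA-1 authentication and AES-128 encryption.
[Sysname] snmp-agent usm-user v3 v3user v3group simple authentication-mode sha 123456TESTauth&! privacy-mode aes128 123456TESTencr&!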

Configuring an SNMPv3 group and user in FIPS mode
1. Enter system view.
system-view
2. Create an SNMPv3 group.
snmp-agent group v3 group-name { authentication | privacy }
[ notify-view view-name | read-view view-name | write-view view-name ]
* [ acl { ipv4-acl-number | name ipv4-acl-name } | acl ipv6
{ ipv6-acl-number | name ipv6-acl-name } ] *
3. (Optional.) Calculate the encrypted form for the key in plaintext form.
snmp-agent calculate-password plain-password mode { aes192sha |
aes256sha | sha } { local-engineid | specified-engineid engineid }
4. Create an SNMPv3 user. Choose one option as needed.
 In VACM mode:
snmp-agent usm-user v3 user-name group-name [ remote { ipv4-address |
ipv6 ipv6-address } [ vpn-instance vpn-instance-name ] ] { cipher |
simple } authentication-mode sha auth-password [ privacy-mode
{ aes128 | aes192 | aes256 } priv-password ] [ acl { ipv4-acl-number |
name ipv4-acl-name } | acl ipv6 { ipv6-acl-number | name
ipv6-acl-name } ] *
 In RBAC mode:
snmp-agent usm-user v3 user-name user-role role-name [ remote
{ ipv4-address | ipv6 ipv6-address } [ vpn-instance
vpn-instance-name ] ] { cipher | simple } authentication-mode sha
auth-password [ privacy-mode { aes128 | aes192 | aes256 }
priv-password ] [ acl { ipv4-acl-number | name ipv4-acl-name } | acl
ipv6 { ipv6-acl-number | name ipv6-acl-name } ] *
To send notifications to an SNMPv3 NMS, you must specify the remote keyword.
5. (Optional.) Assign a user role to the SNMPv3 user created in RBAC mode.
snmp-agent usm-user v3 user-name user-role role-name
By default, an SNMPv3 user has the user role assigned to it at its creation.

Configuring SNMP notifications


About SNMP notifications
The SNMP agent sends notifications (traps and informs) to inform the NMS of significant events,
such as link state changes and user logins or logouts. After you enable notifications for a module, the
module sends the generated notifications to the SNMP agent. The SNMP agent sends the received
notifications as traps or informs based on the current configuration. Unless otherwise stated, the
trap keyword in the command line includes both traps and informs.

Enabling SNMP notifications


Restrictions and guidelines
Enable an SNMP notification only if necessary. SNMP notifications are memory-intensive and might
affect device performance.

To generate linkUp or linkDown notifications when the link state of an interface changes, you must
perform the following tasks:
• Enable linkUp or linkDown notification globally by using the snmp-agent trap enable
standard [ linkdown | linkup ] * command.
• Enable linkUp or linkDown notification on the interface by using the enable snmp trap
updown command.
After you enable notifications for a module, whether the module generates notifications also depends
on the configuration of the module. For more information, see the configuration guide for each
module.
To use SNMP notifications in IPv6, enable SNMPv2c or SNMPv3.
Procedure
1. Enter system view.
system-view
2. Enable SNMP notifications.
snmp-agent trap enable [ configuration | protocol | standard
[ authentication | coldstart | linkdown | linkup | warmstart ] * |
system ]
By default, SNMP configuration notifications, standard notifications, and system notifications
are enabled. Whether other SNMP notifications are enabled varies by modules.
For the device to send SNMP notifications for a protocol, first enable the protocol.
3. Enter interface view.
interface interface-type interface-number
4. Enable link state notifications.
enable snmp trap updown
By default, link state notifications are enabled.
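A minimal sketch of this procedure, using GigabitEthernet 1/0/1 as an example interface:
<Sysname> system-view
# Enable linkUp and linkDown notifications globally.
[Sysname] snmp-agent trap enable standard linkdown linkup
# Enable link state notifications on GigabitEthernet 1/0/1.
[Sysname] interface gigabitethernet 1/0/1
[Sysname-GigabitEthernet1/0/1] enable snmp trap updown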

Configuring parameters for sending SNMP notifications


About this task
You can configure the SNMP agent to send notifications as traps or informs to a host, typically an
NMS, for analysis and management. Traps are less reliable and use fewer resources than informs,
because an NMS does not send an acknowledgment when it receives a trap.
When network congestion occurs or the destination is not reachable, the SNMP agent buffers
notifications in a queue. You can set the queue size and the notification lifetime (the maximum time
that a notification can stay in the queue). When the queue is full, the system discards new
notifications as it receives them. If you reduce the queue size so that the queued notifications
exceed the new size, the oldest notifications are dropped. A notification is also deleted when its
lifetime expires.
You can extend standard linkUp/linkDown notifications to include interface description and interface
type, but must make sure the NMS supports the extended SNMP messages.
Configuring the parameters for sending SNMP traps
1. Enter system view.
system-view
2. Configure a target host.
In non-FIPS mode:
snmp-agent target-host trap address udp-domain { ipv4-target-host |
ipv6 ipv6-target-host } [ udp-port port-number ] [ dscp dscp-value ]

[ vpn-instance vpn-instance-name ] params securityname
security-string [ v1 | v2c | v3 [ authentication | privacy ] ]
In FIPS mode:
snmp-agent target-host trap address udp-domain { ipv4-target-host |
ipv6 ipv6-target-host } [ udp-port port-number ] [ dscp dscp-value ]
[ vpn-instance vpn-instance-name ] params securityname
security-string v3 { authentication | privacy }
By default, no target host is configured.
3. (Optional.) Configure a source address for sending traps.
snmp-agent trap source interface-type { interface-number |
interface-number.subnumber }
By default, SNMP uses the IP address of the outgoing routed interface as the source IP
address.
4. (Optional.) Enable SNMP alive traps and set the sending interval.
snmp-agent trap periodical-interval interval
By default, SNMP alive traps are enabled and the sending interval is 60 seconds.
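For illustration, the following non-FIPS sketch sends SNMPv2c traps to an NMS at 192.168.1.26 and adjusts the optional parameters. The target address, community name public, source interface, and interval are example values only:
<Sysname> system-view
# Specify the NMS at 192.168.1.26 as a trap target host, using community public and SNMPv2c.
[Sysname] snmp-agent target-host trap address udp-domain 192.168.1.26 params securityname public v2c
# Use the IP address of VLAN-interface 1 as the source address for traps.
[Sysname] snmp-agent trap source vlan-interface 1
# Send SNMP alive traps every 300 seconds.
[Sysname] snmp-agent trap periodical-interval 300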
Configuring the parameters for sending SNMP informs
1. Enter system view.
system-view
2. Configure a target host.
In non-FIPS mode:
snmp-agent target-host inform address udp-domain { ipv4-target-host |
ipv6 ipv6-target-host } [ udp-port port-number ] [ vpn-instance
vpn-instance-name ] params securityname security-string { v2c | v3
[ authentication | privacy ] }
In FIPS mode:
snmp-agent target-host inform address udp-domain { ipv4-target-host |
ipv6 ipv6-target-host } [ udp-port port-number ] [ vpn-instance
vpn-instance-name ] params securityname security-string v3
{ authentication | privacy }
By default, no target host is configured.
Only SNMPv2c and SNMPv3 support inform packets.
3. (Optional.) Configure a source address for sending informs.
snmp-agent inform source interface-type { interface-number |
interface-number.subnumber }
By default, SNMP uses the IP address of the outgoing routed interface as the source IP
address.
Configuring common parameters for sending notifications
1. Enter system view.
system-view
2. (Optional.) Enable extended linkUp/linkDown notifications.
snmp-agent trap if-mib link extended
By default, the SNMP agent sends standard linkUp/linkDown notifications.
If the NMS does not support extended linkUp/linkDown notifications, do not use this command.
3. (Optional.) Set the notification queue size.
snmp-agent trap queue-size size

By default, the notification queue can hold 100 notification messages.
4. (Optional.) Set the notification lifetime.
snmp-agent trap life seconds
The default notification lifetime is 120 seconds.
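A brief sketch of these optional settings, with example values for the queue size and notification lifetime:
<Sysname> system-view
# Extend linkUp/linkDown notifications to carry the interface description and type. (Make sure the NMS supports extended notifications.)
[Sysname] snmp-agent trap if-mib link extended
# Allow up to 200 queued notifications and keep each one for up to 300 seconds.
[Sysname] snmp-agent trap queue-size 200
[Sysname] snmp-agent trap life 300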

Configuring SNMP logging


About this task
The SNMP agent logs Get requests, Set requests, Set responses, SNMP notifications, and SNMP
authentication failures, but does not log Get responses.
• Get operation—The agent logs the IP address of the NMS, name of the accessed node, and
node OID.
• Set operation—The agent logs the NMS' IP address, name of accessed node, node OID,
variable value, and error code and index for the Set operation.
• Notification tracking—The agent logs the SNMP notifications after sending them to the NMS.
• SNMP authentication failure—The agent logs related information when an NMS fails to be
authenticated by the agent.
The SNMP module sends these logs to the information center. You can configure the information
center to output these messages to certain destinations, such as the console and the log buffer. The
total output size for the node field (MIB node name) and the value field (value of the MIB node) in
each log entry is 1024 bytes. If this limit is exceeded, the information center truncates the data in the
fields. For more information about the information center, see "Configuring the information center."
Restrictions and guidelines
Enable SNMP logging only if necessary. SNMP logging is memory-intensive and might impact
device performance.
Procedure
1. Enter system view.
system-view
2. Enable SNMP logging.
snmp-agent log { all | authfail | get-operation | set-operation }
By default, SNMP logging is enabled for set operations and disabled for SNMP authentication
failures and get operations.
3. Enable SNMP notification logging.
snmp-agent trap log
By default, SNMP notification logging is disabled.
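For example, the following sketch enables logging for all SNMP operation types and for sent notifications:
<Sysname> system-view
# Log get operations, set operations, and SNMP authentication failures.
[Sysname] snmp-agent log all
# Log SNMP notifications after they are sent to the NMS.
[Sysname] snmp-agent trap log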

Setting the interval at which the SNMP module examines the system configuration for changes
About this task
This task enables the SNMP module to examine the system configuration for changes at the
specified interval and generate a trap and a log if any change is found.
Procedure
1. Enter system view.

system-view
2. Set the interval at which the SNMP module examines the system configuration for changes.
snmp-agent configuration-examine interval interval
By default, the SNMP module examines the system configuration for changes at intervals of
600 seconds.

Display and maintenance commands for SNMP


Execute display commands in any view and reset commands in user view.

• Display SNMPv1 or SNMPv2c community information. (This command is not supported in FIPS mode.)
  display snmp-agent community [ read | write ]
• Display SNMP contexts.
  display snmp-agent context [ context-name ]
• Display SNMP group information.
  display snmp-agent group [ group-name ]
• Display the local engine ID.
  display snmp-agent local-engineid
• Display SNMP MIB node information.
  display snmp-agent mib-node [ details | index-node | trap-node | verbose ]
• Display MIB view information.
  display snmp-agent mib-view [ exclude | include | viewname view-name ]
• Display remote engine IDs.
  display snmp-agent remote [ { ipv4-address | ipv6 ipv6-address } [ vpn-instance vpn-instance-name ] ]
• Display SNMP agent statistics.
  display snmp-agent statistics
• Display SNMP agent system information.
  display snmp-agent sys-info [ contact | location | version ] *
• Display basic information about the notification queue.
  display snmp-agent trap queue
• Display SNMP notification drop records.
  display snmp-agent trapbuffer drop
• Display SNMP notification sending records.
  display snmp-agent trapbuffer send
• Display SNMP notification enabling status for modules.
  display snmp-agent trap-list
• Display SNMPv3 user information.
  display snmp-agent usm-user [ engineid engineid | username user-name | group group-name ] *
• Clear all records from the SNMP trap buffer.
  reset snmp-agent trapbuffer

SNMP configuration examples
Example: Configuring SNMPv1/SNMPv2c
The device does not support this configuration example in FIPS mode.
The configuration procedure is the same for SNMPv1 and SNMPv2c. This example uses SNMPv1.
Network configuration
As shown in Figure 59, the NMS (1.1.1.2/24) uses SNMPv1 to manage the SNMP agent (1.1.1.1/24),
and the agent automatically sends notifications to report events to the NMS.
Figure 59 Network diagram

Agent NMS
1.1.1.1/24 1.1.1.2/24

Procedure
1. Configure the SNMP agent:
# Assign IP address 1.1.1.1/24 to the agent and make sure the agent and the NMS can reach
each other. (Details not shown.)
# Specify SNMPv1, and create read-only community public and read and write community
private.
<Agent> system-view
[Agent] snmp-agent sys-info version v1
[Agent] snmp-agent community read public
[Agent] snmp-agent community write private
# Configure contact and physical location information for the agent.
[Agent] snmp-agent sys-info contact Mr.Wang-Tel:3306
[Agent] snmp-agent sys-info location telephone-closet,3rd-floor
# Enable SNMP notifications, specify the NMS at 1.1.1.2 as an SNMP trap destination, and use
public as the community name. (To make sure the NMS can receive traps, specify the same
SNMP version in the snmp-agent target-host command as is configured on the NMS.)
[Agent] snmp-agent trap enable
[Agent] snmp-agent target-host trap address udp-domain 1.1.1.2 params securityname
public v1
2. Configure the SNMP NMS:
 Specify SNMPv1.
 Create read-only community public, and create read and write community private.
 Set the timeout timer and maximum number of retries as needed.
For information about configuring the NMS, see the NMS manual.

NOTE:
The SNMP settings on the agent and the NMS must match.

Verifying the configuration


# Try to get the MTU value of the NULL0 interface from the agent. The attempt succeeds.
Send request to 1.1.1.1/161 ...

Protocol version: SNMPv1
Operation: Get
Request binding:
1: 1.3.6.1.2.1.2.2.1.4.135471
Response binding:
1: Oid=ifMtu.135471 Syntax=INT Value=1500
Get finished

# Use a wrong community name to get the value of a MIB node on the agent. You can see an
authentication failure trap on the NMS.
1.1.1.1/2934 V1 Trap = authenticationFailure
SNMP Version = V1
Community = public
Command = Trap
Enterprise = 1.3.6.1.4.1.43.1.16.4.3.50
GenericID = 4
SpecificID = 0
Time Stamp = 8:35:25.68

Example: Configuring SNMPv3


Network configuration
As shown in Figure 60, the NMS (1.1.1.2/24) uses SNMPv3 to monitor and manage the agent
(1.1.1.1/24). The agent automatically sends notifications to report events to the NMS. The default
UDP port 162 is used for SNMP notifications.
The NMS and the agent perform authentication when they establish an SNMP session. The
authentication algorithm is SHA-1 and the authentication key is 123456TESTauth&!. The NMS and
the agent also encrypt the SNMP packets between them by using the AES algorithm and encryption
key 123456TESTencr&!.
Figure 60 Network diagram

Agent NMS
1.1.1.1/24 1.1.1.2/24

Configuring SNMPv3 in RBAC mode


1. Configure the agent:
# Assign IP address 1.1.1.1/24 to the agent and make sure the agent and the NMS can reach
each other. (Details not shown.)
# Create user role test, and assign test read-only access to the objects under the snmpMIB
node (OID: 1.3.6.1.6.3.1), including the linkUp and linkDown objects.
<Agent> system-view
[Agent] role name test
[Agent-role-test] rule 1 permit read oid 1.3.6.1.6.3.1
# Assign user role test read-only access to the system node (OID: 1.3.6.1.2.1.1) and read-write
access to the interfaces node (OID: 1.3.6.1.2.1.2).
[Agent-role-test] rule 2 permit read oid 1.3.6.1.2.1.1
[Agent-role-test] rule 3 permit read write oid 1.3.6.1.2.1.2
[Agent-role-test] quit

# Create SNMPv3 user RBACtest. Assign user role test to RBACtest. Set the authentication
algorithm to SHA-1, authentication key to 123456TESTauth&!, encryption algorithm to AES,
and encryption key to 123456TESTencr&!.
[Agent] snmp-agent usm-user v3 RBACtest user-role test simple authentication-mode sha
123456TESTauth&! privacy-mode aes128 123456TESTencr&!
# Configure contact and physical location information for the agent.
[Agent] snmp-agent sys-info contact Mr.Wang-Tel:3306
[Agent] snmp-agent sys-info location telephone-closet,3rd-floor
# Enable notifications on the agent. Specify the NMS at 1.1.1.2 as the notification destination,
and RBACtest as the username.
[Agent] snmp-agent trap enable
[Agent] snmp-agent target-host trap address udp-domain 1.1.1.2 params securityname
RBACtest v3 privacy
2. Configure the NMS:
 Specify SNMPv3.
 Create SNMPv3 user RBACtest.
 Enable authentication and encryption. Set the authentication algorithm to SHA-1,
authentication key to 123456TESTauth&!, encryption algorithm to AES, and encryption key
to 123456TESTencr&!.
 Set the timeout timer and maximum number of retries.
For information about configuring the NMS, see the NMS manual.

NOTE:
The SNMP settings on the agent and the NMS must match.

Configuring SNMPv3 in VACM mode


1. Configure the agent:
# Assign IP address 1.1.1.1/24 to the agent, and make sure the agent and the NMS can reach
each other. (Details not shown.)
# Create SNMPv3 group managev3group and assign managev3group read-only access to
the objects under the snmpMIB node (OID: 1.3.6.1.6.3.1) in the test view, including the
linkUp and linkDown objects.
<Agent> system-view
[Agent] undo snmp-agent mib-view ViewDefault
[Agent] snmp-agent mib-view included test snmpMIB
[Agent] snmp-agent group v3 managev3group privacy read-view test
# Assign SNMPv3 group managev3group read-write access to the objects under the system
node (OID: 1.3.6.1.2.1.1) and interfaces node (OID: 1.3.6.1.2.1.2) in the test view.
[Agent] snmp-agent mib-view included test 1.3.6.1.2.1.1
[Agent] snmp-agent mib-view included test 1.3.6.1.2.1.2
[Agent] snmp-agent group v3 managev3group privacy read-view test write-view test
# Add user VACMtest to SNMPv3 group managev3group, and set the authentication
algorithm to SHA-1, authentication key to 123456TESTauth&!, encryption algorithm to AES,
and encryption key to 123456TESTencr&!.
[Agent] snmp-agent usm-user v3 VACMtest managev3group simple authentication-mode sha
123456TESTauth&! privacy-mode aes128 123456TESTencr&!
# Configure contact and physical location information for the agent.
[Agent] snmp-agent sys-info contact Mr.Wang-Tel:3306
[Agent] snmp-agent sys-info location telephone-closet,3rd-floor

# Enable notifications on the agent. Specify the NMS at 1.1.1.2 as the trap destination, and
VACMtest as the username.
[Agent] snmp-agent trap enable
[Agent] snmp-agent target-host trap address udp-domain 1.1.1.2 params securityname
VACMtest v3 privacy
2. Configure the SNMP NMS:
 Specify SNMPv3.
 Create SNMPv3 user VACMtest.
 Enable authentication and encryption. Set the authentication algorithm to SHA-1,
authentication key to 123456TESTauth&!, encryption algorithm to AES, and encryption key
to 123456TESTencr&!.
 Set the timeout timer and maximum number of retries.
For information about configuring the NMS, see the NMS manual.

NOTE:
The SNMP settings on the agent and the NMS must match.

Verifying the configuration


• Use username RBACtest to access the agent.
# Retrieve the value of the sysName node. The value Agent is returned.
# Set the value for the sysName node to Sysname. The operation fails because the NMS does
not have write access to the node.
# Shut down or bring up an interface on the agent. The NMS receives linkUp (OID:
1.3.6.1.6.3.1.1.5.4) or linkDown (OID: 1.3.6.1.6.3.1.1.5.3) notifications.
• Use username VACMtest to access the agent.
# Retrieve the value of the sysName node. The value Agent is returned.
# Set the value for the sysName node to Sysname. The operation succeeds.
# Shut down or bring up an interface on the agent. The NMS receives linkUp (OID:
1.3.6.1.6.3.1.1.5.4) or linkDown (OID: 1.3.6.1.6.3.1.1.5.3) notifications.

Configuring RMON
About RMON
Remote Network Monitoring (RMON) is an SNMP-based network management protocol. It enables
proactive remote monitoring and management of network devices.

RMON working mechanism


RMON can periodically or continuously collect traffic statistics for an Ethernet port and monitor the
values of MIB objects on a device. When a value reaches the threshold, the device automatically
logs the event or sends a notification to the NMS. The NMS does not need to constantly poll MIB
variables and compare the results.
RMON uses SNMP notifications to notify NMSs of various alarm conditions. Through SNMP, the agent
reports function and interface operating status changes, such as link up, link down, and module failure, to the NMS.

RMON groups
Among standard RMON groups, the device implements the statistics group, history group, event
group, alarm group, probe configuration group, and user history group. The Comware system also
implements a private alarm group, which enhances the standard alarm group. The probe
configuration group and user history group are not configurable from the CLI. To configure these two
groups, you must access the MIB.
Statistics group
The statistics group samples traffic statistics for monitored Ethernet interfaces and stores the
statistics in the Ethernet statistics table (ethernetStatsTable). The statistics include:
• Number of collisions.
• CRC alignment errors.
• Number of undersize or oversize packets.
• Number of broadcasts.
• Number of multicasts.
• Number of bytes received.
• Number of packets received.
The statistics in the Ethernet statistics table are cumulative sums.
History group
The history group periodically samples traffic statistics on interfaces and saves the history samples
in the history table (etherHistoryTable). The statistics include:
• Bandwidth utilization.
• Number of error packets.
• Total number of packets.
The history table stores traffic statistics collected for each sampling interval.
Event group
The event group controls the generation and notifications of events triggered by the alarms defined
in the alarm group and the private alarm group. The following are RMON alarm event handling
methods:

• Log—Logs event information (including event time and description) in the event log table so the
management device can get the logs through SNMP.
• Trap—Sends an SNMP notification when the event occurs.
• Log-Trap—Logs event information in the event log table and sends an SNMP notification when
the event occurs.
• None—Takes no actions.
Alarm group
The RMON alarm group monitors alarm variables, such as the count of incoming packets
(etherStatsPkts) on an interface. After you create an alarm entry, the RMON agent samples the
value of the monitored alarm variable regularly. If the value of the monitored variable is greater than
or equal to the rising threshold, a rising alarm event is triggered. If the value of the monitored variable
is smaller than or equal to the falling threshold, a falling alarm event is triggered. The event group
defines the action to take on the alarm event.
If an alarm entry crosses a threshold multiple times in succession, the RMON agent generates an
alarm event only for the first crossing. For example, if the value of a sampled alarm variable crosses
the rising threshold multiple times before it crosses the falling threshold, only the first crossing
triggers a rising alarm event, as shown in Figure 61.
Figure 61 Rising and falling alarm events

Private alarm group


The private alarm group enables you to perform basic math operations on multiple variables, and
compare the calculation result with the rising and falling thresholds.
The RMON agent samples variables and takes an alarm action based on a private alarm entry as
follows:
1. Samples the private alarm variables in the user-defined formula.
2. Processes the sampled values with the formula.
3. Compares the calculation result with the predefined thresholds, and then takes one of the
following actions:
 Triggers the event associated with the rising alarm event if the result is equal to or greater
than the rising threshold.
 Triggers the event associated with the falling alarm event if the result is equal to or less than
the falling threshold.
If a private alarm entry crosses a threshold multiple times in succession, the RMON agent generates
an alarm event only for the first crossing. For example, if the value of a sampled alarm variable

crosses the rising threshold multiple times before it crosses the falling threshold, only the first
crossing triggers a rising alarm event.

Sample types for the alarm group and the private alarm group
The RMON agent supports the following sample types:
• absolute—RMON compares the value of the monitored variable with the rising and falling
thresholds at the end of the sampling interval.
• delta—RMON subtracts the value of the monitored variable at the previous sample from the
current value, and then compares the difference with the rising and falling thresholds.

Protocols and standards


• RFC 4502, Remote Network Monitoring Management Information Base Version 2
• RFC 2819, Remote Network Monitoring Management Information Base Status of this Memo

Configuring the RMON statistics function


About the RMON statistics function
RMON implements the statistics function through the Ethernet statistics group and the history group.
The Ethernet statistics group provides the cumulative statistic for a variable from the time the
statistics entry is created to the current time.
The history group provides statistics that are sampled for a variable for each sampling interval. The
history group uses the history control table to control sampling, and it stores samples in the history
table.

Creating an RMON Ethernet statistics entry


Restrictions and guidelines
The index of an RMON statistics entry must be globally unique. If the index has been used by
another interface, the creation operation fails.
You can create only one RMON statistics entry for an Ethernet interface.
Procedure
1. Enter system view.
system-view
2. Enter Ethernet interface view.
interface interface-type interface-number
3. Create an RMON Ethernet statistics entry.
rmon statistics entry-number [ owner text ]

Creating an RMON history control entry


Restrictions and guidelines
You can configure multiple history control entries for one interface, but you must make sure their
entry numbers and sampling intervals are different.

You can create a history control entry successfully even if the specified bucket size exceeds the
available history table size. RMON will set the bucket size as closely to the expected bucket size as
possible.
Procedure
1. Enter system view.
system-view
2. Enter Ethernet interface view.
interface interface-type interface-number
3. Create an RMON history control entry.
rmon history entry-number buckets number interval interval [ owner
text ]
By default, no RMON history control entries exist.
You can create multiple RMON history control entries for an Ethernet interface.

Configuring the RMON alarm function


Restrictions and guidelines
When you create a new event, alarm, or private alarm entry, follow these restrictions and guidelines:
• The entry must not have the same set of parameters as an existing entry.
• The maximum number of entries is not reached.
Table 8 shows the parameters to be compared for duplication and the entry limits.
Table 8 RMON configuration restrictions

Entry type: Event
• Parameters to be compared: Event description (description string), event type (log, trap, logtrap, or none), and community name (security-string).
• Maximum number of entries: 60

Entry type: Alarm
• Parameters to be compared: Alarm variable (alarm-variable), sampling interval (sampling-interval), sample type (absolute or delta), rising threshold (threshold-value1), and falling threshold (threshold-value2).
• Maximum number of entries: 60

Entry type: Private alarm
• Parameters to be compared: Alarm variable formula (prialarm-formula), sampling interval (sampling-interval), sample type (absolute or delta), rising threshold (threshold-value1), and falling threshold (threshold-value2).
• Maximum number of entries: 50

Prerequisites
To send notifications to the NMS when an alarm is triggered, configure the SNMP agent as described
in "Configuring SNMP" before configuring the RMON alarm function.
Procedure
1. Enter system view.
system-view
2. (Optional.) Create an RMON event entry.
rmon event entry-number [ description string ] { log | log-trap
security-string | none | trap security-string } [ owner text ]
By default, no RMON event entries exist.
3. Create an RMON alarm entry.
 Create an RMON alarm entry.
rmon alarm entry-number alarm-variable sampling-interval
{ absolute | delta } [ startup-alarm { falling | rising |
rising-falling } ] rising-threshold threshold-value1 event-entry1
falling-threshold threshold-value2 event-entry2 [ owner text ]
 Create an RMON private alarm entry.
rmon prialarm entry-number prialarm-formula prialarm-des
sampling-interval { absolute | delta } [ startup-alarm { falling |
rising | rising-falling } ] rising-threshold threshold-value1
event-entry1 falling-threshold threshold-value2 event-entry2
entrytype { forever | cycle cycle-period } [ owner text ]
By default, no RMON alarm entries or RMON private alarm entries exist.
You can associate an alarm with an event that has not been created yet. The alarm will trigger
the event only after the event is created.

Display and maintenance commands for RMON


Execute display commands in any view.

• Display RMON alarm entries.
  display rmon alarm [ entry-number ]
• Display RMON event entries.
  display rmon event [ entry-number ]
• Display log information for event entries.
  display rmon eventlog [ entry-number ]
• Display RMON history control entries and history samples.
  display rmon history [ interface-type interface-number ]
• Display RMON private alarm entries.
  display rmon prialarm [ entry-number ]
• Display RMON statistics.
  display rmon statistics [ interface-type interface-number ]

RMON configuration examples
Example: Configuring the Ethernet statistics function
Network configuration
As shown in Figure 62, create an RMON Ethernet statistics entry on the device to gather cumulative
traffic statistics for GigabitEthernet 1/0/1.
Figure 62 Network diagram

GE1/0/1

NMS
Server Device
1.1.1.2

Procedure
# Create an RMON Ethernet statistics entry for GigabitEthernet 1/0/1.
<Sysname> system-view
[Sysname] interface gigabitethernet 1/0/1
[Sysname-GigabitEthernet1/0/1] rmon statistics 1 owner user1

Verifying the configuration


# Display statistics collected for GigabitEthernet 1/0/1.
<Sysname> display rmon statistics gigabitethernet 1/0/1
EtherStatsEntry 1 owned by user1 is VALID.
Interface : GigabitEthernet1/0/1<ifIndex.3>
etherStatsOctets : 21657 , etherStatsPkts : 307
etherStatsBroadcastPkts : 56 , etherStatsMulticastPkts : 34
etherStatsUndersizePkts : 0 , etherStatsOversizePkts : 0
etherStatsFragments : 0 , etherStatsJabbers : 0
etherStatsCRCAlignErrors : 0 , etherStatsCollisions : 0
etherStatsDropEvents (insufficient resources): 0
Incoming packets by size:
64 : 235 , 65-127 : 67 , 128-255 : 4
256-511: 1 , 512-1023: 0 , 1024-1518: 0

# Get the traffic statistics from the NMS through SNMP. (Details not shown.)

Example: Configuring the history statistics function


Network configuration
As shown in Figure 63, create an RMON history control entry on the device to sample traffic statistics
for GigabitEthernet 1/0/1 every minute.

Figure 63 Network diagram

GE1/0/1

NMS
Server Device
1.1.1.2

Procedure
# Create an RMON history control entry to sample traffic statistics every minute for GigabitEthernet
1/0/1. Retain a maximum of eight samples for the interface in the history statistics table.
<Sysname> system-view
[Sysname] interface gigabitethernet 1/0/1
[Sysname-GigabitEthernet1/0/1] rmon history 1 buckets 8 interval 60 owner user1

Verifying the configuration


# Display the history statistics collected for GigabitEthernet 1/0/1.
[Sysname-GigabitEthernet1/0/1] display rmon history
HistoryControlEntry 1 owned by user1 is VALID
Sampled interface : GigabitEthernet1/0/1<ifIndex.3>
Sampling interval : 60(sec) with 8 buckets max
Sampling record 1 :
dropevents : 0 , octets : 834
packets : 8 , broadcast packets : 1
multicast packets : 6 , CRC alignment errors : 0
undersize packets : 0 , oversize packets : 0
fragments : 0 , jabbers : 0
collisions : 0 , utilization : 0
Sampling record 2 :
dropevents : 0 , octets : 962
packets : 10 , broadcast packets : 3
multicast packets : 6 , CRC alignment errors : 0
undersize packets : 0 , oversize packets : 0
fragments : 0 , jabbers : 0
collisions : 0 , utilization : 0

# Get the traffic statistics from the NMS through SNMP. (Details not shown.)

Example: Configuring the alarm function


Network configuration
As shown in Figure 64, configure the device to monitor the incoming traffic statistic on
GigabitEthernet 1/0/1, and send RMON alarms when either of the following conditions is met:
• The 5-second delta sample for the traffic statistic crosses the rising threshold (100).
• The 5-second delta sample for the traffic statistic drops below the falling threshold (50).

Figure 64 Network diagram

GE1/0/1

NMS
Server Device
1.1.1.2

Procedure
# Configure the SNMP agent (the device) with the same SNMP settings as the NMS at 1.1.1.2. This
example uses SNMPv1, read community public, and write community private.
<Sysname> system-view
[Sysname] snmp-agent
[Sysname] snmp-agent community read public
[Sysname] snmp-agent community write private
[Sysname] snmp-agent sys-info version v1
[Sysname] snmp-agent trap enable
[Sysname] snmp-agent trap log
[Sysname] snmp-agent target-host trap address udp-domain 1.1.1.2 params securityname
public

# Create an RMON Ethernet statistics entry for GigabitEthernet 1/0/1.


[Sysname] interface gigabitethernet 1/0/1
[Sysname-GigabitEthernet1/0/1] rmon statistics 1 owner user1
[Sysname-GigabitEthernet1/0/1] quit

# Create an RMON event entry and an RMON alarm entry to send SNMP notifications when the
delta sample for 1.3.6.1.2.1.16.1.1.1.4.1 exceeds 100 or drops below 50.
[Sysname] rmon event 1 trap public owner user1
[Sysname] rmon alarm 1 1.3.6.1.2.1.16.1.1.1.4.1 5 delta rising-threshold 100 1
falling-threshold 50 1 owner user1

NOTE:
The string 1.3.6.1.2.1.16.1.1.1.4.1 is the object instance for GigabitEthernet 1/0/1. The digits before
the last digit (1.3.6.1.2.1.16.1.1.1.4) represent the object for total incoming traffic statistics. The last
digit (1) is the RMON Ethernet statistics entry index for GigabitEthernet 1/0/1.

Verifying the configuration


# Display the RMON alarm entry.
<Sysname> display rmon alarm 1
AlarmEntry 1 owned by user1 is VALID.
Sample type : delta
Sampled variable : 1.3.6.1.2.1.16.1.1.1.4.1<etherStatsOctets.1>
Sampling interval (in seconds) : 5
Rising threshold : 100(associated with event 1)
Falling threshold : 50(associated with event 1)
Alarm sent upon entry startup : risingOrFallingAlarm
Latest value : 0

# Display statistics for GigabitEthernet 1/0/1.


<Sysname> display rmon statistics gigabitethernet 1/0/1
EtherStatsEntry 1 owned by user1 is VALID.

Interface : GigabitEthernet1/0/1<ifIndex.3>
etherStatsOctets : 57329 , etherStatsPkts : 455
etherStatsBroadcastPkts : 53 , etherStatsMulticastPkts : 353
etherStatsUndersizePkts : 0 , etherStatsOversizePkts : 0
etherStatsFragments : 0 , etherStatsJabbers : 0
etherStatsCRCAlignErrors : 0 , etherStatsCollisions : 0
etherStatsDropEvents (insufficient resources): 0
Incoming packets by size :
64 : 7 , 65-127 : 413 , 128-255 : 35
256-511: 0 , 512-1023: 0 , 1024-1518: 0

The NMS receives the notification when the alarm is triggered.

Configuring the Event MIB
About the Event MIB
The Event Management Information Base (Event MIB) is an SNMPv3-based network management
protocol and is an enhancement to remote network monitoring (RMON). The Event MIB uses
Boolean tests, existence tests, and threshold tests to monitor MIB objects on a local or remote
system. It triggers the predefined notification or set action when a monitored object meets the trigger
condition.

Trigger
The Event MIB uses triggers to manage and associate the three elements of the Event MIB:
monitored object, trigger condition, and action.

Monitored objects
The Event MIB can monitor the following MIB objects:
• Table node.
• Conceptual row node.
• Table column node.
• Simple leaf node.
• Parent node of a leaf node.
To monitor a single MIB object, specify it by its OID or name. To monitor a set of MIB objects, specify
the common OID or name of the group and enable wildcard matching. For example, specify ifDescr.2
to monitor the description for the interface with index 2. Specify ifDescr and enable wildcard
matching to monitor the descriptions for all interfaces.

Trigger test
A trigger supports Boolean, existence, and threshold tests.
Boolean test
A Boolean test compares the value of the monitored object with the reference value and takes
actions according to the comparison result. The comparison types include unequal, equal, less,
lessorequal, greater, and greaterorequal. For example, if the comparison type is equal, an event
is triggered when the value of the monitored object equals the reference value. The event will not be
triggered again until the value becomes unequal and comes back to equal.
Existence test
An existence test monitors and manages the absence, presence, and change of a MIB object, for
example, interface status. When a monitored object is specified, the system reads the value of the
monitored object regularly.
• If the test type is Absent, the system triggers an alarm event and takes the specified action
when the state of the monitored object changes to absent.
• If the test type is Present, the system triggers an alarm event and takes the specified action
when the state of the monitored object changes to present.
• If the test type is Changed, the system triggers an alarm event and takes the specified action
when the value of the monitored object changes.

Threshold test
A threshold test regularly compares the value of the monitored object with the threshold values.
• A rising alarm event is triggered if the value of the monitored object is greater than or equal to
the rising threshold.
• A falling alarm event is triggered if the value of the monitored object is smaller than or equal to
the falling threshold.
• A rising alarm event is triggered if the difference between the current sampled value and the
previous sampled value is greater than or equal to the delta rising threshold.
• A falling alarm event is triggered if the difference between the current sampled value and the
previous sampled value is smaller than or equal to the delta falling threshold.
• A falling alarm event is triggered if the values of the monitored object, the rising threshold, and
the falling threshold are the same.
• A falling alarm event is triggered if the delta rising threshold, the delta falling threshold, and the
difference between the current sampled value and the previous sampled value are the same.
The alarm management module defines the set or notification action to take on alarm events.
If the value of the monitored object crosses a threshold multiple times in succession, the managed
device triggers an alarm event only for the first crossing. For example, if the value of a sampled
object crosses the rising threshold multiple times before it crosses the falling threshold, only the first
crossing triggers a rising alarm event, as shown in Figure 65.
Figure 65 Rising and falling alarm events

(The figure plots the value of the monitored variable over time against the rising threshold and the falling threshold.)

Event actions
The Event MIB triggers one or both of the following actions when the trigger condition is met:
• Set action—Uses SNMP to set the value of the monitored object.
• Notification action—Uses SNMP to send a notification to the NMS. If an object list is specified
for the notification action, the notification will carry the specified objects in the object list.

Object list
An object list is a set of MIB objects. You can specify an object list in trigger view, trigger-test view
(including trigger-Boolean view, trigger existence view, and trigger threshold view), and
action-notification view. If a notification action is triggered, the device sends a notification carrying
the object list to the NMS.

If you specify an object list respectively in any two of the views or all the three views, the object lists
are added to the triggered notifications in this sequence: trigger view, trigger-test view, and
action-notification view.

Object owner
Trigger, event, and object list use an owner and name for unique identification. The owner must be
an SNMPv3 user that has been created on the device. If you specify a notification action for a trigger,
you must establish an SNMPv3 connection between the device and NMS by using the SNMPv3
username. For more information about SNMPv3 user, see "SNMP configuration".

Restrictions and guidelines: Event MIB


configuration
The Event MIB and RMON are independent of each other. You can configure one or both of the
features for network management.
You must specify the same owner for a trigger, object lists of the trigger, and events of the trigger.

Event MIB tasks at a glance


To configure the Event MIB, perform the following tasks:
1. Configuring the Event MIB global sampling parameters
2. (Optional.) Configuring Event MIB object lists
Perform this task so that the device sends a notification that carries the specified object list to
the NMS when a notification action is triggered.
3. Configuring an event
The device supports set and notification actions. Choose one or two of the following actions:
 Creating an event
 Configuring a set action for an event
 Configuring a notification action for an event
 Enabling the event
4. Configuring a trigger
A trigger supports Boolean, existence, and threshold tests. Choose one or more of the following
tests:
 Creating a trigger and configuring its basic parameters
 Configuring a Boolean trigger test
 Configuring an existence trigger test
 Configuring a threshold trigger test
 Enabling trigger sampling
5. (Optional.) Enabling SNMP notifications for the Event MIB module

Prerequisites for configuring the Event MIB


Before you configure the Event MIB, perform the following tasks:
• Create an SNMPv3 user. Assign the user the rights to read and set the values of the specified
MIB objects and object lists.

• Make sure the SNMP agent and NMS are configured correctly and the SNMP agent can send
notifications to the NMS correctly.

Configuring the Event MIB global sampling parameters
Restrictions and guidelines
This task takes effect only on monitored object instances that are created after the task is configured.
Procedure
1. Enter system view.
system-view
2. Set the minimum sampling interval.
snmp mib event sample minimum min-number
By default, the minimum sampling interval is 1 second.
The sampling interval of a trigger must be greater than the minimum sampling interval.
3. Configure the maximum number of object instances that can be concurrently sampled.
snmp mib event sample instance maximum max-number
By default, the value is 0. The maximum number of object instances that can be concurrently
sampled is limited by the available resources.

Configuring Event MIB object lists


About this task
Perform this task so that the device sends a notification that carries the specified objects to the NMS
when a notification action is triggered.
Procedure
1. Enter system view.
system-view
2. Configure an Event MIB object list.
snmp mib event object list owner group-owner name group-name
object-index oid object-identifier [ wildcard ]
The object can be a table node, conceptual row node, table column node, simple leaf node, or
parent node of a leaf node.
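For illustration, the following sketch creates an object list. The owner owner1 is assumed to be an existing SNMPv3 user (as in the configuration examples later in this chapter), objlistA is an example list name, and the monitored object is ifDescr (OID 1.3.6.1.2.1.2.2.1.2), wildcarded so that the descriptions of all interfaces can be carried in notifications:
<Sysname> system-view
# Create object list objlistA with object index 1 referring to ifDescr, wildcarded.
[Sysname] snmp mib event object list owner owner1 name objlistA 1 oid 1.3.6.1.2.1.2.2.1.2 wildcard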

Configuring an event
Creating an event
1. Enter system view.
system-view
2. Create an event and enter its view.
snmp mib event owner event-owner name event-name
3. (Optional.) Configure a description for the event.

183
description text
By default, an event does not have a description.

Configuring a set action for an event


1. Enter system view.
system-view
2. Enter event view.
snmp mib event owner event-owner name event-name
3. Enable the set action and enter set action view.
action set
By default, no action is specified for an event.
4. Specify an object by its OID for the set action.
oid object-identifier
By default, no object is specified for a set action.
The object can be a table node, conceptual row node, table column node, simple leaf node, or
parent node of a leaf node.
5. Enable OID wildcarding.
wildcard oid
By default, OID wildcarding is disabled.
6. Set the value for the object.
value integer-value
The default value for the object is 0.
7. (Optional.) Specify a context for the object.
context context-name
By default, no context is specified for an object.
8. (Optional.) Enable context wildcarding.
wildcard context
By default, context wildcarding is disabled.
A wildcard context contains the specified context and the wildcarded part.

Configuring a notification action for an event


1. Enter system view.
system-view
2. Enter event view
snmp mib event owner event-owner name event-name
3. Enable the notification action and enter notification action view.
action notification
By default, no action is specified for an event.
4. Specify an object to execute the notification action by its OID.
oid object-identifier
By default, no object is specified for executing the notification action.
The object must be a notification object.
5. Specify an object list to be added to the notification triggered by the event.

object list owner group-owner name group-name
By default, no object list is specified for the notification action.
If you do not specify an object list for the notification action or the specified object list does not
contain variables, no variables will be carried in the notification.

Enabling the event


Restrictions and guidelines
The Boolean, existence, and threshold events can be triggered only after you perform this task.
To change an enabled event, first disable the event.
Procedure
1. Enter system view.
system-view
2. Enter event view.
snmp mib event owner event-owner name event-name
3. Enable the event.
event enable
By default, an event is disabled.
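Putting the preceding subsections together, the following sketch creates an event with a notification action and then enables it. The owner owner1, event name eventA, and object list objlistA are example values, and the view prompts shown are indicative only. The notification OID used here is assumed to be mteTriggerFired (1.3.6.1.2.1.88.2.0.1) from the Event MIB:
<Sysname> system-view
# Create event eventA owned by SNMPv3 user owner1 and enter its view.
[Sysname] snmp mib event owner owner1 name eventA
# Configure a notification action that sends mteTriggerFired carrying object list objlistA.
[Sysname-event-owner1-eventA] action notification
[Sysname-event-owner1-eventA-notification] oid 1.3.6.1.2.1.88.2.0.1
[Sysname-event-owner1-eventA-notification] object list owner owner1 name objlistA
[Sysname-event-owner1-eventA-notification] quit
# Enable the event.
[Sysname-event-owner1-eventA] event enable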

Configuring a trigger
Creating a trigger and configuring its basic parameters
1. Enter system view.
system-view
2. Create a trigger and enter its view.
snmp mib event trigger owner trigger-owner name trigger-name
The trigger owner must be an existing SNMPv3 user.
3. (Optional.) Configure a description for the trigger.
description text
By default, a trigger does not have a description.
4. Set a sampling interval for the trigger.
frequency interval
By default, the sampling interval is 600 seconds.
Make sure the sampling interval is greater than or equal to the Event MIB minimum sampling
interval.
5. Specify a sampling method.
sample { absolute | delta }
The default sampling method is absolute.
6. Specify an object to be sampled by its OID.
oid object-identifier
By default, the OID is 0.0, which means that no object is specified for the trigger.
If you execute this command multiple times, the most recent configuration takes effect.
7. (Optional.) Enable OID wildcarding.

wildcard oid
By default, OID wildcarding is disabled.
8. (Optional.) Configure a context for the monitored object.
context context-name
By default, no context is configured for a monitored object.
9. (Optional.) Enable context wildcarding.
wildcard context
By default, context wildcarding is disabled.
10. (Optional.) Specify the object list to be added to the triggered notification.
object list owner group-owner name group-name
By default, no object list is specified for a trigger.

Configuring a Boolean trigger test


1. Enter system view.
system-view
2. Enter trigger view.
snmp mib event trigger owner trigger-owner name trigger-name
3. Specify a Boolean test for the trigger and enter trigger-Boolean view.
test boolean
By default, no test is configured for a trigger.
4. Specify a Boolean test comparison type.
comparison { equal | greater | greaterorequal | less | lessorequal |
unequal }
The default Boolean test comparison type is unequal.
5. Set a reference value for the Boolean trigger test.
value integer-value
The default reference value for a Boolean trigger test is 0.
6. Specify an event for the Boolean trigger test.
event owner event-owner name event-name
By default, no event is specified for a Boolean trigger test.
7. (Optional.) Specify the object list to be added to the notification triggered by the test.
object list owner group-owner name group-name
By default, no object list is specified for a Boolean trigger test.
8. Enable the event to be triggered when the trigger condition is met at the first sampling.
startup enable
By default, the event is triggered when the trigger condition is met at the first sampling.
Before the first sampling, you must enable this command to allow the event to be triggered.

Configuring an existence trigger test


1. Enter system view.
system-view
2. Enter trigger view.

snmp mib event trigger owner trigger-owner name trigger-name
3. Specify an existence test for the trigger and enter trigger-existence view.
test existence
By default, no test is configured for a trigger.
4. Specify an event for the existence trigger test.
event owner event-owner name event-name
By default, no event is specified for an existence trigger test.
5. (Optional.) Specify the object list to be added to the notification triggered by the test.
object list owner group-owner name group-name
By default, no object list is specified for an existence trigger test.
6. Specify an existence trigger test type.
type { absent | changed | present }
The default existence trigger test types are present and absent.
7. Specify an existence trigger test type for the first sampling.
startup { absent | present }
By default, both the present and absent existence trigger test types are allowed for the first
sampling.

Configuring a threshold trigger test


1. Enter system view.
system-view
2. Enter trigger view.
snmp mib event trigger owner trigger-owner name trigger-name
3. Specify a threshold test for the trigger and enter trigger-threshold view.
test threshold
By default, no test is configured for a trigger.
4. Specify the object list to be added to the notification triggered by the test.
object list owner group-owner name group-name
By default, no object list is specified for a threshold trigger test.
5. (Optional.) Specify the type of the threshold trigger test for the first sampling.
startup { falling | rising | rising-or-falling }
The default threshold trigger test type for the first sampling is rising-or-falling.
6. Specify the delta falling threshold and the falling alarm event triggered when the delta value
(difference between the current sampled value and the previous sampled value) is smaller than
or equal to the delta falling threshold.
delta falling { event owner event-owner name event-name | value
integer-value }
By default, the delta falling threshold is 0, and no falling alarm event is specified.
7. Specify the delta rising threshold and the rising alarm event triggered when the delta value is
greater than or equal to the delta rising threshold.
delta rising { event owner event-owner name event-name | value
integer-value }
By default, the delta rising threshold is 0, and no rising alarm event is specified.

8. Specify the falling threshold and the falling alarm event triggered when the sampled value is
smaller than or equal to the threshold.
falling { event owner event-owner name event-name | value
integer-value }
By default, the falling threshold is 0, and no falling alarm event is specified.
9. Specify the rising threshold and the rising alarm event triggered when the sampled value is
greater than or equal to the threshold.
rising { event owner event-owner name event-name | value
integer-value }
By default, the rising threshold is 0, and no rising alarm event is specified.
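Because the configuration examples later in this chapter cover only existence and Boolean tests, the following sketch illustrates a threshold test. The owner owner1, trigger name triggerB, event eventA, threshold values, and the monitored object ifInOctets for interface index 1 (OID 1.3.6.1.2.1.2.2.1.10.1) are all example values, and the view prompts are indicative only:
<Sysname> system-view
# Create trigger triggerB, sample the object every 60 seconds, and use delta sampling.
[Sysname] snmp mib event trigger owner owner1 name triggerB
[Sysname-trigger-owner1-triggerB] frequency 60
[Sysname-trigger-owner1-triggerB] sample delta
[Sysname-trigger-owner1-triggerB] oid 1.3.6.1.2.1.2.2.1.10.1
# Configure a threshold test that triggers event eventA when the delta sample is 1000 or more (rising) or 100 or less (falling).
[Sysname-trigger-owner1-triggerB] test threshold
[Sysname-trigger-owner1-triggerB-threshold] rising value 1000
[Sysname-trigger-owner1-triggerB-threshold] rising event owner owner1 name eventA
[Sysname-trigger-owner1-triggerB-threshold] falling value 100
[Sysname-trigger-owner1-triggerB-threshold] falling event owner owner1 name eventA
[Sysname-trigger-owner1-triggerB-threshold] quit
Trigger sampling must then be enabled with the trigger enable command, as described in "Enabling trigger sampling."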

Enabling trigger sampling


Restrictions and guidelines
Enable trigger sampling after you complete trigger parameters configuration. You cannot modify
trigger parameters after trigger sampling is enabled. To modify trigger parameters, first disable
trigger sampling.
Procedure
1. Enter system view.
system-view
2. Enter trigger view.
snmp mib event trigger owner trigger-owner name trigger-name
3. Enable trigger sampling.
trigger enable
By default, trigger sampling is disabled.

Enabling SNMP notifications for the Event MIB module
About this task
To report critical Event MIB events to an NMS, enable SNMP notifications for the Event MIB module.
For Event MIB event notifications to be sent correctly, you must also configure SNMP on the device.
For more information about SNMP configuration, see the network management and monitoring
configuration guide for the device.
Procedure
1. Enter system view.
system-view
2. Enable snmp notifications for the Event MIB module.
snmp-agent trap enable event-mib
By default, SNMP notifications are enabled for the Event MIB module.

Display and maintenance commands for Event MIB
Execute display commands in any view.

• Display Event MIB configuration and statistics.
  display snmp mib event
• Display event information.
  display snmp mib event event [ owner event-owner name event-name ]
• Display object list information.
  display snmp mib event object list [ owner group-owner name group-name ]
• Display global Event MIB configuration and statistics.
  display snmp mib event summary
• Display trigger information.
  display snmp mib event trigger [ owner trigger-owner name trigger-name ]

Event MIB configuration examples


Example: Configuring an existence trigger test
Network configuration
As shown in Figure 66, the device acts as the agent. Use the Event MIB to monitor the device. When
interface hot-swap or virtual interface creation or deletion occurs on the device, the agent sends an
mteTriggerFired notification to the NMS.
Figure 66 Network diagram: the NMS at 192.168.1.26 manages the agent over a LAN.
Procedure
1. Enable and configure the SNMP agent on the device:
# Create SNMPv3 group g3 and add SNMPv3 user owner1 to g3.
<Sysname> system-view
[Sysname] snmp-agent usm-user v3 owner1 g3
[Sysname] snmp-agent group v3 g3 read-view a write-view a notify-view a
[Sysname] snmp-agent mib-view included a iso
# Configure context contextnameA for the agent.
[Sysname] snmp-agent context contextnameA
# Enable SNMP notifications for the Event MIB module. Specify the NMS at 192.168.1.26 as
the target host for the notifications.
[Sysname] snmp-agent trap enable event-mib
[Sysname] snmp-agent target-host trap address udp-domain 192.168.1.26 params
securityname owner1 v3
2. Configure the Event MIB global sampling parameters.
# Set the Event MIB minimum sampling interval to 50 seconds.
[Sysname] snmp mib event sample minimum 50
# Set the maximum number to 100 for object instances that can be concurrently sampled.
[Sysname] snmp mib event sample instance maximum 100
3. Create and configure a trigger:
# Create a trigger and enter its view. Specify its owner as owner1 and its name as triggerA.
[Sysname] snmp mib event trigger owner owner1 name triggerA
# Set the sampling interval to 60 seconds. Make sure the sampling interval is greater than or
equal to the Event MIB minimum sampling interval.
[Sysname-trigger-owner1-triggerA] frequency 60
# Specify object OID 1.3.6.1.2.1.2.2.1.1 as the monitored object. Enable OID wildcarding.
[Sysname-trigger-owner1-triggerA] oid 1.3.6.1.2.1.2.2.1.1
[Sysname-trigger-owner1-triggerA] wildcard oid
# Configure context contextnameA for the monitored object and enable context wildcarding.
[Sysname-trigger-owner1-triggerA] context contextnameA
[Sysname-trigger-owner1-triggerA] wildcard context
# Specify the existence trigger test for the trigger.
[Sysname-trigger-owner1-triggerA] test existence
[Sysname-trigger-owner1-triggerA-existence] quit
# Enable trigger sampling.
[Sysname-trigger-owner1-triggerA] trigger enable
[Sysname-trigger-owner1-triggerA] quit

Verifying the configuration
# Display Event MIB brief information.
[Sysname] display snmp mib event summary
TriggerFailures : 0
EventFailures : 0
SampleMinimum : 50
SampleInstanceMaximum : 100
SampleInstance : 20
SampleInstancesHigh : 20
SampleInstanceLacks : 0

# Display information about the trigger with owner owner1 and name triggerA.
[Sysname] display snmp mib event trigger owner owner1 name triggerA
Trigger entry triggerA owned by owner1:
TriggerComment : N/A
TriggerTest : existence
TriggerSampleType : absoluteValue
TriggerValueID : 1.3.6.1.2.1.2.2.1.1<ifIndex>
TriggerValueIDWildcard : true
TriggerTargetTag : N/A
TriggerContextName : contextnameA
TriggerContextNameWildcard : true
TriggerFrequency(in seconds): 60
TriggerObjOwner : N/A
TriggerObjName : N/A
TriggerEnabled : true
Existence entry:
ExiTest : present | absent
ExiStartUp : present | absent
ExiObjOwner : N/A
ExiObjName : N/A
ExiEvtOwner : N/A
ExiEvtName : N/A

# Create VLAN-interface 2 on the device.
[Sysname] vlan 2
[Sysname-vlan2] quit
[Sysname] interface vlan 2

The NMS receives an mteTriggerFired notification from the device.

Example: Configuring a Boolean trigger test
Network configuration
As shown in Figure 66, the device acts as the agent. The NMS uses SNMPv3 to monitor and
manage the device. Configure a trigger and configure a Boolean trigger test for the trigger. When the
trigger condition is met, the agent sends an mteTriggerFired notification to the NMS.
Figure 67 Network diagram: the NMS at 192.168.1.26 manages the agent over a LAN.
Procedure
1. Enable and configure the SNMP agent on the device:
# Create SNMPv3 group g3 and add SNMPv3 user owner1 to g3.
<Sysname> system-view
[Sysname] snmp-agent usm-user v3 owner1 g3
[Sysname] snmp-agent group v3 g3 read-view a write-view a notify-view a
[Sysname] snmp-agent mib-view included a iso
# Enable SNMP notifications for the Event MIB module. Specify the NMS at 192.168.1.26 as
the target host for the notifications.
[Sysname] snmp-agent trap enable event-mib
[Sysname] snmp-agent target-host trap address udp-domain 192.168.1.26 params
securityname owner1 v3
2. Configure the Event MIB global sampling parameters.
# Set the Event MIB minimum sampling interval to 50 seconds.
[Sysname] snmp mib event sample minimum 50
# Set the maximum number to 100 for object instances that can be concurrently sampled.
[Sysname] snmp mib event sample instance maximum 100
3. Configure Event MIB object lists objectA, objectB, and objectC.
[Sysname] snmp mib event object list owner owner1 name objectA 1 oid
1.3.6.1.4.1.25506.2.6.1.1.1.1.6.11
[Sysname] snmp mib event object list owner owner1 name objectB 1 oid
1.3.6.1.4.1.25506.2.6.1.1.1.1.7.11
[Sysname] snmp mib event object list owner owner1 name objectC 1 oid
1.3.6.1.4.1.25506.2.6.1.1.1.1.8.11
4. Configure an event:
# Create an event and enter its view. Specify its owner as owner1 and its name as EventA.
[Sysname] snmp mib event owner owner1 name EventA
# Specify the notification action for the event. Specify object OID 1.3.6.1.4.1.25506.2.6.2.0.5
(hh3cEntityExtMemUsageThresholdNotification) to execute the notification.
[Sysname-event-owner1-EventA] action notification
[Sysname-event-owner1-EventA-notification] oid 1.3.6.1.4.1.25506.2.6.2.0.5
# Specify the object list with owner owner1 and name objectC to be added to the notification
when the notification action is triggered.
[Sysname-event-owner1-EventA-notification] object list owner owner1 name objectC
[Sysname-event-owner1-EventA-notification] quit
# Enable the event.
[Sysname-event-owner1-EventA] event enable
[Sysname-event-owner1-EventA] quit
5. Configure a trigger:
# Create a trigger and enter its view. Specify its owner as owner1 and its name as triggerA.
[Sysname] snmp mib event trigger owner owner1 name triggerA
# Set the sampling interval to 60 seconds. Make sure the interval is greater than or equal to the
global minimum sampling interval.
[Sysname-trigger-owner1-triggerA] frequency 60
# Specify object OID 1.3.6.1.4.1.25506.2.6.1.1.1.1.9.11 as the monitored object.
[Sysname-trigger-owner1-triggerA] oid 1.3.6.1.4.1.25506.2.6.1.1.1.1.9.11
# Specify the object list with owner owner1 and name objectA to be added to the notification
when the notification action is triggered.
[Sysname-trigger-owner1-triggerA] object list owner owner1 name objectA
# Configure a Boolean trigger test. Set its comparison type to greater, reference value to 10,
and specify the event with owner owner1 and name EventA, object list with owner owner1 and
name objectB for the test.
[Sysname-trigger-owner1-triggerA] test boolean
[Sysname-trigger-owner1-triggerA-boolean] comparison greater
[Sysname-trigger-owner1-triggerA-boolean] value 10
[Sysname-trigger-owner1-triggerA-boolean] event owner owner1 name EventA
[Sysname-trigger-owner1-triggerA-boolean] object list owner owner1 name objectB
[Sysname-trigger-owner1-triggerA-boolean] quit
# Enable trigger sampling.
[Sysname-trigger-owner1-triggerA] trigger enable
[Sysname-trigger-owner1-triggerA] quit

Verifying the configuration
# Display Event MIB configuration and statistics.
[Sysname] display snmp mib event summary
TriggerFailures : 0
EventFailures : 0
SampleMinimum : 50
SampleInstanceMaximum : 100
SampleInstance : 1
SampleInstancesHigh : 1
SampleInstanceLacks : 0

# Display information about the Event MIB object lists.
[Sysname] display snmp mib event object list
Object list objectA owned by owner1:
ObjIndex : 1
ObjID : 1.3.6.1.4.1.25506.2.6.1.1.1.1.6.11<hh3cEntityExt
CpuUsage.11>
ObjIDWildcard : false
Object list objectB owned by owner1:
ObjIndex : 1
ObjID : 1.3.6.1.4.1.25506.2.6.1.1.1.1.7.11<hh3cEntityExt
CpuUsageThreshold.11>
ObjIDWildcard : false
Object list objectC owned by owner1:
ObjIndex : 1
ObjID : 1.3.6.1.4.1.25506.2.6.1.1.1.1.8.11<hh3cEntityExt
MemUsage.11>
ObjIDWildcard : false

# Display information about the event.
[Sysname]display snmp mib event event owner owner1 name EventA
Event entry EventA owned by owner1:
EvtComment : N/A
EvtAction : notification
EvtEnabled : true
Notification entry:
NotifyOID : 1.3.6.1.4.1.25506.2.6.2.0.5<hh3cEntityExtMemUsag
eThresholdNotification>
NotifyObjOwner : owner1
NotifyObjName : objectC

# Display information about the trigger.
[Sysname] display snmp mib event trigger owner owner1 name triggerA
Trigger entry triggerA owned by owner1:
TriggerComment : N/A
TriggerTest : boolean
TriggerSampleType : absoluteValue
TriggerValueID : 1.3.6.1.4.1.25506.2.6.1.1.1.1.9.11<hh3cEntityExt
MemUsageThreshold.11>
TriggerValueIDWildcard : false
TriggerTargetTag : N/A
TriggerContextName : N/A
TriggerContextNameWildcard : false
TriggerFrequency(in seconds): 60
TriggerObjOwner : owner1
TriggerObjName : objectA
TriggerEnabled : true
Boolean entry:
BoolCmp : greater
BoolValue : 10
BoolStartUp : true
BoolObjOwner : owner1
BoolObjName : objectB
BoolEvtOwner : owner1
BoolEvtName : EventA

# When the value of the monitored object 1.3.6.1.4.1.25506.2.6.1.1.1.1.9.11 becomes greater than
10, the NMS receives an mteTriggerFired notification.

Example: Configuring a threshold trigger test
Network configuration
As shown in Figure 66, the device acts as the agent. The NMS uses SNMPv3 to monitor and
manage the device. Configure a trigger and configure a threshold trigger test for the trigger. When
the trigger conditions are met, the agent sends an mteTriggerFired notification to the NMS.
Figure 68 Network diagram: the NMS at 192.168.1.26 manages the agent over a LAN.
Procedure
1. Enable and configure the SNMP agent on the device:
# Create SNMPv3 group g3 and add SNMPv3 user owner1 to g3.
<Sysname> system-view
[Sysname] snmp-agent usm-user v3 owner1 g3
[Sysname] snmp-agent group v3 g3 read-view a write-view a notify-view a
[Sysname] snmp-agent mib-view included a iso
# Enable SNMP notifications for the Event MIB module. Specify the NMS at 192.168.1.26 as
the target host for the notifications.
[Sysname] snmp-agent trap enable event-mib
[Sysname] snmp-agent target-host trap address udp-domain 192.168.1.26 params
securityname owner1 v3
[Sysname] snmp-agent trap enable
2. Configure the Event MIB global sampling parameters.
# Set the Event MIB minimum sampling interval to 50 seconds.
[Sysname] snmp mib event sample minimum 50
# Set the maximum number to 10 for object instances that can be concurrently sampled.
[Sysname] snmp mib event sample instance maximum 10
3. Create and configure a trigger:
# Create a trigger and enter its view. Specify its owner as owner1 and its name as triggerA.
[Sysname] snmp mib event trigger owner owner1 name triggerA
# Set the sampling interval to 60 seconds. Make sure the interval is greater than or equal to the
Event MIB minimum sampling interval.
[Sysname-trigger-owner1-triggerA] frequency 60
# Specify object OID 1.3.6.1.4.1.25506.2.6.1.1.1.1.7.11 as the monitored object.
[Sysname-trigger-owner1-triggerA] oid 1.3.6.1.4.1.25506.2.6.1.1.1.1.7.11
# Configure a threshold trigger test. Set the rising threshold to 80 and the falling threshold
to 10 for the test.
[Sysname-trigger-owner1-triggerA] test threshold
[Sysname-trigger-owner1-triggerA-threshold] rising value 80
[Sysname-trigger-owner1-triggerA-threshold] falling value 10
[Sysname-trigger-owner1-triggerA-threshold] quit
# Enable trigger sampling.
[Sysname-trigger-owner1-triggerA] trigger enable
[Sysname-trigger-owner1-triggerA] quit

Verifying the configuration
# Display Event MIB configuration and statistics.
[Sysname] display snmp mib event summary
TriggerFailures : 0
EventFailures : 0
SampleMinimum : 50
SampleInstanceMaximum : 10
SampleInstance : 1
SampleInstancesHigh : 1
SampleInstanceLacks : 0

# Display information about the trigger.
[Sysname] display snmp mib event trigger owner owner1 name triggerA
Trigger entry triggerA owned by owner1:
TriggerComment : N/A
TriggerTest : threshold
TriggerSampleType : absoluteValue
TriggerValueID : 1.3.6.1.4.1.25506.2.6.1.1.1.1.7.11<hh3cEntityExt
CpuUsageThreshold.11>
TriggerValueIDWildcard : false
TriggerTargetTag : N/A
TriggerContextName : N/A
TriggerContextNameWildcard : false
TriggerFrequency(in seconds): 60
TriggerObjOwner : N/A
TriggerObjName : N/A
TriggerEnabled : true
Threshold entry:
ThresStartUp : risingOrFalling
ThresRising : 80
ThresFalling : 10
ThresDeltaRising : 0
ThresDeltaFalling : 0
ThresObjOwner : N/A
ThresObjName : N/A
ThresRisEvtOwner : N/A
ThresRisEvtName : N/A
ThresFalEvtOwner : N/A
ThresFalEvtName : N/A
ThresDeltaRisEvtOwner : N/A
ThresDeltaRisEvtName : N/A
ThresDeltaFalEvtOwner : N/A
ThresDeltaFalEvtName : N/A
# When the value of the monitored object 1.3.6.1.4.1.25506.2.6.1.1.1.1.7.11 becomes greater than the rising threshold 80, the NMS receives an mteTriggerFired notification.
Configuring NETCONF
About NETCONF
Network Configuration Protocol (NETCONF) is an XML-based network management protocol. It
provides programmable mechanisms to manage and configure network devices. Through
NETCONF, you can configure device parameters, retrieve parameter values, and collect statistics.
For a network that has devices from multiple vendors, you can develop a NETCONF-based NMS system to
configure and manage devices in a simple and effective way.

NETCONF structure
NETCONF has the following layers: content layer, operations layer, RPC layer, and transport
protocol layer.
Table 9 NETCONF layers and XML layers

NETCONF layer: Content
XML layer: Configuration data, status data, and statistics
Description: Contains a set of managed objects, which can be configuration data, status data, and statistics. For information about the operable data, see the NETCONF XML API reference for the device.

NETCONF layer: Operations
XML layer: <get>, <get-config>, <edit-config>…
Description: Defines a set of base operations invoked as RPC methods with XML-encoded parameters. NETCONF base operations include data retrieval operations, configuration operations, lock operations, and session operations. For information about operations supported on the device, see "Supported NETCONF operations."

NETCONF layer: RPC
XML layer: <rpc> and <rpc-reply>
Description: Provides a simple, transport-independent framing mechanism for encoding RPCs. The <rpc> and <rpc-reply> elements are used to enclose NETCONF requests and responses (data at the operations layer and the content layer).

NETCONF layer: Transport protocol
XML layer: In non-FIPS mode: Console, Telnet, SSH, HTTP, HTTPS, and TLS. In FIPS mode: Console, SSH, HTTPS, and TLS.
Description: Provides reliable, connection-oriented, serial data links.
The following transport layer sessions are available in non-FIPS mode:
• CLI sessions, including NETCONF over Telnet sessions, NETCONF over SSH sessions, and NETCONF over console sessions.
• NETCONF over SOAP sessions, including NETCONF over SOAP over HTTP sessions and NETCONF over SOAP over HTTPS sessions.
The following transport layer sessions are available in FIPS mode:
• CLI sessions, including NETCONF over SSH sessions and NETCONF over console sessions.
• NETCONF over SOAP over HTTPS sessions.

NETCONF message format
NETCONF
All NETCONF messages are XML-based and comply with RFC 4741. An incoming NETCONF
message must pass XML schema check before it can be processed. If a NETCONF message fails
XML schema check, the device sends an error message to the client.
For information about the NETCONF operations supported by the device and the operable data, see
the NETCONF XML API reference for the device.
The following example shows a NETCONF message for getting all parameters of all interfaces on
the device:
<?xml version="1.0" encoding="utf-8"?>
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get-bulk>
<filter type="subtree">
<top xmlns="http://www.hp.com/netconf/data:1.0">
<Ifmgr>
<Interfaces>
<Interface/>
</Interfaces>
</Ifmgr>
</top>
</filter>
</get-bulk>
</rpc>

NETCONF over SOAP
All NETCONF over SOAP messages are XML-based and comply with RFC 4741. NETCONF
messages are contained in the <Body> element of SOAP messages. NETCONF over SOAP
messages also comply with the following rules:
• SOAP messages must use the SOAP Envelope namespaces.
• SOAP messages must use the SOAP Encoding namespaces.
• SOAP messages cannot contain the following information:
 DTD reference.
 XML processing instructions.
The following example shows a NETCONF over SOAP message for getting all parameters of all
interfaces on the device:
<env:Envelope xmlns:env="http://www.w3.org/2003/05/soap-envelope">
<env:Header>
<auth:Authentication env:mustUnderstand="1"
xmlns:auth="http://www.hp.com/netconf/base:1.0">
<auth:AuthInfo>800207F0120020C</auth:AuthInfo>
</auth:Authentication>
</env:Header>
<env:Body>
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get-bulk>
<filter type="subtree">
<top xmlns="http://www.hp.com/netconf/data:1.0">
<Ifmgr>
<Interfaces>
<Interface/>
</Interfaces>
</Ifmgr>
</top>
</filter>
</get-bulk>
</rpc>
</env:Body>
</env:Envelope>

How to use NETCONF
You can use NETCONF to manage and configure the device by using the methods in Table 10.
Table 10 NETCONF methods for configuring the device

Configuration tool: CLI
Login method: Console port, SSH, or Telnet
Remarks: To perform NETCONF operations, copy valid NETCONF messages to the CLI in XML view.

Configuration tool: Custom user interface
Login method: N/A
Remarks: This method requires a NETCONF over SOAP session or NETCONF over SSH session with the device. To use a NETCONF over SOAP session, you must enable NETCONF over SOAP. Then, NETCONF messages will be encapsulated in SOAP messages for transmission.

Protocols and standards
• RFC 3339, Date and Time on the Internet: Timestamps
• RFC 4741, NETCONF Configuration Protocol
• RFC 4742, Using the NETCONF Configuration Protocol over Secure SHell (SSH)
• RFC 4743, Using NETCONF over the Simple Object Access Protocol (SOAP)
• RFC 5277, NETCONF Event Notifications
• RFC 5381, Experience of Implementing NETCONF over SOAP
• RFC 5539, NETCONF over Transport Layer Security (TLS)
• RFC 6241, Network Configuration Protocol (NETCONF)

FIPS compliance
The device supports the FIPS mode that complies with NIST FIPS 140-2 requirements. Support for
features, commands, and parameters might differ in FIPS mode (see Security Configuration Guide)
and non-FIPS mode.

NETCONF tasks at a glance
To configure NETCONF, perform the following tasks:
1. Establishing a NETCONF session
a. (Optional.) Setting NETCONF session attributes
b. Establishing NETCONF over SOAP sessions
c. Establishing NETCONF over SSH sessions
d. Establishing NETCONF over Telnet or NETCONF over console sessions
e. Exchanging capabilities
2. (Optional.) Retrieving device configuration information
 Retrieving device configuration and state information
 Retrieving non-default settings
 Retrieving NETCONF information
 Retrieving YANG file content
 Retrieving NETCONF session information
3. (Optional.) Filtering data
• Table-based filtering
• Column-based filtering
4. (Optional.) Locking or unlocking the running configuration
a. Locking the running configuration
b. Unlocking the running configuration
5. (Optional.) Modifying the configuration
6. (Optional.) Managing configuration files
 Saving the running configuration
 Loading the configuration
 Rolling back the configuration
7. (Optional.) Performing CLI operations through NETCONF
8. (Optional.) Subscribing to events
 Subscribing to syslog events
 Subscribing to events monitored by NETCONF
 Subscribing to events reported by modules
 Canceling an event subscription
9. (Optional.) Terminating NETCONF sessions
10. (Optional.) Returning to the CLI

Establishing a NETCONF session
Restrictions and guidelines for NETCONF session
establishment
After a NETCONF session is established, the device automatically sends its capabilities to the client.
You must send the capabilities of the client to the device before you can perform any other
NETCONF operations.
Before performing a NETCONF operation, make sure no other users are configuring or managing
the device. If multiple users simultaneously configure or manage the device, the result might be
different from what you expect.
You can use the aaa session-limit command to set the maximum number of NETCONF
sessions that the device can support. If the upper limit is reached, new NETCONF users cannot
access the device. For information about this command, see AAA in Security Configuration Guide.

Setting NETCONF session attributes
About this task
NETCONF supports the following types of namespaces:
• Common namespace—The common namespace is shared by all modules. In a packet that
uses the common namespace, the namespace is indicated in the <top> element, and the
modules are listed under the <top> element.
Example:
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get-bulk>
<filter type="subtree">
<top xmlns="http://www.hp.com/netconf/data:1.0">
<Ifmgr>
<Interfaces>
</Interfaces>
</Ifmgr>
</top>
</filter>
</get-bulk>
</rpc>
• Module-specific namespace—Each module has its own namespace. A packet that uses a
module-specific namespace does not have the <top> element. The namespace follows the
module name.
Example:
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get-bulk>
<filter type="subtree">
<Ifmgr xmlns="http://www.hp.com/netconf/data:1.0-Ifmgr">
<Interfaces>
</Interfaces>
</Ifmgr>
</filter>
</get-bulk>
</rpc>

The common namespace is incompatible with module-specific namespaces. To set up a NETCONF
session, the device and the client must use the same type of namespaces. By default, the common
namespace is used. If the client does not support the common namespace, use this feature to
configure the device to use module-specific namespaces.
Procedure
1. Enter system view.
system-view
2. Set the NETCONF session idle timeout time.
netconf { agent | soap } idle-timeout minute

Keyword Description
Specifies the following sessions:
• NETCONF over SSH sessions.
agent • NETCONF over Telnet sessions.
• NETCONF over console sessions.
By default, the idle timeout time is 0, and the sessions never time out.

201
Keyword Description
Specifies the following sessions:
• NETCONF over SOAP over HTTP sessions.
soap
• NETCONF over SOAP over HTTPS sessions.
The default setting is 10 minutes.

3. Enable NETCONF logging.
netconf log source { all | { agent | soap } * } { protocol-operation { all
| { action | config | get | set | session | syntax | others } * } |
row-operation | verbose }
By default, the device logs only row operations for <action> and <edit-config> operations.
NETCONF logging is disabled for all the other types of operations.
4. Configure NETCONF to use module-specific namespaces.
netconf capability specific-namespace
By default, the common namespace is used.
For the setting to take effect, you must reestablish the NETCONF session.
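A minimal sketch combining these settings is shown below. The 30-minute idle timeout is an illustrative value; the logging and namespace commands follow the syntax in the steps above:
[Sysname] netconf agent idle-timeout 30
[Sysname] netconf log source agent protocol-operation all
[Sysname] netconf capability specific-namespace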

Establishing NETCONF over SOAP sessions
About this task
You can use a custom user interface to establish a NETCONF over SOAP session to the device and
perform NETCONF operations. NETCONF over SOAP encapsulates NETCONF messages into
SOAP messages and transmits the SOAP messages over HTTP or HTTPS.
Restrictions and guidelines
You can add an authentication domain to the <UserName> parameter of a SOAP request. The
authentication domain takes effect only on the current request.
The mandatory authentication domain configured by using the netconf soap domain command
takes precedence over the authentication domain specified in the <UserName> parameter of a
SOAP request.
Procedure
1. Enter system view.
system-view
2. Enable NETCONF over SOAP.
In non-FIPS mode:
netconf soap { http | https } enable
In FIPS mode:
netconf soap https enable
By default, the NETCONF over SOAP feature is disabled.
3. Set the DSCP value for NETCONF over SOAP packets.
In non-FIPS mode:
netconf soap { http | https } dscp dscp-value
In FIPS mode:
netconf soap https dscp dscp-value
By default, the DSCP value is 0 for NETCONF over SOAP packets.
4. Use an IPv4 ACL to control NETCONF over SOAP access.
In non-FIPS mode:
netconf soap { http | https } acl { ipv4-acl-number | name
ipv4-acl-name }
In FIPS mode:
netconf soap https acl { ipv4-acl-number | name ipv4-acl-name }
By default, no IPv4 ACL is applied to control NETCONF over SOAP access.
Only clients permitted by the IPv4 ACL can establish NETCONF over SOAP sessions.
5. Specify a mandatory authentication domain for NETCONF users.
netconf soap domain domain-name
By default, no mandatory authentication domain is specified for NETCONF users. For
information about authentication domains, see Security Configuration Guide.
6. Use the custom user interface to establish a NETCONF over SOAP session with the device.
For information about the custom user interface, see the user guide for the interface.
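As a minimal sketch for non-FIPS mode, the following commands enable NETCONF over SOAP over HTTPS and restrict access. The DSCP value 10, ACL 2001, and authentication domain abc are illustrative and assume that the ACL and domain already exist on the device:
[Sysname] netconf soap https enable
[Sysname] netconf soap https dscp 10
[Sysname] netconf soap https acl 2001
[Sysname] netconf soap domain abc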

Establishing NETCONF over SSH sessions
Prerequisites
Before establishing a NETCONF over SSH session, make sure the custom user interface can
access the device through SSH.
Procedure
1. Enter system view.
system-view
2. Enable NETCONF over SSH.
netconf ssh server enable
By default, NETCONF over SSH is disabled.
3. Specify the listening port for NETCONF over SSH packets.
netconf ssh server port port-number
By default, the listening port number is 830.
4. Use the custom user interface to establish a NETCONF over SSH session with the device. For
information about the custom user interface, see the user guide for the interface.
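As a minimal sketch, the following commands enable NETCONF over SSH. Port 830 is the default listening port and is shown only for illustration; SSH access to the device must already be configured:
[Sysname] netconf ssh server enable
[Sysname] netconf ssh server port 830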

Establishing NETCONF over Telnet or NETCONF over console sessions
Restrictions and guidelines
To ensure the format correctness of a NETCONF message, do not enter the message manually.
Copy and paste the message.
While the device is performing a NETCONF operation, do not perform any other operations, such as
pasting a NETCONF message or pressing Enter.
For the device to identify NETCONF messages, you must add end mark ]]>]]> at the end of each
NETCONF message. Examples in this document do not necessarily have this end mark. Do add the
end mark in actual operations.
Prerequisites
To establish a NETCONF over Telnet session or a NETCONF over console session, first log in to the
device through Telnet or the console port.
Procedure
To enter XML view, execute the following command in user view:
xml
If the XML view prompt appears, the NETCONF over Telnet session or NETCONF over console
session is established successfully.

Exchanging capabilities
About this task
After a NETCONF session is established, the device sends its capabilities to the client. You must use
a hello message to send the capabilities of the client to the device before you can perform any other
NETCONF operations.
Hello message from the device to the client
<?xml version="1.0" encoding="UTF-8"?><hello
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"><capabilities><capability>urn:ietf:pa
rams:netconf:base:1.1</capability><capability>urn:ietf:params:netconf:writable-runnin
g</capability><capability>urn:ietf:params:netconf:capability:notification:1.0</capabi
lity><capability>urn:ietf:params:netconf:capability:validate:1.1</capability><capabil
ity>urn:ietf:params:netconf:capability:interleave:1.0</capability><capability>urn:hp:
params:netconf:capability:hp-netconf-ext:1.0</capability></capabilities><session-id>1
</session-id></hello>]]>]]>

The <capabilities> element carries the capabilities supported by the device. The supported
capabilities vary by device model.
The <session-id> element carries the unique ID assigned to the NETCONF session.
Hello message from the client to the device
After receiving the hello message from the device, copy the following hello message to notify the
device of the capabilities supported by the client:
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<capabilities>
<capability>
capability-set
</capability>
</capabilities>
</hello>

Item: capability-set
Description: Specifies a set of capabilities supported by the client. Use the <capability> and </capability> tags to enclose each user-defined capability set.
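For example, the following hello message, which is also used in the configuration examples later in this document, declares only the base NETCONF capability:
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<capabilities>
<capability>urn:ietf:params:netconf:base:1.0</capability>
</capabilities>
</hello>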

Retrieving device configuration information
Restrictions and guidelines for device configuration retrieval
During a <get>, <get-bulk>, <get-config>, or <get-bulk-config> operation, NETCONF replaces
unidentifiable characters in the retrieved data with question marks (?) before sending the data to the
client. If the process for a relevant module is not started yet, the operation returns the following
message:
<?xml version="1.0"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<data/>
</rpc-reply>

The <get><netconf-state/></get> operation does not support data filtering.
For more information about the NETCONF operations, see the NETCONF XML API references for
the device.

Retrieving device configuration and state information
You can use the following NETCONF operations to retrieve device configuration and state
information:
• <get> operation—Retrieves all device configuration and state information that matches the
specified conditions.
• <get-bulk> operation—Retrieves data entries starting from the data entry next to the one with
the specified index. One data entry contains a device configuration entry and a state
information entry. The returned output does not include the index information.
The <get> message and <get-bulk> message share the following format:
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<getoperation>
<filter>
<top xmlns="http://www.hp.com/netconf/data:1.0">
Specify the module, submodule, table name, and column name
</top>
</filter>
</getoperation>
</rpc>

Item: getoperation
Description: Operation name, get or get-bulk.

Item: filter
Description: Specifies the filtering conditions, such as the module name, submodule name, table name, and column name.
• If you specify a module name, the operation retrieves the data for the specified module. If you do not specify a module name, the operation retrieves the data for all modules.
• If you specify a submodule name, the operation retrieves the data for the specified submodule. If you do not specify a submodule name, the operation retrieves the data for all submodules.
• If you specify a table name, the operation retrieves the data for the specified table. If you do not specify a table name, the operation retrieves the data for all tables.
• If you specify only the index column, the operation retrieves the data for all columns. If you specify the index column and any other columns, the operation retrieves the data for the index column and the specified columns.

A <get-bulk> message can carry the count and index attributes.
Item: index
Description: Specifies the index. If you do not specify this item, the index value starts with 1 by default.

Item: count
Description: Specifies the data entry quantity. The count attribute complies with the following rules:
• The count attribute can be placed in the module node and table node. In other nodes, it cannot be resolved.
• When the count attribute is placed in the module node, a descendant node inherits this count attribute if the descendant node does not contain the count attribute.
• The <get-bulk> operation retrieves all the rest data entries starting from the data entry next to the one with the specified index if either of the following conditions occurs:
 You do not specify the count attribute.
 The number of matching data entries is less than the value of the count attribute.

The following <get-bulk> message example specifies the count and index attributes:
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
xmlns:xc="http://www.hp.com/netconf/base:1.0">
<get-bulk>
<filter type="subtree">
<top xmlns="http://www.hp.com/netconf/data:1.0"
xmlns:base="http://www.hp.com/netconf/base:1.0">
<Syslog>
<Logs xc:count="5">
<Log>
<Index>10</Index>
</Log>
</Logs>
</Syslog>
</top>
</filter>
</get-bulk>
</rpc>

When retrieving interface information, the device cannot identify whether an integer value for the
<IfIndex> element represents an interface name or index. When retrieving VPN instance information,
the device cannot identify whether an integer value for the <vrfindex> element represents a VPN
name or index. To resolve the issue, you can use the valuetype attribute to specify the value type.
The valuetype attribute has the following values:

Value: name
Description: The element is carrying a name.

Value: index
Description: The element is carrying an index.

Value: auto
Description: Default value. The device uses the value of the element as a name for information matching. If no match is found, the device uses the value as an index for information matching.

The following example specifies an index-type value for the <IfIndex> element:
<rpc xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<getoperation>
<filter>
<top xmlns="http://www.hp.com/netconf/config:1.0"
xmlns:base="http://www.hp.com/netconf/base:1.0">
<VLAN>
<TrunkInterfaces>
<Interface>
<IfIndex base:valuetype="index">1</IfIndex>
</Interface>
</TrunkInterfaces>
</VLAN>
</top>
</filter >
</getoperation>
</rpc>

If the <get> or <get-bulk> operation succeeds, the device returns the retrieved data in the following
format:
<?xml version="1.0"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<data>
Device state and configuration data
</data>
</rpc-reply>

Retrieving non-default settings
The <get-config> and <get-bulk-config> operations are used to retrieve all non-default settings. The
<get-config> and <get-bulk-config> messages can contain the <filter> element for filtering data.
The <get-config> and <get-bulk-config> messages are similar. The following is a <get-config>
message example:
<?xml version="1.0"?>
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get-config>
<source>
<running/>
</source>
<filter>
<top xmlns="http://www.hp.com/netconf/config:1.0">
Specify the module name, submodule name, table name, and column name
</top>
</filter>
</get-config>
</rpc>

If the <get-config> or <get-bulk-config> operation succeeds, the device returns the retrieved data in
the following format:
<?xml version="1.0"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<data>
Data matching the specified filter
</data>
</rpc-reply>

Retrieving NETCONF information
Use the <get><netconf-state/></get> message to retrieve NETCONF information.
# Copy the following text to the client to retrieve NETCONF information:
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="m-641" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get>
<filter type='subtree'>
<netconf-state xmlns='urn:ietf:params:xml:ns:yang:ietf-netconf-monitoring'>
<getType/>
</netconf-state>
</filter>
</get>
</rpc>

If you do not specify a value for getType, the retrieval operation retrieves all NETCONF information.
The value for getType can be one of the following operations:

Operation       Description
capabilities    Retrieves device capabilities.
datastores      Retrieves databases from the device.
schemas         Retrieves the list of the YANG file names from the device.
sessions        Retrieves session information from the device.
statistics      Retrieves NETCONF statistics.

If the <get><netconf-state/></get> operation succeeds, the device returns a response in the
following format:
<?xml version="1.0"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<data>
Retrieved NETCONF information
</data>
</rpc-reply>
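For example, to retrieve only NETCONF session information, replace <getType/> with <sessions/>. The following request is a sketch of this usage; the message ID is arbitrary:
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get>
<filter type='subtree'>
<netconf-state xmlns='urn:ietf:params:xml:ns:yang:ietf-netconf-monitoring'>
<sessions/>
</netconf-state>
</filter>
</get>
</rpc>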

Retrieving YANG file content
YANG files save the NETCONF operations supported by the device. A user can know the supported
operations by retrieving and analyzing the content of YANG files.
YANG files are integrated in the device software and are named in the format of
yang_identifier@yang_version.yang. You cannot view the YANG file names by executing the dir
command. For information about how to retrieve the YANG file names, see "Retrieving NETCONF
information."
# Copy the following text to the client to retrieve the YANG file named
syslog-data@2019-01-01.yang:
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get-schema xmlns='urn:ietf:params:xml:ns:yang:ietf-netconf-monitoring'>
<identifier>syslog-data</identifier>
<version>2019-01-01</version>
<format>yang</format>
</get-schema>
</rpc>

If the <get-schema> operation succeeds, the device returns a response in the following format:
<?xml version="1.0"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<data>
Content of the specified YANG file
</data>
</rpc-reply>

Retrieving NETCONF session information
Use the <get-sessions> operation to retrieve NETCONF session information of the device.
# Copy the following message to the client to retrieve NETCONF session information from the
device:
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get-sessions/>
</rpc>

If the <get-sessions> operation succeeds, the device returns a response in the following format:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get-sessions>
<Session>
<SessionID>Configuration session ID</SessionID>
<Line>Line information</Line>
<UserName>Name of the user creating the session</UserName>
<Since>Time when the session was created</Since>
<LockHeld>Whether the session holds a lock</LockHeld>
</Session>
</get-sessions>
</rpc-reply>

Example: Retrieving a data entry for the interface table
Network configuration
Retrieve a data entry for the interface table.
Procedure
# Enter XML view.
<Sysname> xml

# Notify the device of the NETCONF capabilities supported on the client.
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<capabilities>
<capability>urn:ietf:params:netconf:base:1.0</capability>
</capabilities>
</hello>

# Retrieve a data entry for the interface table.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
xmlns:web="urn:ietf:params:xml:ns:netconf:base:1.0">
<get-bulk>
<filter type="subtree">
<top xmlns="http://www.hp.com/netconf/data:1.0"
xmlns:web="http://www.hp.com/netconf/base:1.0">
<Ifmgr>
<Interfaces web:count="1">
</Interfaces>
</Ifmgr>
</top>
</filter>
</get-bulk>
</rpc>

Verifying the configuration
If the client receives the following text, the <get-bulk> operation is successful:
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
xmlns:web="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="100">
<data>
<top xmlns="http://www.hp.com/netconf/data:1.0">
<Ifmgr>
<Interfaces>
<Interface>
<IfIndex>3</IfIndex>
<Name>GigabitEthernet1/0/2</Name>
<AbbreviatedName>GE1/0/2</AbbreviatedName>
<PortIndex>3</PortIndex>
<ifTypeExt>22</ifTypeExt>
<ifType>6</ifType>
<Description>GigabitEthernet1/0/2 Interface</Description>
<AdminStatus>2</AdminStatus>
<OperStatus>2</OperStatus>
<ConfigSpeed>0</ConfigSpeed>
<ActualSpeed>100000</ActualSpeed>
<ConfigDuplex>3</ConfigDuplex>
<ActualDuplex>1</ActualDuplex>
</Interface>
</Interfaces>
</Ifmgr>
</top>
</data>
</rpc-reply>
Example: Retrieving non-default configuration data
Network configuration
Retrieve all non-default configuration data.
Procedure
# Enter XML view.
<Sysname> xml

# Notify the device of the NETCONF capabilities supported on the client.
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<capabilities>
<capability>
urn:ietf:params:netconf:base:1.0
</capability>
</capabilities>
</hello>

# Retrieve all non-default configuration data.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get-config>
<source>
<running/>
</source>
</get-config>
</rpc>

Verifying the configuration
If the client receives the following text, the <get-config> operation is successful:
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
xmlns:web="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="100">
<data>
<top xmlns="http://www.hp.com/netconf/config:1.0">
<Ifmgr>
<Interfaces>
<Interface>
<IfIndex>1307</IfIndex>
<Shutdown>1</Shutdown>
</Interface>
<Interface>
<IfIndex>1308</IfIndex>
<Shutdown>1</Shutdown>
</Interface>
<Interface>
<IfIndex>1309</IfIndex>
<Shutdown>1</Shutdown>
</Interface>
<Interface>
<IfIndex>1311</IfIndex>
<VlanType>2</VlanType>
</Interface>
<Interface>
<IfIndex>1313</IfIndex>
<VlanType>2</VlanType>
</Interface>
</Interfaces>
</Ifmgr>
<Syslog>
<LogBuffer>
<BufferSize>120</BufferSize>
</LogBuffer>
</Syslog>
<System>
<Device>
<SysName>Sysname</SysName>
<TimeZone>
<Zone>+11:44</Zone>
<ZoneName>beijing</ZoneName>
</TimeZone>
</Device>
</System>
</top>
</data>
</rpc-reply>

Example: Retrieving syslog configuration data
Network configuration
Retrieve configuration data for the Syslog module.
Procedure
# Enter XML view.
<Sysname> xml

# Notify the device of the NETCONF capabilities supported on the client.
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<capabilities>
<capability>
urn:ietf:params:netconf:base:1.0
</capability>
</capabilities>
</hello>

# Retrieve configuration data for the Syslog module.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get-config>
<source>
<running/>
</source>
<filter type="subtree">
<top xmlns="http://www.hp.com/netconf/config:1.0">
<Syslog/>
</top>
</filter>
</get-config>
</rpc>

Verifying the configuration
If the client receives the following text, the <get-config> operation is successful:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="100">
<data>
<top xmlns="http://www.hp.com/netconf/config:1.0">
<Syslog>
<LogBuffer>
<BufferSize>120</BufferSize>
</LogBuffer>
</Syslog>
</top>
</data>
</rpc-reply>

Example: Retrieving NETCONF session information
Network configuration
Get NETCONF session information.
Procedure
# Enter XML view.
<Sysname> xml

# Copy the following message to the client to exchange capabilities with the device:
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<capabilities>
<capability>
urn:ietf:params:netconf:base:1.0
</capability>
</capabilities>
</hello>

# Copy the following message to the client to get the current NETCONF session information on the
device:
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get-sessions/>
</rpc>

Verifying the configuration
If the client receives a message as follows, the operation is successful:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="100">
<get-sessions>
<Session>
<SessionID>1</SessionID>
<Line>vty0</Line>
<UserName></UserName>
<Since>2019-01-07T00:24:57</Since>
<LockHeld>false</LockHeld>
</Session>
</get-sessions>
</rpc-reply>

The output shows the following information:
• The session ID of an existing NETCONF session is 1.
• The user logged in through user line vty0.
• The login time is 2019-01-07T00:24:57.
• The user does not hold the lock of the configuration.

Filtering data
About data filtering
You can define a filter to filter information when you perform a <get>, <get-bulk>, <get-config>, or
<get-bulk-config> operation. Data filtering includes the following types:
• Table-based filtering—Filters table information.
• Column-based filtering—Filters information for a single column.

Restrictions and guidelines for data filtering
For table-based filtering to take effect, you must configure table-based filtering before column-based
filtering.

Table-based filtering
About this task
The namespace is http://www.hp.com/netconf/base:1.0. The attribute name is filter. For
information about the support for table-based match, see the NETCONF XML API references.
# Copy the following text to the client to retrieve the longest data with IP address 1.1.1.0 and mask
length 24 from the IPv4 routing table:
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
xmlns:hp="http://www.hp.com/netconf/base:1.0">
<get>
<filter type="subtree">
<top xmlns="http://www.hp.com/netconf/data:1.0">
<Route>
<Ipv4Routes>
<RouteEntry hp:filter="IP 1.1.1.0 MaskLen 24 longer"/>
</Ipv4Routes>
</Route>
</top>
</filter>
</get>
</rpc>

Restrictions and guidelines
To use table-based filtering, specify a match criterion for the filter row attribute.

Column-based filtering
About this task
Column-based filtering includes full match filtering, regular expression match filtering, and
conditional match filtering. Full match filtering has the highest priority and conditional match filtering
has the lowest priority. When more than one filtering criterion is specified, the one with the highest
priority takes effect.
Full match filtering
You can specify an element value in an XML message to implement full match filtering. If multiple
element values are provided, the system returns the data that matches all the specified values.
# Copy the following text to the client to retrieve configuration data of all interfaces in UP state:
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get>
<filter type="subtree">
<top xmlns="http://www.hp.com/netconf/data:1.0">
<Ifmgr>
<Interfaces>
<Interface>
<AdminStatus>1</AdminStatus>
</Interface>
</Interfaces>
</Ifmgr>
</top>
</filter>
</get>
</rpc>

You can also specify an attribute name that is the same as a column name of the current table at the
row to implement full match filtering. The system returns only configuration data that matches this
attribute name. The XML message equivalent to the above element-value-based full match filtering
is as follows:
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get>
<filter type="subtree">
<top
xmlns="http://www.hp.com/netconf/data:1.0" xmlns:data="http://www.hp.com/netconf/data:
1.0">
<Ifmgr>
<Interfaces>
<Interface data:AdminStatus="1"/>
</Interfaces>
</Ifmgr>
</top>
</filter>
</get>
</rpc>

The above examples show that both element-value-based full match filtering and
attribute-name-based full match filtering can retrieve the same index and column information for all
interfaces in up state.
Regular expression match filtering
To implement a complex data filtering with characters, you can add a regExp attribute for a specific
element.
The supported data types include integer, date and time, character string, IPv4 address, IPv4 mask,
IPv6 address, MAC address, OID, and time zone.
# Copy the following text to the client to retrieve the descriptions of interfaces, of which all the
characters must be upper-case letters from A to Z:
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
xmlns:hp="http://www.hp.com/netconf/base:1.0">
<get-config>
<source>
<running/>
</source>
<filter type="subtree">
<top xmlns="http://www.hp.com/netconf/config:1.0">
<Ifmgr>
<Interfaces>
<Interface>
<Description hp:regExp="^[A-Z]*$"/>
</Interface>
</Interfaces>
</Ifmgr>
</top>
</filter>
</get-config>
</rpc>

Conditional match filtering
To implement complex data filtering with digits and character strings, you can add a match
attribute for a specific element. Table 11 lists the conditional match operators.
Table 11 Conditional match operators
Operation       Operator                     Remarks
More than       match="more:value"           More than the specified value. The supported data types include date, digit, and character string.
Less than       match="less:value"           Less than the specified value. The supported data types include date, digit, and character string.
Not less than   match="notLess:value"        Not less than the specified value. The supported data types include date, digit, and character string.
Not more than   match="notMore:value"        Not more than the specified value. The supported data types include date, digit, and character string.
Equal           match="equal:value"          Equal to the specified value. The supported data types include date, digit, character string, OID, and BOOL.
Not equal       match="notEqual:value"       Not equal to the specified value. The supported data types include date, digit, character string, OID, and BOOL.
Include         match="include:string"       Includes the specified string. The supported data types include only character string.
Not include     match="exclude:string"       Excludes the specified string. The supported data types include only character string.
Start with      match="startWith:string"     Starts with the specified string. The supported data types include character string and OID.
End with        match="endWith:string"       Ends with the specified string. The supported data types include only character string.

# Copy the following text to the client to retrieve extension information about the entity whose CPU
usage is more than 50%:
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
xmlns:hp="http://www.hp.com/netconf/base:1.0">
<get>
<filter type="subtree">
<top xmlns="http://www.hp.com/netconf/data:1.0">
<Device>
<ExtPhysicalEntities>
<Entity>
<CpuUsage hp:match="more:50"></CpuUsage>
</Entity>
</ExtPhysicalEntities>
</Device>
</top>
</filter>
</get>
</rpc>

Example: Filtering data with regular expression match
Network configuration
Retrieve all data including Gig in the Description column of the Interfaces table under the Ifmgr
module.
Procedure
# Enter XML view.
<Sysname> xml

# Notify the device of the NETCONF capabilities supported on the client.
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<capabilities>
<capability>
urn:ietf:params:netconf:base:1.0
</capability>
</capabilities>
</hello>

# Retrieve all data including Gig in the Description column of the Interfaces table under the Ifmgr
module.
<?xml version="1.0"?>
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
xmlns:hp="http://www.hp.com/netconf/base:1.0">
<get>
<filter type="subtree">
<top xmlns="http://www.hp.com/netconf/data:1.0">
<Ifmgr>
<Interfaces>
<Interface>
<Description hp:regExp="(Gig)+"/>
</Interface>
</Interfaces>
</Ifmgr>
</top>
</filter>
</get>
</rpc>

Verifying the configuration
If the client receives the following text, the operation is successful:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
xmlns:hp="http://www.hp.com/netconf/base:1.0" message-id="100">
<data>
<top xmlns="http://www.hp.com/netconf/data:1.0">
<Ifmgr>
<Interfaces>
<Interface>
<IfIndex>2681</IfIndex>
<Description>GigabitEthernet1/0/1 Interface</Description>
</Interface>
<Interface>
<IfIndex>2685</IfIndex>
<Description>GigabitEthernet1/0/2 Interface</Description>
</Interface>
<Interface>
<IfIndex>2689</IfIndex>
<Description>GigabitEthernet1/0/3 Interface</Description>
</Interface>
</Interfaces>
</Ifmgr>
</top>
</data>
</rpc-reply>
Example: Filtering data by conditional match
Network configuration
Retrieve data in the Name column with the ifindex value not less than 5000 in the Interfaces table
under the Ifmgr module.
Procedure
# Enter XML view.
<Sysname> xml

# Notify the device of the NETCONF capabilities supported on the client.
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<capabilities>
<capability>
urn:ietf:params:netconf:base:1.0
</capability>
</capabilities>
</hello>

# Retrieve data in the Name column with the ifindex value not less than 5000 in the Interfaces table
under the Ifmgr module.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
xmlns:hp="http://www.hp.com/netconf/base:1.0">
<get>
<filter type="subtree">
<top xmlns="http://www.hp.com/netconf/data:1.0">
<Ifmgr>
<Interfaces>
<Interface>
<IfIndex hp:match="notLess:5000"/>
<Name/>
</Interface>
</Interfaces>
</Ifmgr>
</top>
</filter>
</get>
</rpc>

Verifying the configuration
If the client receives the following text, the operation is successful:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
xmlns:hp="http://www.hp.com/netconf/base:1.0" message-id="100">
<data>
<top xmlns="http://www.hp.com/netconf/data:1.0">
<Ifmgr>
<Interfaces>
<Interface>
<IfIndex>7241</IfIndex>
<Name>NULL0</Name>
</Interface>
</Interfaces>
</Ifmgr>
</top>
</data>
</rpc-reply>

Locking or unlocking the running configuration
About configuration locking and unlocking
Multiple methods are available for configuring the device, such as CLI, NETCONF, and SNMP.
Before configuring, managing, or troubleshooting the device, you can lock the configuration to
prevent other users from changing the device configuration. After you lock the configuration, only
you can perform <edit-config> operations to change the configuration or unlock the configuration.
Other users can only read the configuration.
If you close your NETCONF session, the system unlocks the configuration. You can also manually
unlock the configuration.

Restrictions and guidelines for configuration locking and unlocking
The <lock> operation locks the running configuration of the device. You cannot use it to lock the
configuration for a specific module.

Locking the running configuration
# Copy the following text to the client to lock the running configuration:
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<lock>
<target>
<running/>
</target>
</lock>
</rpc>

If the <lock> operation succeeds, the device returns a response in the following format:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>

Unlocking the running configuration
# Copy the following text to the client to unlock the running configuration:
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<unlock>
<target>
<running/>
</target>
</unlock>
</rpc>

If the <unlock> operation succeeds, the device returns a response in the following format:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>

Example: Locking the running configuration
Network configuration
Lock the device configuration so other users cannot change the device configuration.
Procedure
# Enter XML view.
<Sysname> xml

# Notify the device of the NETCONF capabilities supported on the client.
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<capabilities>
<capability>
urn:ietf:params:netconf:base:1.0
</capability>
</capabilities>
</hello>

# Lock the configuration.
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<lock>
<target>
<running/>
</target>
</lock>
</rpc>

Verifying the configuration
If the client receives the following response, the <lock> operation is successful:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>

If another client sends a lock request, the device returns the following response:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<rpc-error>
<error-type>protocol</error-type>
<error-tag>lock-denied</error-tag>
<error-severity>error</error-severity>
<error-message xml:lang="en"> Lock failed because the NETCONF lock is held by another
session.</error-message>
<error-info>
<session-id>1</session-id>
</error-info>
</rpc-error>
</rpc-reply>

The output shows that the <lock> operation failed. The client with session ID 1 is holding the lock.

Modifying the configuration
About the <edit-config> operation
The <edit-config> operation includes the following operations: merge, create, replace, remove,
delete, default-operation, error-option, test-option, and incremental. For more information about the
operations, see "Supported NETCONF operations."

Procedure
# Copy the following text to perform the <edit-config> operation:
<?xml version="1.0"?>
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<edit-config>
<target><running></running></target>
<error-option>
error-option
</error-option>
<config>
<top xmlns="http://www.hp.com/netconf/config:1.0">
Specify the module name, submodule name, table name, and column name
</top>
</config>
</edit-config>
</rpc>

The <error-option> element indicates the action to be taken in response to an error that occurs
during the operation. It has the following values:

Value: stop-on-error
Description: Stops the <edit-config> operation.

Value: continue-on-error
Description: Continues the <edit-config> operation.

Value: rollback-on-error
Description: Rolls back the configuration to the configuration before the <edit-config> operation was performed.
By default, an <edit-config> operation cannot be performed while the device is rolling back the configuration. If the rollback time exceeds the maximum time that the client can wait, the client determines that the <edit-config> operation has failed and performs the operation again. Because the previous rollback is not completed, the operation triggers another rollback. If this process repeats itself, CPU and memory resources will be exhausted and the device will reboot.
To allow an <edit-config> operation to be performed during a configuration rollback, perform an <action> operation to change the value of the DisableEditConfigWhenRollback attribute to false.

If the <edit-config> operation succeeds, the device returns a response in the following format:
<?xml version="1.0"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>

You can also perform the <get> operation to verify that the current element value is the same as the
value specified through the <edit-config> operation.

Example: Modifying the configuration


Network configuration
Change the log buffer size for the Syslog module to 512.
Procedure
# Enter XML view.
<Sysname> xml

# Notify the device of the NETCONF capabilities supported on the client.


<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<capabilities>
<capability>urn:ietf:params:netconf:base:1.0</capability>
</capabilities>
</hello>

# Change the log buffer size for the Syslog module to 512.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
xmlns:web="urn:ietf:params:xml:ns:netconf:base:1.0">
<edit-config>
<target>
<running/>
</target>
<config>
<top xmlns="http://www.hp.com/netconf/config:1.0" web:operation="merge">
<Syslog>
<LogBuffer>
<BufferSize>512</BufferSize>
</LogBuffer>
</Syslog>
</top>

</config>
</edit-config>
</rpc>

Verifying the configuration


If the client receives the following text, the <edit-config> operation is successful:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>
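The same <edit-config> payload can also be sent from a script. The sketch below uses the Python ncclient library to apply the log buffer change shown above; the <config> content is taken from the example, while the connection parameters are placeholders and NETCONF over SSH is assumed to be enabled:
# Minimal sketch: change the Syslog log buffer size to 512 with <edit-config>.
# Connection parameters are placeholders; the payload mirrors the XML example.
from ncclient import manager

CONFIG = """
<config xmlns:web="urn:ietf:params:xml:ns:netconf:base:1.0">
  <top xmlns="http://www.hp.com/netconf/config:1.0" web:operation="merge">
    <Syslog>
      <LogBuffer>
        <BufferSize>512</BufferSize>
      </LogBuffer>
    </Syslog>
  </top>
</config>
"""

with manager.connect(host="192.168.1.1", port=830, username="admin",
                     password="admin", hostkey_verify=False) as m:
    # error_option maps to the <error-option> element described earlier;
    # omit it to use the default stop-on-error behavior.
    reply = m.edit_config(target="running", config=CONFIG,
                          error_option="rollback-on-error")
    print(reply.ok)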

Saving the running configuration


About the <save> operation
A <save> operation saves the running configuration to a configuration file and specifies the file as the
main next-startup configuration file.

Restrictions and guidelines


The <save> operation is resource intensive. Do not perform this operation when system resources
are heavily occupied.

Procedure
# Copy the following text to the client:
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<save OverWrite="false" Binary-only="false">
<file>Configuration file name</file>
</save>
</rpc>

Item          Description
file          Specifies a .cfg configuration file by its name. The name must start
              with the storage medium name.
              If you specify the file column, a file name is required.
              If the Binary-only attribute is false, the device saves the running
              configuration to both the text and binary configuration files.
              • If the specified .cfg file does not exist, the device creates the
                binary and text configuration files to save the running configuration.
              • If you do not specify the file column, the device saves the running
                configuration to the text and binary next-startup configuration files.
OverWrite     Determines whether to overwrite the specified file if the file already
              exists. The following values are available:
              • true—Overwrite the file.
              • false—Do not overwrite the file. The running configuration cannot be
                saved, and the system displays an error message.
              The default value is true.
Binary-only   Determines whether to save the running configuration only to the binary
              configuration file. The following values are available:
              • true—Save the running configuration only to the binary configuration
                file.
                - If file specifies a nonexistent file, the <save> operation fails.
                - If you do not specify the file column, the device identifies whether
                  the main next-startup configuration file is specified. If yes, the
                  device saves the running configuration to the corresponding binary
                  file. If not, the <save> operation fails.
              • false—Save the running configuration to both the text and binary
                configuration files. For more information, see the description for the
                file column in this table.
              Saving the running configuration to both the text and binary
              configuration files requires more time.
              The default value is false.

If the <save> operation succeeds, the device returns a response in the following format:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>

Example: Saving the running configuration


Network configuration
Save the running configuration to the config.cfg file.
Procedure
# Enter XML view.
<Sysname> xml

# Notify the device of the NETCONF capabilities supported on the client.


<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<capabilities>
<capability>
urn:ietf:params:netconf:base:1.0
</capability>
</capabilities>
</hello>

# Save the running configuration of the device to the config.cfg file.


<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<save>
<file>config.cfg</file>
</save>
</rpc>

Verifying the configuration


If the client receives the following response, the <save> operation is successful:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

<ok/>
</rpc-reply>

Loading the configuration


About the <load> operation
The <load> operation merges the configuration from a configuration file into the running
configuration as follows:
• Loads settings that do not exist in the running configuration.
• Overwrites settings that already exist in the running configuration.

Restrictions and guidelines


When you perform a <load> operation, follow these restrictions and guidelines:
• The <load> operation is resource intensive. Do not perform this operation when the system
resources are heavily occupied.
• Some settings in a configuration file might conflict with the existing settings. For the settings in
the file to take effect, delete the existing conflicting settings, and then load the configuration file.

Procedure
# Copy the following text to the client to load a configuration file for the device:
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<load>
<file>Configuration file name</file>
</load>
</rpc>

The configuration file name must start with the storage media name and end with the .cfg extension.
If the <load> operation succeeds, the device returns a response in the following format:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>

Rolling back the configuration


Restrictions and guidelines
The <rollback> operation is resource intensive. Do not perform this operation when the system
resources are heavily occupied.
By default, an <edit-config> operation cannot be performed while the device is rolling back the
configuration. To allow an <edit-config> operation to be performed during a configuration rollback,
perform an <action> operation to change the value of the DisableEditConfigWhenRollback
attribute to false.

Rolling back the configuration based on a configuration file
# Copy the following text to the client to roll back the running configuration to the configuration in a
configuration file:
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<rollback>
<file>Specify the configuration file name</file>
</rollback>
</rpc>

If the <rollback> operation succeeds, the device returns a response in the following format:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>

Rolling back the configuration based on a rollback point


About this task
You can roll back the running configuration based on a rollback point when one of the following
situations occurs:
• A NETCONF client sends a rollback request.
• The NETCONF session idle time is longer than the rollback idle timeout time.
• A NETCONF client is unexpectedly disconnected from the device.
Restrictions and guidelines
Multiple users might simultaneously configure the device. As a best practice, lock the system before
rolling back the configuration to prevent other users from modifying the running configuration.
Procedure
1. Lock the running configuration. For more information, see "Locking or unlocking the running
configuration."
2. Enable configuration rollback based on a rollback point.
# Copy the following text to the client to perform a <save-point>/<begin> operation:
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<save-point>
<begin>
<confirm-timeout>100</confirm-timeout>
</begin>
</save-point>
</rpc>

Item              Description
confirm-timeout   Specifies the rollback idle timeout time in the range of 1 to 65535
                  seconds. The default is 600 seconds. This item is optional.

If the <save-point/begin> operation succeeds, the device returns a response in the following
format:
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<data>

<save-point>
<commit>
<commit-id>1</commit-id>
</commit>
</save-point>
</data>
</rpc-reply>
3. Modify the running configuration. For more information, see "Modifying the configuration."
4. Mark the rollback point.
The system supports a maximum of 50 rollback points. If the limit is reached, specify the force
attribute for the <save-point>/<commit> operation to overwrite the earliest rollback point.
# Copy the following text to the client to perform a <save-point>/<commit> operation:
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<save-point>
<commit>
<label>SUPPORT VLAN</label>
<comment>vlan 1 to 100 and interfaces.</comment>
</commit>
</save-point>
</rpc>
The <label> and <comment> elements are optional.
If the <save-point>/<commit> operation succeeds, the device returns a response in the
following format:
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<data>
<save-point>
<commit>
<commit-id>2</commit-id>
</commit>
</save-point>
</data>
</rpc-reply>
5. Retrieve the rollback point configuration records.
The following text shows the message format for a <save-point/get-commits> request:
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<save-point>
<get-commits>
<commit-id/>
<commit-index/>
<commit-label/>
</get-commits>
</save-point>
</rpc>
Specify the <commit-id/>, <commit-index/>, or <commit-label/> element to retrieve the
specified rollback point configuration records. If no element is specified, the operation retrieves
records for all rollback point settings.
# Copy the following text to the client to perform a <save-point>/<get-commits> operation:
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<save-point>
<get-commits>

<commit-label>SUPPORT VLAN</commit-label>
</get-commits>
</save-point>
</rpc>
If the <save-point/get-commits> operation succeeds, the device returns a response in the
following format:
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<data>
<save-point>
<commit-information>
<CommitID>2</CommitID>
<TimeStamp>Sun Jan 1 11:30:28 2019</TimeStamp>
<UserName>test</UserName>
<Label>SUPPORT VLAN</Label>
</commit-information>
</save-point>
</data>
</rpc-reply>
6. Retrieve the configuration data corresponding to a rollback point.
The following text shows the message format for a <save-point>/<get-commit-information>
request:
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<save-point>
<get-commit-information>
<commit-information>
<commit-id/>
<commit-index/>
<commit-label/>
</commit-information>
<compare-information>
<commit-id/>
<commit-index/>
<commit-label/>
</compare-information>
</get-commit-information>
</save-point>
</rpc>
Specify one of the following elements: <commit-id/>, <commit-index/>, and <commit-label/>.
The <compare-information> element is optional.

Item                     Description
commit-id                Uniquely identifies a rollback point.
commit-index             Specifies 50 most recently configured rollback points. The value
                         of 0 indicates the most recently configured one and 49 indicates
                         the earliest configured one.
commit-label             Specifies a unique label for a rollback point.
get-commit-information   Retrieves the configuration data corresponding to the most
                         recently configured rollback point.

# Copy the following text to the client to perform a <save-point>/<get-commit-information>
operation:
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<save-point>
<get-commit-information>
<commit-information>
<commit-label>SUPPORT VLAN</commit-label>
</commit-information>
</get-commit-information>
</save-point>
</rpc>
If the <save-point/get-commit-information> operation succeeds, the device returns a response
in the following format:
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<data>
<save-point>
<commit-information>
<content>

interface vlan 1

</content>
</commit-information>
</save-point>
</data>
</rpc-reply>
7. Roll back the configuration based on a rollback point.
The configuration can also be automatically rolled back based on the most recently configured
rollback point when the NETCONF session idle timer expires.
# Copy the following text to the client to perform a <save-point>/<rollback> operation:
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<save-point>
<rollback>
<commit-id/>
<commit-index/>
<commit-label/>
</rollback>
</save-point>
</rpc>
Specify one of the following elements: <commit-id/>, <commit-index/>, and <commit-label/>. If
no element is specified, the operation rolls back configuration based on the most recently
configured rollback point.

Item           Description
commit-id      Uniquely identifies a rollback point.
commit-index   Specifies 50 most recently configured rollback points. The value of 0
               indicates the most recently configured one and 49 indicates the
               earliest configured one.
commit-label   Specifies the unique label of a rollback point.

If the <save-point/rollback> operation succeeds, the device returns a response in the following
format:
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok></ok>
</rpc-reply>
8. End the rollback configuration.
# Copy the following text to the client to perform a <save-point>/<end> operation:
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<save-point>
<end/>
</save-point>
</rpc>
If the <save-point/end> operation succeeds, the device returns a response in the following
format:
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>
9. Unlock the configuration. For more information, see "Locking or unlocking the running
configuration." For an end-to-end scripted sketch of this procedure, see the example after
this list.
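The following is a minimal end-to-end sketch of the rollback-point procedure using the Python ncclient library. The <save-point> RPCs are Comware-specific and are sent with ncclient's generic dispatch call; the connection parameters and label are placeholders, and the retrieval steps (5 and 6) are omitted for brevity:
# Minimal sketch of the rollback-point workflow (lock, begin, commit,
# rollback, end, unlock). Connection parameters are placeholders.
from ncclient import manager
from ncclient.xml_ import to_ele

BASE = "urn:ietf:params:xml:ns:netconf:base:1.0"

def save_point(body):
    # Wraps a save-point sub-operation in the base namespace.
    return to_ele('<save-point xmlns="%s">%s</save-point>' % (BASE, body))

with manager.connect(host="192.168.1.1", port=830, username="admin",
                     password="admin", hostkey_verify=False) as m:
    m.lock(target="running")                                            # step 1
    m.dispatch(save_point(
        "<begin><confirm-timeout>600</confirm-timeout></begin>"))       # step 2
    # ... step 3: modify the configuration with m.edit_config() ...
    m.dispatch(save_point(
        "<commit><label>SUPPORT VLAN</label></commit>"))                # step 4
    m.dispatch(save_point(
        "<rollback><commit-label>SUPPORT VLAN</commit-label></rollback>"))  # step 7
    m.dispatch(save_point("<end/>"))                                    # step 8
    m.unlock(target="running")                                          # step 9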

Performing CLI operations through NETCONF


About CLI operations through NETCONF
You can enclose command lines in XML messages to configure the device.

Restrictions and guidelines


Performing CLI operations through NETCONF is resource intensive. As a best practice, do not
perform the following tasks:
• Enclose multiple command lines in one XML message.
• Use NETCONF to perform a CLI operation when other users are performing NETCONF CLI
operations.

Procedure
# Copy the following text to the client to execute the commands:
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<CLI>
<Execution>
Commands
</Execution>
</CLI>
</rpc>

The <Execution> element can contain multiple commands, with one command on one line.
If the CLI operation succeeds, the device returns a response in the following format:
<?xml version="1.0" encoding="UTF-8"?>

<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<CLI>
<Execution>
<![CDATA[Responses to the commands]]>
</Execution>
</CLI>
</rpc-reply>

Example: Performing CLI operations


Network configuration
Send the display vlan command to the device.
Procedure
# Enter XML view.
<Sysname> xml

# Notify the device of the NETCONF capabilities supported on the client.


<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<capabilities>
<capability>
urn:ietf:params:netconf:base:1.0
</capability>
</capabilities>
</hello>

# Copy the following text to the client to execute the display vlan command:
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<CLI>
<Execution>
display vlan
</Execution>
</CLI>
</rpc>

Verifying the configuration


If the client receives the following text, the operation is successful:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<CLI>
<Execution><![CDATA[
<Sysname>display vlan
Total VLANs: 1
The VLANs include:
1(default)
]]>
</Execution>
</CLI>
</rpc-reply>
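A script can wrap the same <CLI> request. The sketch below uses the Python ncclient library's generic dispatch call because the <CLI> element is Comware-specific; the connection parameters are placeholders:
# Minimal sketch: run "display vlan" through the NETCONF <CLI> extension.
# Connection parameters are placeholders.
from ncclient import manager
from ncclient.xml_ import to_ele

CLI_RPC = """
<CLI xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <Execution>display vlan</Execution>
</CLI>
"""

with manager.connect(host="192.168.1.1", port=830, username="admin",
                     password="admin", hostkey_verify=False) as m:
    reply = m.dispatch(to_ele(CLI_RPC))
    # The command output is returned as CDATA inside the <Execution> element.
    print(reply.xml)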

Subscribing to events
About event subscription
When an event takes place on the device, the device sends information about the event to
NETCONF clients that have subscribed to the event.

Restrictions and guidelines


Event subscription is not supported for NETCONF over SOAP sessions.
A subscription takes effect only on the current session. It is canceled when the session is terminated.
If you do not specify the event stream to be subscribed to, the device sends syslog event
notifications to the NETCONF client.

Subscribing to syslog events


# Copy the following message to the client to complete the subscription:
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<create-subscription xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
<stream>NETCONF</stream>
<filter>
<event xmlns="http://www.hp.com/netconf/event:1.0">
<Code>code</Code>
<Group>group</Group>
<Severity>severity</Severity>
</event>
</filter>
<startTime>start-time</startTime>
<stopTime>stop-time</stopTime>
</create-subscription>
</rpc>

Item         Description
stream       Specifies the event stream. The name for the syslog event stream is
             NETCONF.
event        Specifies the event. For information about the events to which you can
             subscribe, see the system log message references for the device.
code         Specifies the mnemonic symbol of the log message.
group        Specifies the module name of the log message.
severity     Specifies the severity level of the log message.
start-time   Specifies the start time of the subscription.
stop-time    Specifies the end time of the subscription.

If the subscription succeeds, the device returns a response in the following format:

<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="100" xmlns:netconf="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>

If the subscription fails, the device returns an error message in the following format:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<rpc-error>
<error-type>error-type</error-type>
<error-tag>error-tag</error-tag>
<error-severity>error-severity</error-severity>
<error-message xml:lang="en">error-message</error-message>
</rpc-error>
</rpc-reply>

For more information about error messages, see RFC 4741.

Subscribing to events monitored by NETCONF


After you subscribe to events as described in this section, NETCONF regularly polls the subscribed
events and sends the events that match the subscription condition to the NETCONF client.
# Copy the following message to the client to complete the subscription:
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<create-subscription xmlns='urn:ietf:params:xml:ns:netconf:notification:1.0'>
<stream>NETCONF_MONITOR_EXTENSION</stream>
<filter>
<NetconfMonitor xmlns='http://www.hp.com/netconf/monitor:1.0'>
<XPath>XPath</XPath>
<Interval>interval</Interval>
<ColumnConditions>
<ColumnCondition>
<ColumnName>ColumnName</ColumnName>
<ColumnValue>ColumnValue</ColumnValue>
<ColumnCondition>ColumnCondition</ColumnCondition>
</ColumnCondition>
</ColumnConditions>
<MustIncludeResultColumns>
<ColumnName>columnName</ColumnName>
</MustIncludeResultColumns>
</NetconfMonitor>
</filter>
<startTime>start-time</startTime>
<stopTime>stop-time</stopTime>
</create-subscription>
</rpc>

Item              Description
stream            Specifies the event stream. The name for the event stream is
                  NETCONF_MONITOR_EXTENSION.
NetconfMonitor    Specifies the filtering information for the event.
XPath             Specifies the path of the event in the format of
                  ModuleName[/SubmoduleName]/TableName.
interval          Specifies the interval for NETCONF to obtain events that match the
                  subscription condition. The value range is 1 to 4294967 seconds. The
                  default value is 300 seconds.
ColumnName        Specifies the name of a column in the format of
                  [GroupName.]ColumnName.
ColumnValue       Specifies the baseline value.
ColumnCondition   Specifies the operator: more, less, notLess, notMore, equal,
                  notEqual, include, exclude, startWith, or endWith. Choose an
                  operator according to the type of the baseline value.
start-time        Specifies the start time of the subscription.
stop-time         Specifies the end time of the subscription.

If the subscription succeeds, the device returns a response in the following format:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="100" xmlns:netconf="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>

Subscribing to events reported by modules


After you subscribe to events as described in this section, the specified modules report subscribed
events to NETCONF. NETCONF sends the events to the NETCONF client.
# Copy the following message to the client to complete the subscription:
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
xmlns:xs="http://www.hp.com/netconf/base:1.0">
<create-subscription xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
<stream>XXX_STREAM</stream>
<filter type="subtree">
<event xmlns="http://www.hp.com/netconf/event:1.0/xxx-features-list-name:1.0">
<ColumnName xs:condition="Condition">value</ColumnName>
</event>
</filter>

<startTime>start-time</startTime>
<stopTime>stop-time</stopTime>
</create-subscription>
</rpc>

Item         Description
stream       Specifies the event stream. Supported event streams vary by device
             model.
event        Specifies the event name. An event stream includes multiple events.
             The events use the same namespaces as the event stream.
ColumnName   Specifies the name of a column.
Condition    Specifies the operator: more, less, notLess, notMore, equal, notEqual,
             include, exclude, startWith, or endWith. Choose an operator according
             to the type of the baseline value.
value        Specifies the baseline value for the column.
start-time   Specifies the start time of the subscription.
stop-time    Specifies the end time of the subscription.

If the subscription succeeds, the device returns a response in the following format:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="100" xmlns:netconf="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>

Canceling an event subscription


# Copy the following message to the client to cancel a subscription:
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<cancel-subscription xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
<stream>XXX_STREAM</stream>
</cancel-subscription>
</rpc>

Item Description
stream Specifies the event stream.

If the cancelation succeeds, the device returns a response in the following format:
<?xml version="1.0" encoding="UTF-8"?>

<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="100">
<ok/>
</rpc-reply>

If the subscription to be canceled does not exist, the device returns an error message in the following
format:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<rpc-error>
<error-type>error-type</error-type>
<error-tag>error-tag</error-tag>
<error-severity>error-severity</error-severity>
<error-message xml:lang="en">The subscription stream to be canceled doesn't exist: Stream
name=XXX_STREAM.</error-message>
</rpc-error>
</rpc-reply>

Example: Subscribing to syslog events


Network configuration
Configure a client to subscribe to syslog events with no time limitation. After the subscription, all
events on the device are sent to the client before the session between the device and client is
terminated.
Procedure
# Enter XML view.
<Sysname> xml

# Notify the device of the NETCONF capabilities supported on the client.


<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<capabilities>
<capability>
urn:ietf:params:netconf:base:1.0
</capability>
</capabilities>
</hello>

# Subscribe to syslog events with no time limitation.


<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<create-subscription xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
<stream>NETCONF</stream>
</create-subscription>
</rpc>

Verifying the configuration


# If the client receives the following response, the subscription is successful:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="100">
<ok/>
</rpc-reply>

# When another client (192.168.100.130) logs in to the device, the device sends a notification to the
client that has subscribed to all events:

<?xml version="1.0" encoding="UTF-8"?>
<notification xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
<eventTime>2019-01-04T12:30:52</eventTime>
<event xmlns="http://www.hp.com/netconf/event:1.0">
<Group>SHELL</Group>
<Code>SHELL_LOGIN</Code>
<Slot>1</Slot>
<Severity>Notification</Severity>
<context>VTY logged in from 192.168.100.130.</context>
</event>
</notification>
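For scripted monitoring, a NETCONF client can create the same subscription and then read notifications as they arrive. The following minimal sketch assumes a Python ncclient version that provides create_subscription() and take_notification(); the connection parameters are placeholders:
# Minimal sketch: subscribe to the NETCONF (syslog) event stream and print
# notifications as they arrive. Connection parameters are placeholders.
from ncclient import manager

with manager.connect(host="192.168.1.1", port=830, username="admin",
                     password="admin", hostkey_verify=False) as m:
    m.create_subscription(stream_name="NETCONF")
    while True:
        notif = m.take_notification(timeout=60)   # None when the timeout expires
        if notif is None:
            break
        print(notif.notification_xml)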

Terminating NETCONF sessions


About NETCONF session termination
NETCONF allows one client to terminate the NETCONF sessions of other clients. A client whose
session is terminated returns to user view.

Procedure
# Copy the following message to the client to terminate a NETCONF session:
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<kill-session>
<session-id>
Specified session-ID
</session-id>
</kill-session>
</rpc>

If the <kill-session> operation succeeds, the device returns a response in the following format:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>

Example: Terminating another NETCONF session


Network configuration
The user whose session ID is 1 terminates the session with session ID 2.
Procedure
# Enter XML view.
<Sysname> xml

# Notify the device of the NETCONF capabilities supported on the client.


<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<capabilities>
<capability>

urn:ietf:params:netconf:base:1.0
</capability>
</capabilities>
</hello>

# Terminate the session with session ID 2.


<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<kill-session>
<session-id>2</session-id>
</kill-session>
</rpc>

Verifying the configuration


If the client receives the following text, the NETCONF session with session ID 2 has been terminated,
and the client with session ID 2 has returned from XML view to user view:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>

Returning to the CLI


Restrictions and guidelines
Before returning from XML view to the CLI, you must first complete capability exchange between the
device and the client.
Procedure
# Copy the following text to the client to return from XML view to the CLI:
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<close-session/>
</rpc>

When the device receives the close-session request, it sends the following response and returns to
CLI's user view:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>

Supported NETCONF operations
This chapter describes NETCONF operations available with Comware 7.

action
Usage guidelines
This operation sends a non-configuration operation instruction to the device, for example, a reset
instruction.
XML example
# Clear statistics information for all interfaces.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<action>
<top xmlns="http://www.hp.com/netconf/action:1.0">
<Ifmgr>
<ClearAllIfStatistics>
<Clear>
</Clear>
</ClearAllIfStatistics>
</Ifmgr>
</top>
</action>
</rpc>

CLI
Usage guidelines
This operation executes CLI commands.
A request message encloses commands in the <CLI> element. A response message encloses the
command output in the <CLI> element.
You can use the following elements to execute commands:
• Execution—Executes commands in user view.
• Configuration—Executes commands in system view. To execute commands in a lower-level
view of the system view, use the <Configuration> element to enter the view first.
To use this element, include the exec-use-channel attribute and specify a value for the
attribute:
 false—Executes commands without using a channel.
 true—Executes commands by using a temporary channel. The channel is automatically
closed after the execution.
 persist—Executes commands by using the persistent channel for the session.
To use the persistent channel, first perform an <Open-channel> operation to open the
persistent channel. If you do not do so, the system will automatically open the persistent
channel.
After using the persistent channel, perform a <Close-channel> operation to close the
channel and return to system view. If you do not perform a <Close-channel> operation, the
system stays in the view and will execute subsequent commands in the view.

You can also specify the error-when-rollback attribute in the <Configuration> element to
indicate whether CLI operations are allowed during a configuration error-triggered configuration
rollback. This attribute takes effect only if the value of the <error-option> element in <edit-config>
operations is set to rollback-on-error. It has the following values:
 true—Rejects CLI operation requests and returns error messages.
 false (the default)—Allows CLI operations.
For CLI operations to be correctly performed, set the value of the error-when-rollback attribute
to true.
A NETCONF session supports only one persistent channel but supports multiple temporary
channels.
This operation does not support executing interactive CLI commands.
You cannot execute the quit command by using a channel to exit user view.
XML example
# Execute the vlan 3 command in system view without using a channel.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<CLI>
<Configuration exec-use-channel="false">vlan 3</Configuration>
</CLI>
</rpc>

close-session
Usage guidelines
This operation terminates the current NETCONF session, unlocks the configuration, and releases
the resources (for example, memory) used by the session. After this operation, you exit the XML
view.
XML example
# Terminate the current NETCONF session.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<close-session/>
</rpc>

edit-config: create
Usage guidelines
This operation creates target configuration items.
To use the create attribute in an <edit-config> operation, you must specify the target configuration
item.
• If the table supports creating a target configuration item and the item does not exist, the
operation creates the item and configures the item.
• If the specified item already exists, a data-exist error message is returned.
XML example
# Set the buffer size to 120.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
xmlns:xc="urn:ietf:params:xml:ns:netconf:base:1.0">
<edit-config>

<target>
<running/>
</target>
<config>
<top xmlns="http://www.hp.com/netconf/config:1.0">
<Syslog xmlns="http://www.hp.com/netconf/config:1.0" xc:operation="create">
<LogBuffer>
<BufferSize>120</BufferSize>
</LogBuffer>
</Syslog>
</top>
</config>
</edit-config>
</rpc>

edit-config: delete
Usage guidelines
This operation deletes the specified configuration.
• If the specified target has only the table index, the operation removes all configuration of the
specified target, and the target itself.
• If the specified target has the table index and configuration data, the operation removes the
specified configuration data of this target.
• If the specified target does not exist, an error message is returned, showing that the target does
not exist.
XML example
The syntax is the same as the edit-config message with the create attribute. Change the operation
attribute from create to delete.

edit-config: merge
Usage guidelines
This operation commits target configuration items to the running configuration.
To use the merge attribute in an <edit-config> operation, you must specify the target configuration
item (on a specific level):
• If the specified item exists, the operation directly updates the setting for the item.
• If the specified item does not exist, the operation creates the item and configures the item.
• If the specified item does not exist and it cannot be created, an error message is returned.
XML example
The XML data format is the same as the edit-config message with the create attribute. Change the
operation attribute from create to merge.

edit-config: remove
Usage guidelines
This operation removes the specified configuration.

• If the specified target has only the table index, the operation removes all configuration of the
specified target, and the target itself.
• If the specified target has the table index and configuration data, the operation removes the
specified configuration data of this target.
• If the specified target does not exist, or the XML message does not specify any targets, a
success message is returned.
XML example
The syntax is the same as the edit-config message with the create attribute. Change the operation
attribute from create to remove.

edit-config: replace
Usage guidelines
This operation replaces the specified configuration.
• If the specified target exists, the operation replaces the configuration of the target with the
configuration carried in the message.
• If the specified target does not exist but is allowed to be created, the operation creates the
target and then applies the configuration.
• If the specified target does not exist and is not allowed to be created, the operation is not
conducted and an invalid-value error message is returned.
XML example
The syntax is the same as the edit-config message with the create attribute. Change the operation
attribute from create to replace.

edit-config: test-option
Usage guidelines
This operation determines whether to commit a configuration item in an <edit-config> operation.
The <test-option> element has one of the following values:
• test-then-set—Performs a syntax check, and commits an item if the item passes the check. If
the item fails the check, the item is not committed. This is the default test-option value.
• set—Commits the item without performing a syntax check.
• test-only—Performs only a syntax check. If the item passes the check, a success message is
returned. Otherwise, an error message is returned.
XML example
# Test the configuration for an interface.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<edit-config>
<target>
<running/>
</target>
<test-option>test-only</test-option>
<config xmlns:xc="urn:ietf:params:xml:ns:netconf:base:1.0">
<top xmlns="http://www.hp.com/netconf/config:1.0">
<Ifmgr xc:operation="merge">
<Interfaces>
<Interface>

<IfIndex>262</IfIndex>
<Description>222</Description>
<ConfigSpeed>2</ConfigSpeed>
<ConfigDuplex>1</ConfigDuplex>
</Interface>
</Interfaces>
</Ifmgr>
</top>
</config>
</edit-config>
</rpc>

edit-config: default-operation
Usage guidelines
This operation modifies the running configuration of the device by using the default operation
method.
NETCONF uses one of the following operation attributes to modify configuration: merge, create,
delete, and replace. If you do not specify an operation attribute for an edit-config message,
NETCONF uses the default operation method. The value specified for the <default-operation>
element takes effect only once. If you do not specify an operation attribute or the default operation
method for an <edit-config> message, merge always applies.
The <default-operation> element has the following values:
• merge—Default value for the <default-operation> element.
• replace—Value used when the operation attribute is not specified and the default operation
method is specified as replace.
• none—Value used when the operation attribute is not specified and the default operation
method is specified as none. If this value is specified, the <edit-config> operation is used only
for schema verification rather than issuing a configuration. If the schema verification is passed,
a successful message is returned. Otherwise, an error message is returned.
XML example
# Issue an empty operation for schema verification purposes.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<edit-config>
<target>
<running/>
</target>
<default-operation>none</default-operation>
<config xmlns:xc="urn:ietf:params:xml:ns:netconf:base:1.0">
<top xmlns="http://www.hp.com/netconf/config:1.0">
<Ifmgr>
<Interfaces>
<Interface>
<IfIndex>262</IfIndex>
<Description>222222</Description>
</Interface>
</Interfaces>
</Ifmgr>

</top>
</config>
</edit-config>
</rpc>

edit-config: error-option
Usage guidelines
This operation determines the action to take in case of a configuration error.
The <error-option> element has the following values:
• stop-on-error—Stops the operation and returns an error message. This is the default
error-option value.
• continue-on-error—Continues the operation and returns an error message.
• rollback-on-error—Rolls back the configuration.
XML example
# Issue the configuration for two interfaces, setting the <error-option> element to continue-on-error.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<edit-config>
<target>
<running/>
</target> <error-option>continue-on-error</error-option>
<config xmlns:xc="urn:ietf:params:xml:ns:netconf:base:1.0">
<top xmlns="http://www.hp.com/netconf/config:1.0">
<Ifmgr xc:operation="merge">
<Interfaces>
<Interface>
<IfIndex>262</IfIndex>
<Description>222</Description>
<ConfigSpeed>1024</ConfigSpeed>
<ConfigDuplex>1</ConfigDuplex>
</Interface>
<Interface>
<IfIndex>263</IfIndex>
<Description>333</Description>
<ConfigSpeed>1024</ConfigSpeed>
<ConfigDuplex>1</ConfigDuplex>
</Interface>
</Interfaces>
</Ifmgr>
</top>
</config>
</edit-config>
</rpc>

edit-config: incremental
Usage guidelines
This operation adds configuration data to a column without affecting the original data.
The incremental attribute applies to a list column such as the vlan permitlist column.
You can use the incremental attribute for <edit-config> operations with an attribute other than
replace.
Support for the incremental attribute varies by module. For more information, see NETCONF XML
API references.
XML example
# Add VLANs 1 through 10 to an untagged VLAN list that has untagged VLANs 12 through 15.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
xmlns:hp="http://www.hp.com/netconf/base:1.0">
<edit-config>
<target>
<running/>
</target>
<config xmlns:xc="urn:ietf:params:xml:ns:netconf:base:1.0">
<top xmlns="http://www.hp.com/netconf/config:1.0">
<VLAN xc:operation="merge">
<HybridInterfaces>
<Interface>
<IfIndex>262</IfIndex>
<UntaggedVlanList hp:incremental="true">1-10</UntaggedVlanList>
</Interface>
</HybridInterfaces>
</VLAN>
</top>
</config>
</edit-config>
</rpc>

get
Usage guidelines
This operation retrieves device configuration and state information.
XML example
# Retrieve device configuration and state information for the Syslog module.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
xmlns:xc="http://www.hp.com/netconf/base:1.0">
<get>
<filter type="subtree">
<top xmlns="http://www.hp.com/netconf/data:1.0">
<Syslog>
</Syslog>
</top>

</filter>
</get>
</rpc>

get-bulk
Usage guidelines
This operation retrieves a number of data entries (including device configuration and state
information) starting from the data entry next to the one with the specified index.
XML example
# Retrieve device configuration and state information for all interfaces.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get-bulk>
<filter type="subtree">
<top xmlns="http://www.hp.com/netconf/data:1.0">
<Ifmgr>
<Interfaces xc:count="5" xmlns:xc="http://www.hp.com/netconf/base:1.0">
<Interface/>
</Interfaces>
</Ifmgr>
</top>
</filter>
</get-bulk>
</rpc>

get-bulk-config
Usage guidelines
This operation retrieves a number of non-default configuration data entries starting from the data
entry next to the one with the specified index.
XML example
# Retrieve non-default configuration for all interfaces.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get-bulk-config>
<source>
<running/>
</source>
<filter type="subtree">
<top xmlns="http://www.hp.com/netconf/config:1.0">
<Ifmgr>
</Ifmgr>
</top>
</filter>
</get-bulk-config>
</rpc>

get-config
Usage guidelines
This operation retrieves non-default configuration data. If no non-default configuration data exists,
the device returns a response with empty data.
XML example
# Retrieve non-default configuration data for the interface table.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
xmlns:xc="http://www.hp.com/netconf/base:1.0">
<get-config>
<source>
<running/>
</source>
<filter type="subtree">
<top xmlns="http://www.hp.com/netconf/config:1.0">
<Ifmgr>
<Interfaces>
<Interface/>
</Interfaces>
</Ifmgr>
</top>
</filter>
</get-config>
</rpc>

get-sessions
Usage guidelines
This operation retrieves information about all NETCONF sessions in the system. You cannot specify
a session ID to retrieve information about a specific NETCONF session.
XML example
# Retrieve information about all NETCONF sessions in the system.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get-sessions/>
</rpc>

kill-session
Usage guidelines
This operation terminates the NETCONF session for another user. This operation cannot terminate
the NETCONF session for the current user.
XML example
# Terminate the NETCONF session with session ID 1.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<kill-session>
<session-id>1</session-id>

</kill-session>
</rpc>

load
Usage guidelines
This operation loads the configuration. After the device finishes a <load> operation, the configuration
in the specified file is merged into the running configuration of the device.
XML example
# Merge the configuration in file a1.cfg to the running configuration of the device.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<load>
<file>a1.cfg</file>
</load>
</rpc>

lock
Usage guidelines
This operation locks the configuration. After you lock the configuration, other users cannot perform
<edit-config> operations. Other NETCONF operations are allowed.
After a user locks the configuration, other users cannot use NETCONF or any other configuration
methods such as CLI and SNMP to configure the device.
XML example
# Lock the configuration.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<lock>
<target>
<running/>
</target>
</lock>
</rpc>

rollback
Usage guidelines
This operation rolls back the configuration. To do so, you must specify the configuration file in the
<file> element. After the device finishes the <rollback> operation, the current device configuration is
totally replaced with the configuration in the specified configuration file.
XML example
# Roll back the running configuration to the configuration in file 1A.cfg.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<rollback>
<file>1A.cfg</file>
</rollback>
</rpc>

save
Usage guidelines
This operation saves the running configuration. You can use the <file> element to specify a file for
saving the configuration. If the text does not include the file column, the running configuration is
automatically saved to the main next-startup configuration file.
The OverWrite attribute determines whether the running configuration overwrites the original
configuration file when the specified file already exists.
The Binary-only attribute determines whether to save the running configuration only to the binary
configuration file.
XML example
# Save the running configuration to file test.cfg.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<save OverWrite="false" Binary-only="true">
<file>test.cfg</file>
</save>
</rpc>

unlock
Usage guidelines
This operation unlocks the configuration, so other users can configure the device.
Terminating a NETCONF session automatically unlocks the configuration.
XML example
# Unlock the configuration.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<unlock>
<target>
<running/>
</target>
</unlock>
</rpc>

Configuring CWMP
About CWMP
CPE WAN Management Protocol (CWMP), also called "TR-069," is a DSL Forum technical
specification for remote management of network devices.
The protocol was initially designed to provide remote autoconfiguration through a server for large
numbers of dispersed end-user devices in a network. CWMP can be used on different types of
networks, including Ethernet.

CWMP network framework


Figure 69 shows a basic CWMP network framework.
Figure 69 CWMP network framework

(The figure shows a DHCP server, an ACS, and a DNS server connected over an IP network to
multiple CPEs.)

A basic CWMP network includes the following network elements:


• ACS—Autoconfiguration server, the management device in the network.
• CPE—Customer premises equipment, the managed device in the network.
• DNS server—Domain name system server. CWMP defines that the ACS and the CPE use
URLs to identify and access each other. DNS is used to resolve the URLs.
• DHCP server—Assigns ACS attributes along with IP addresses to CPEs when the CPEs are
powered on. DHCP server is optional in CWMP. With a DHCP server, you do not need to
configure ACS attributes manually on each CPE. The CPEs can contact the ACS automatically
when they are powered on for the first time.
The device is operating as a CPE in the CWMP framework.

Basic CWMP functions


You can autoconfigure and upgrade CPEs in bulk from the ACS.
Autoconfiguration
You can create configuration files for different categories of CPEs on the ACS. Based on the device
models and serial numbers of the CPEs, the ACS verifies the categories of the CPEs and issues the
associated configuration to them.

The following are methods available for the ACS to issue configuration to the CPE:
• Transfers the configuration file to the CPE, and specifies the file as the next-startup
configuration file. At a reboot, the CPE starts up with the ACS-specified configuration file.
• Runs the configuration in the CPE's RAM. The configuration takes effect immediately on the
CPE. For the running configuration to survive a reboot, you must save the configuration on the
CPE.
CPE software management
The ACS can manage CPE software upgrade.
When the ACS finds a software version update, the ACS notifies the CPE to download the software
image file from a specific location. The location can be the URL of the ACS or an independent file
server.
If the CPE successfully downloads the software image file and the file is validated, the CPE notifies
the ACS of a successful download. If the CPE fails to download the software image file or the file is
invalidated, the CPE notifies the ACS of an unsuccessful download.
Data backup
The ACS can require the CPE to upload a configuration file or log file to a specific location. The
destination location can be the ACS or a file server.
CPE status and performance monitoring
The ACS can monitor the status and performance of CPEs. Table 12 shows the available CPE status
and performance objects for the ACS to monitor.
Table 12 CPE status and performance objects available for the ACS to monitor

Category             Objects                      Remarks
Device information   Manufacturer                 N/A
                     ManufacturerOUI
                     SerialNumber
                     HardwareVersion
                     SoftwareVersion
Operating status     DeviceStatus                 N/A
and information      UpTime
Configuration file   ConfigFile                   Local configuration file stored on CPE
                                                  for upgrade. The ACS can issue
                                                  configuration to the CPE by transferring
                                                  a configuration file to the CPE or
                                                  running the configuration in CPE's RAM.
CWMP settings        ACS URL                      URL address of the ACS to which the CPE
                                                  initiates a CWMP connection. This object
                                                  is also used for main/backup ACS
                                                  switchover.
                     ACS username                 When the username and password of the
                     ACS password                 ACS are changed, the ACS changes the ACS
                                                  username and password on the CPE to the
                                                  new username and password.
                                                  When a main/backup ACS switchover
                                                  occurs, the main ACS also changes the
                                                  ACS username and password to the backup
                                                  ACS username and password.
                     PeriodicInformEnable         Whether to enable or disable the
                                                  periodic Inform feature.
                     PeriodicInformInterval       Interval for periodic connection from
                                                  the CPE to the ACS for configuration and
                                                  software update.
                     PeriodicInformTime           Scheduled time for connection from the
                                                  CPE to the ACS for configuration and
                                                  software update.
                     ConnectionRequestURL         N/A
                     (CPE URL)
                     ConnectionRequestUsername    CPE username and password for
                     (CPE username)               authentication from the ACS to the CPE.
                     ConnectionRequestPassword
                     (CPE password)

How CWMP works


RPC methods
CWMP uses remote procedure call (RPC) methods for bidirectional communication between CPE
and ACS. The RPC methods are encapsulated in HTTP or HTTPS.
Table 13 shows the primary RPC methods used in CWMP.
Table 13 RPC methods

RPC method   Description
Get          The ACS obtains the values of parameters on the CPE.
Set          The ACS modifies the values of parameters on the CPE.
Inform       The CPE sends an Inform message to the ACS for the following purposes:
             • Initiates a connection to the ACS.
             • Reports configuration changes to the ACS.
             • Periodically updates CPE settings to the ACS.
Download     The ACS requires the CPE to download a configuration or software image
             file from a specific URL for software or configuration update.
Upload       The ACS requires the CPE to upload a file to a specific URL.
Reboot       The ACS reboots the CPE remotely for the CPE to complete an upgrade or
             recover from an error condition.

Autoconnect between ACS and CPE


The CPE automatically initiates a connection to the ACS when one of the following events occurs:
• ACS URL change. The CPE initiates a connection request to the new ACS URL.
• CPE startup. The CPE initiates a connection to the ACS after the startup.
• Timeout of the periodic Inform interval. The CPE re-initiates a connection to the ACS at the
Inform interval.
• Expiration of the scheduled connection initiation time. The CPE initiates a connection to the
ACS at the scheduled time.

CWMP connection establishment
Step 1 through step 5 in Figure 70 show the procedure of establishing a connection between the
CPE and the ACS.
1. After obtaining the basic ACS parameters, the CPE initiates a TCP connection to the ACS.
2. If HTTPS is used, the CPE and the ACS initialize SSL for a secure HTTP connection.
3. The CPE sends an Inform message in HTTPS to initiate a CWMP session.
4. After the CPE passes authentication, the ACS returns an Inform response to establish the
session.
5. After sending all requests, the CPE sends an empty HTTP post message.
Figure 70 CWMP connection establishment
(Message exchange between the CPE and the ACS:)
(1) Open TCP connection
(2) SSL initiation
(3) HTTP post (Inform)
(4) HTTP response (Inform response)
(5) HTTP post (empty)
(6) HTTP response (GetParameterValues request)
(7) HTTP post (GetParameterValues response)
(8) HTTP response (SetParameterValues request)
(9) HTTP post (SetParameterValues response)
(10) HTTP response (empty)
(11) Close connection

Main/backup ACS switchover


Typically, two ACSs are used in a CWMP network for consecutive monitoring on CPEs. When the
main ACS needs to reboot, it points the CPE to the backup ACS. Step 6 through step 11 in Figure 71
show the procedure of a main/backup ACS switchover.
1. Before the main ACS reboots, it queries the ACS URL set on the CPE.
2. The CPE replies with its ACS URL setting.
3. The main ACS sends a Set request to change the ACS URL on the CPE to the backup ACS
URL.
4. After the ACS URL is modified, the CPE sends a response.
5. The main ACS sends an empty HTTP message to notify the CPE that it has no other requests.
6. The CPE closes the connection, and then initiates a new connection to the backup ACS URL.

Figure 71 Main and backup ACS switchover
(Message exchange between the CPE and the ACS:)
(1) Open TCP connection
(2) SSL initiation
(3) HTTP post (Inform)
(4) HTTP response (Inform response)
(5) HTTP post (empty)
(6) HTTP response (GetParameterValues request)
(7) HTTP post (GetParameterValues response)
(8) HTTP response (SetParameterValues request)
(9) HTTP post (SetParameterValues response)
(10) HTTP response (empty)
(11) Close connection

Restrictions and guidelines: CWMP configuration


You can configure ACS and CPE attributes from the CPE's CLI, the DHCP server, or the ACS. For an
attribute, the CLI- and ACS-assigned values have higher priority than the DHCP-assigned value.
The CLI- and ACS-assigned values overwrite each other, whichever is assigned later.
This document only describes configuring ACS and CPE attributes from the CLI and DHCP server.
For more information about configuring and using the ACS, see ACS documentation.

CWMP tasks at a glance


To configure CWMP, perform the following tasks:
1. Enabling CWMP from the CLI
You can also enable CWMP from a DHCP server.
2. Configuring ACS attributes
a. Configuring the preferred ACS attributes
b. (Optional.) Configuring the default ACS attributes from the CLI
3. Configuring CPE attributes
a. Specifying an SSL client policy for HTTPS connection to ACS
This task is required when the ACS uses HTTPS for secure access.
b. (Optional.) Configuring ACS authentication parameters
c. (Optional.) Configuring the provision code
d. (Optional.) Configuring the CWMP connection interface
e. (Optional.) Configuring autoconnect parameters
f. (Optional.) Setting the close-wait timer

Enabling CWMP from the CLI
1. Enter system view.
system-view
2. Enter CWMP view.
cwmp
3. Enable CWMP.
cwmp enable
By default, CWMP is disabled.

Configuring ACS attributes


About ACS attributes
You can configure two sets of ACS attributes for the CPE: preferred and default.
• The preferred ACS attributes are configurable from the CPE's CLI, the DHCP server, and ACS.
• The default ACS attributes are configurable only from the CLI.
If the preferred ACS attributes are not configured, the CPE uses the default ACS attributes for
connection establishment.

Configuring the preferred ACS attributes


Assigning ACS attributes from the DHCP server
The DHCP server in a CWMP network assigns the following information to CPEs:
• IP addresses for the CPEs.
• DNS server address.
• ACS URL and ACS login authentication information.
This section introduces how to use DHCP option 43 to assign the ACS URL and ACS login
authentication username and password. For more information about DHCP and DNS, see Layer
3—IP Services Configuration Guide.
If the DHCP server is an HPE device, you can configure DHCP option 43 by using the option 43
hex 01length URL username password command.
• length—A hexadecimal number that indicates the total length of the URL, username, and
password arguments, including the spaces between these arguments. No space is allowed
between the 01 keyword and the length value.
• URL—ACS URL.
• username—Username for the CPE to authenticate to the ACS.
• password—Password for the CPE to authenticate to the ACS.

NOTE:
The ACS URL, username and password must use the hexadecimal format and be space separated.

The following example configures the ACS address as http://169.254.76.31:7547/acs, username as
1234, and password as 5678:
<Sysname> system-view

[Sysname] dhcp server ip-pool 0
[Sysname-dhcp-pool-0] option 43 hex
0127687474703A2F2F3136392E3235342E37362E33313A373534372F61637320313233342035363738

Table 14 Hexadecimal forms of the ACS attributes

Attribute              Attribute value                  Hexadecimal form
Length                 39 characters                    27
ACS URL                http://169.254.76.31:7547/acs    687474703A2F2F3136392E3235342E37362E33313A373534372F61637320 (the two ending digits, 20, represent the space)
ACS connect username   1234                             3132333420 (the two ending digits, 20, represent the space)
ACS connect password   5678                             35363738
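As a worked check of the values in Table 14: the URL http://169.254.76.31:7547/acs contains 29 characters, and adding one space, the 4-character username 1234, another space, and the 4-character password 5678 gives 29 + 1 + 4 + 1 + 4 = 39 characters, which is 27 in hexadecimal. Each of these characters is then written as its two-digit ASCII hexadecimal value to form the string that follows the leading 01 keyword and the 27 length value in the option 43 hex command.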

Configuring the preferred ACS attributes from the CLI


1. Enter system view.
system-view
2. Enter CWMP view.
cwmp
3. Configure the preferred ACS URL.
cwmp acs url url
By default, no preferred ACS URL has been configured.
4. Configure the username for authentication to the preferred ACS URL.
cwmp acs username username
By default, no username has been configured for authentication to the preferred ACS URL.
5. (Optional.) Configure the password for authentication to the preferred ACS URL.
cwmp acs password { cipher | simple } string
By default, no password has been configured for authentication to the preferred ACS URL.
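A minimal sketch of this procedure follows. The URL, username, and password are examples only and must match your ACS:
<Sysname> system-view
[Sysname] cwmp
[Sysname-cwmp] cwmp acs url http://10.185.10.41:9090
[Sysname-cwmp] cwmp acs username admin
[Sysname-cwmp] cwmp acs password simple 12345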

Configuring the default ACS attributes from the CLI


1. Enter system view.
system-view
2. Enter CWMP view.
cwmp
3. Configure the default ACS URL.
cwmp acs default url url
By default, no default ACS URL has been configured.
4. Configure the username for authentication to the default ACS URL.
cwmp acs default username username
By default, no username has been configured for authentication to the default ACS URL.
5. (Optional.) Configure the password for authentication to the default ACS URL.

cwmp acs default password { cipher | simple } string
By default, no password has been configured for authentication to the default ACS URL.

Configuring CPE attributes


About CPE attributes
You can configure the following CPE attributes only from the CPE's CLI.
• CWMP connection interface.
• Maximum number of connection retries.
• SSL client policy for HTTPS connection to ACS.
For other CPE attribute values, you can assign them to the CPE from the CPE's CLI or the ACS. The
CLI- and ACS-assigned values overwrite each other, whichever is assigned later.

Specifying an SSL client policy for HTTPS connection to ACS


About this task
This task is required when the ACS uses HTTPS for secure access. CWMP uses HTTP or HTTPS
for data transmission. When HTTPS is used, the ACS URL begins with https://. You must specify an
SSL client policy for the CPE to authenticate the ACS for HTTPS connection establishment.
Prerequisites
Before you perform this task, create an SSL client policy. For more information about configuring
SSL client policies, see Security Configuration Guide.
Procedure
1. Enter system view.
system-view
2. Enter CWMP view.
cwmp
3. Specify an SSL client policy.
ssl client-policy policy-name
By default, no SSL client policy is specified.
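A brief sketch, assuming an SSL client policy named acs-ssl has already been created as described in Security Configuration Guide:
<Sysname> system-view
[Sysname] cwmp
[Sysname-cwmp] ssl client-policy acs-ssl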

Configuring ACS authentication parameters


About this task
To protect the CPE against unauthorized access, configure a CPE username and password for ACS
authentication. When an ACS initiates a connection to the CPE, the ACS must provide the correct
username and password.
Procedure
1. Enter system view.
system-view
2. Enter CWMP view.
cwmp
3. Configure the username for authentication to the CPE.

cwmp cpe username username
By default, no username has been configured for authentication to the CPE.
4. (Optional.) Configure the password for authentication to the CPE.
cwmp cpe password { cipher | simple } string
By default, no password has been configured for authentication to the CPE.
The password setting is optional. You can specify only a username for authentication.
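A brief sketch with placeholder credentials (cpe-admin and cpe-pass are examples only):
<Sysname> system-view
[Sysname] cwmp
[Sysname-cwmp] cwmp cpe username cpe-admin
[Sysname-cwmp] cwmp cpe password simple cpe-pass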

Configuring the provision code


About this task
The ACS can use the provision code to identify services assigned to each CPE. For correct
configuration deployment, make sure the same provision code is configured on the CPE and the
ACS. For information about the support of your ACS for provision codes, see the ACS
documentation.
Procedure
1. Enter system view.
system-view
2. Enter CWMP view.
cwmp
3. Configure the provision code.
cwmp cpe provision-code provision-code
The default provision code is PROVISIONINGCODE.

Configuring the CWMP connection interface


About this task
The CWMP connection interface is the interface that the CPE uses to communicate with the ACS. To
establish a CWMP connection, the CPE sends the IP address of this interface in the Inform
messages, and the ACS replies to this IP address.
Typically, the CPE selects the CWMP connection interface automatically. If the CWMP connection
interface is not the interface that connects the CPE to the ACS, the CPE fails to establish a CWMP
connection with the ACS. In this case, you need to manually set the CWMP connection interface.
Procedure
1. Enter system view.
system-view
2. Enter CWMP view.
cwmp
3. Specify the interface that connects to the ACS as the CWMP connection interface.
cwmp cpe connect interface interface-type interface-number
By default, no CWMP connection interface is specified.
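For example, if the CPE reaches the ACS through VLAN-interface 1, a sketch of the configuration is as follows (the interface is an example only):
<Sysname> system-view
[Sysname] cwmp
[Sysname-cwmp] cwmp cpe connect interface vlan-interface 1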

Configuring autoconnect parameters
About this task
You can configure the CPE to connect to the ACS periodically, or at a scheduled time for
configuration or software update.
The CPE retries a connection automatically when one of the following events occurs:
• The CPE fails to connect to the ACS. The CPE considers a connection attempt as having failed
when the close-wait timer expires. This timer starts when the CPE sends an Inform request. If
the CPE fails to receive a response before the timer expires, the CPE resends the Inform
request.
• The connection is disconnected before the session on the connection is completed.
To protect system resources, limit the number of retries that the CPE can make to connect to the
ACS.
Configuring the periodic Inform feature
1. Enter system view.
system-view
2. Enter CWMP view.
cwmp
3. Enable the periodic Inform feature.
cwmp cpe inform interval enable
By default, this function is disabled.
4. Set the Inform interval.
cwmp cpe inform interval interval
By default, the CPE sends an Inform message to start a session every 600 seconds.
Scheduling a connection initiation
1. Enter system view.
system-view
2. Enter CWMP view.
cwmp
3. Schedule a connection initiation.
cwmp cpe inform time time
By default, no connection initiation has been scheduled.
Setting the maximum number of connection retries
1. Enter system view.
system-view
2. Enter CWMP view.
cwmp
3. Set the maximum number of connection retries.
cwmp cpe connect retry retries
By default, the CPE retries a failed connection until the connection is established.
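A brief sketch that enables the periodic Inform feature, sets the Inform interval to 3600 seconds, and limits connection retries to 5 (all values are examples only):
<Sysname> system-view
[Sysname] cwmp
[Sysname-cwmp] cwmp cpe inform interval enable
[Sysname-cwmp] cwmp cpe inform interval 3600
[Sysname-cwmp] cwmp cpe connect retry 5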

Setting the close-wait timer
About this task
The close-wait timer specifies the following:
• The maximum amount of time the CPE waits for the response to a session request. The CPE
determines that its session attempt has failed when the timer expires.
• The amount of time the connection to the ACS can be idle before it is terminated. The CPE
terminates the connection to the ACS if no traffic is sent or received before the timer expires.
Procedure
1. Enter system view.
system-view
2. Enter CWMP view.
cwmp
3. Set the close-wait timer.
cwmp cpe wait timeout seconds
By default, the close-wait timer is 30 seconds.

Display and maintenance commands for CWMP


Execute display commands in any view.

Task Command
Display CWMP configuration. display cwmp configuration
Display the current status of CWMP. display cwmp status

CWMP configuration examples


Example: Configuring CWMP
Network configuration
As shown in Figure 72, use HPE IMC BIMS as the ACS to bulk-configure the devices (CPEs), and
assign ACS attributes to the CPEs from the DHCP server.
The configuration files for the CPEs in equipment rooms A and B are configure1.cfg and
configure2.cfg, respectively.

Figure 72 Network diagram
(Figure content: the ACS at 10.185.10.41, the DHCP server at 10.185.10.52, and the DNS server at 10.185.10.60 connect to the same network as the CPEs. CPE 1 through CPE 3 are in equipment room A, and CPE 4 through CPE 6 are in equipment room B. Each CPE connects to the network through GigabitEthernet 1/0/1.)

Table 15 shows the ACS attributes for the CPEs to connect to the ACS.
Table 15 ACS attributes

Item Setting
Preferred ACS URL http://10.185.10.41:9090
ACS username admin
ACS password 12345

Table 16 lists serial numbers of the CPEs.


Table 16 CPE list

Room    Device    Serial number
A       CPE 1     210231A95YH10C000045
A       CPE 2     210235AOLNH12000010
A       CPE 3     210235AOLNH12000015
B       CPE 4     210235AOLNH12000017
B       CPE 5     210235AOLNH12000020
B       CPE 6     210235AOLNH12000022

Configuring the ACS


Figures in this section are for illustration only.
To configure the ACS:
1. Log in to the ACS:

a. Launch a Web browser on the ACS configuration terminal.
b. In the address bar of the Web browser, enter the ACS URL and port number. This example
uses http://10.185.10.41:8080/imc.
c. On the login page, enter the ACS login username and password, and then click Login.
2. Create a CPE group for each equipment room:
a. Select Service > BIMS > CPE Group from the top navigation bar.
The CPE Group page appears.
Figure 73 CPE Group page

b. Click Add.
c. Enter a group name, and then click OK.
Figure 74 Adding a CPE group

d. Repeat the previous two steps to create a CPE group for CPEs in Room B.
3. Add CPEs to the CPE group for each equipment room:
a. Select Service > BIMS > Resource Management > Add CPE from the top navigation bar.
b. On the Add CPE page, configure the following parameters:
− Authentication Type—Select ACS UserName.
− CPE Name—Enter a CPE name.
− ACS Username—Enter admin.
− ACS Password Generated—Select Manual Input.
− ACS Password—Enter a password for ACS authentication.
− ACS Confirm Password—Re-enter the password.
− CPE Model—Select the CPE model.
− CPE Group—Select the CPE group.

Figure 75 Adding a CPE

c. Click OK.
d. Verify that the CPE has been added successfully from the All CPEs page.
Figure 76 Viewing CPEs

e. Repeat the previous steps to add CPE 2 and CPE 3 to the CPE group for Room A, and add
CPEs in Room B to the CPE group for Room B.
4. Configure a configuration template for each equipment room:
a. Select Service > BIMS > Configuration Management > Configuration Templates from
the top navigation bar.

Figure 77 Configuration Templates page

b. Click Import.
c. Select a source configuration file, select Configuration Segment as the template type, and
then click OK.
The created configuration template will be displayed in the Configuration Template list
after a successful file import.

IMPORTANT:
If the first command in the configuration template file is system-view, make sure no
characters exist in front of the command.

Figure 78 Importing a configuration template

Figure 79 Configuration Template list

d. Repeat the previous steps to configure a configuration template for Room B.


5. Add software library entries:
a. Select Service > BIMS > Configuration Management > Software Library from the top
navigation bar.
Figure 80 Software Library page

b. Click Import.
c. Select a source file, and then click OK.
Figure 81 Importing CPE software

d. Repeat the previous steps to add software library entries for CPEs of different models.
6. Create an auto-deployment task for each equipment room:
a. Select Service > BIMS > Configuration Management > Deployment Guide from the top
navigation bar.

Figure 82 Deployment Guide

b. Click By CPE Model from the Auto Deployment Configuration field.


c. Select a configuration template, select Startup Configuration from the File Type to be
Deployed list, and click Select Model to select CPEs in Room A. Then, click OK.
You can search for CPEs by CPE group.
Figure 83 Auto deployment configuration

d. Click OK on the Auto Deploy Configuration page.

Figure 84 Operation result

e. Repeat the previous steps to add a deployment task for CPEs in Room B.
Configuring the DHCP server
In this example, an HPE device is operating as the DHCP server.
1. Configure an IP address pool to assign IP addresses and DNS server address to the CPEs.
This example uses subnet 10.185.10.0/24 for IP address assignment.
# Enable DHCP.
<DHCP_server> system-view
[DHCP_server] dhcp enable
# Enable DHCP server on VLAN-interface 1.
[DHCP_server] interface vlan-interface 1
[DHCP_server-Vlan-interface1] dhcp select server
[DHCP_server-Vlan-interface1] quit
# Exclude the DNS server address 10.185.10.60 and the ACS IP address 10.185.10.41 from
dynamic allocation.
[DHCP_server] dhcp server forbidden-ip 10.185.10.41
[DHCP_server] dhcp server forbidden-ip 10.185.10.60
# Create DHCP address pool 0.
[DHCP_server] dhcp server ip-pool 0
# Assign subnet 10.185.10.0/24 to the address pool, and specify the DNS server address
10.185.10.60 in the address pool.
[DHCP_server-dhcp-pool-0] network 10.185.10.0 mask 255.255.255.0
[DHCP_server-dhcp-pool-0] dns-list 10.185.10.60
2. Configure DHCP Option 43 to contain the ACS URL, username, and password in hexadecimal
format.
[DHCP_server-dhcp-pool-0] option 43 hex
013B687474703A2F2F6163732E64617461626173653A393039302F616373207669636B79203132333
435

Configuring the DNS server


Map http://acs.database:9090 to http://10.185.10.41:9090 on the DNS server. For more information
about DNS configuration, see DNS server documentation.
Connecting the CPEs to the network
# Connect CPE 1 to the network, and then power on the CPE. (Details not shown.)
# Log in to CPE 1 and configure its interface GigabitEthernet 1/0/1 to use DHCP for IP address
acquisition. At startup, the CPE obtains the IP address and ACS information from the DHCP server
to initiate a connection to the ACS. After the connection is established, the CPE interacts with the
ACS to complete autoconfiguration.
<CPE1> system-view
[CPE1] interface gigabitethernet 1/0/1
[CPE1-GigabitEthernet1/0/1] ip address dhcp-alloc

# Repeat the previous steps to configure the other CPEs.


Verifying the configuration
# Execute the display current-configuration command to verify that the running
configurations on CPEs are the same as the configurations issued by the ACS.

Configuring EAA
About EAA
Embedded Automation Architecture (EAA) is a monitoring framework that enables you to self-define
monitored events and actions to take in response to an event. It allows you to create monitor policies
by using the CLI or Tcl scripts.

EAA framework
EAA framework includes a set of event sources, a set of event monitors, a real-time event manager
(RTM), and a set of user-defined monitor policies, as shown in Figure 85.
Figure 85 EAA framework
(Figure content: the event sources (CLI, Syslog, SNMP, SNMP-Notification, Hotplug, Interface, Process, and Track) are monitored by event monitors (EM), which notify the RTM. The RTM runs the CLI-defined and Tcl-defined EAA monitor policies.)

Event sources
Event sources are software or hardware modules that trigger events (see Figure 85).
For example, the CLI module triggers an event when you enter a command. The Syslog module (the
information center) triggers an event when it receives a log message.
Event monitors
EAA creates one event monitor to monitor the system for the event specified in each monitor policy.
An event monitor notifies the RTM to run the monitor policy when the monitored event occurs.
RTM
RTM manages the creation, state machine, and execution of monitor policies.

EAA monitor policies
A monitor policy specifies the event to monitor and actions to take when the event occurs.
You can configure EAA monitor policies by using the CLI or Tcl.
A monitor policy contains the following elements:
• One event.
• A minimum of one action.
• A minimum of one user role.
• One running time setting.
For more information about these elements, see "Elements in a monitor policy."

Elements in a monitor policy


Elements in an EAA monitor policy include event, action, user role, and runtime.
Event
Table 17 shows types of events that EAA can monitor.
Table 17 Monitored events

Event type          Description
CLI                 CLI event occurs in response to monitored operations performed at the CLI. For
                    example, a command is entered, a question mark (?) is entered, or the Tab key is
                    pressed to complete a command.
Syslog              Syslog event occurs when the information center receives the monitored log within a
                    specific period.
                    NOTE:
                    The log that is generated by the EAA RTM does not trigger the monitor policy to run.
Process             Process event occurs in response to a state change of the monitored process (such as
                    an exception, shutdown, start, or restart). Both manual and automatic state changes
                    can cause the event to occur.
Hotplug             Hot-swapping event occurs when the monitored member device joins or leaves the
                    IRF fabric or a card is inserted in or removed from the monitored slot.
Interface           Each interface event is associated with two user-defined thresholds: start and restart.
                    An interface event occurs when the monitored interface traffic statistic crosses the
                    start threshold in the following situations:
                    • The statistic crosses the start threshold for the first time.
                    • The statistic crosses the start threshold each time after it crosses the restart
                      threshold.
SNMP                Each SNMP event is associated with two user-defined thresholds: start and restart.
                    SNMP event occurs when the monitored MIB variable's value crosses the start
                    threshold in the following situations:
                    • The monitored variable's value crosses the start threshold for the first time.
                    • The monitored variable's value crosses the start threshold each time after it
                      crosses the restart threshold.
SNMP-Notification   SNMP-Notification event occurs when the monitored MIB variable's value in an SNMP
                    notification matches the specified condition. For example, the broadcast traffic rate
                    on an Ethernet interface reaches or exceeds 30%.
Track               Track event occurs when the state of the track entry changes from Positive to Negative
                    or from Negative to Positive. If you specify multiple track entries for a policy, EAA
                    triggers the policy only when the state of all the track entries changes from Positive
                    (Negative) to Negative (Positive).
                    If you set a suppress time for a policy, the timer starts when the policy is triggered.
                    The system does not process the messages that report the track entry state change
                    from Positive (Negative) to Negative (Positive) until the timer times out.

Action
You can create a series of order-dependent actions to take in response to the event specified in the
monitor policy.
The following are available actions:
• Executing a command.
• Sending a log.
• Enabling an active/standby switchover.
• Executing a reboot without saving the running configuration.
User role
For EAA to execute an action in a monitor policy, you must assign the policy the user role that has
access to the action-specific commands and resources. If EAA lacks access to an action-specific
command or resource, EAA does not perform the action and all the subsequent actions.
For example, a monitor policy has four actions numbered from 1 to 4. The policy has user roles that
are required for performing actions 1, 3, and 4. However, it does not have the user role required for
performing action 2. When the policy is triggered, EAA executes only action 1.
For more information about user roles, see RBAC in Fundamentals Configuration Guide.
Runtime
The runtime limits the amount of time that the monitor policy runs its actions from the time it is
triggered. This setting prevents a policy from running its actions permanently to occupy resources.

EAA environment variables


EAA environment variables decouple the configuration of action arguments from the monitor policy
so you can modify a policy easily.
An EAA environment variable is defined as a <variable_name variable_value> pair and can be used
in different policies. When you define an action, you can enter a variable name with a leading dollar
sign ($variable_name). EAA will replace the variable name with the variable value when it performs
the action.
To change the value for an action argument, modify the value specified in the variable pair instead of
editing each affected monitor policy.
EAA environment variables include system-defined variables and user-defined variables.
System-defined variables
System-defined variables are provided by default, and they cannot be created, deleted, or modified
by users. System-defined variable names start with an underscore (_) sign. The variable values are
set automatically depending on the event setting in the policy that uses the variables.
System-defined variables include the following types:
• Public variable—Available for any events.
• Event-specific variable—Available only for a type of event. The hotplug event-specific
variables are _slot and _subslot. When a member device in slot 1 joins or leaves the IRF fabric,

the value of _slot is 1. When a member device in slot 2 joins or leaves the IRF fabric, the value
of _slot is 2.
Table 18 shows all system-defined variables.
Table 18 System-defined EAA environment variables by event type

Event               Variable name and description
Any event           _event_id: Event ID
                    _event_type: Event type
                    _event_type_string: Event type description
                    _event_time: Time when the event occurs
                    _event_severity: Severity level of an event
CLI                 _cmd: Commands that are matched
Syslog              _syslog_pattern: Log message content
Hotplug             _slot: ID of the member device that joins or leaves the IRF fabric
                    _subslot: ID of the subslot where subcard hot-swapping occurs
Interface           _ifname: Interface name
SNMP                _oid: OID of the MIB variable where an SNMP operation is performed
                    _oid_value: Value of the MIB variable
SNMP-Notification   _oid: OID that is included in the SNMP notification
Process             _process_name: Process name

User-defined variables
You can use user-defined variables for all types of events.
User-defined variable names can contain digits, characters, and the underscore sign (_), except that
the underscore sign cannot be the leading character.

Configuring a user-defined EAA environment variable
About this task
Configure user-defined EAA environment variables so that you can use them when creating EAA
monitor policies.
Procedure
1. Enter system view.
system-view
2. Configure a user-defined EAA environment variable.
rtm environment var-name var-value
For the system-defined variables, see Table 18.

Configuring a monitor policy
Restrictions and guidelines
Make sure the actions in different policies do not conflict. Policy execution result will be unpredictable
if policies that conflict in actions are running concurrently.
You can assign the same policy name to a CLI-defined policy and a Tcl-defined policy. However, you
cannot assign the same name to policies that are the same type.
A monitor policy supports only one event and runtime. If you configure multiple events for a policy,
the most recent one takes effect.
A monitor policy supports a maximum of 64 valid user roles. User roles added after this limit is
reached do not take effect.

Configuring a monitor policy from the CLI


Restrictions and guidelines
You can configure a series of actions to be executed in response to the event specified in a monitor
policy. EAA executes the actions in ascending order of action IDs. When you add actions to a policy,
you must make sure the execution order is correct. If two actions have the same ID, the most recent
one takes effect.
Procedure
1. Enter system view.
system-view
2. (Optional.) Set the size for the EAA-monitored log buffer.
rtm event syslog buffer-size buffer-size
By default, the EAA-monitored log buffer stores a maximum of 50000 logs.
3. Create a CLI-defined policy and enter its view.
rtm cli-policy policy-name
4. Configure an event for the policy.
 Configure a CLI event.
event cli { async [ skip ] | sync } mode { execute | help | tab } pattern
regular-exp
 Configure a hotplug event.
event hotplug [ insert | remove ] slot slot-number [ subslot
subslot-number ]
 Configure an interface event.
event interface interface-list monitor-obj monitor-obj start-op
start-op start-val start-val restart-op restart-op restart-val
restart-val [ interval interval ]
 Configure a process event.
event process { exception | restart | shutdown | start } [ name
process-name [ instance instance-id ] ] [ slot slot-number ]
 Configure an SNMP event.
event snmp oid oid monitor-obj { get | next } start-op start-op
start-val start-val restart-op restart-op restart-val restart-val
[ interval interval ]

 Configure an SNMP-Notification event.
event snmp-notification oid oid oid-val oid-val op op [ drop ]
 Configure a Syslog event.
event syslog priority priority msg msg occurs times period period
 Configure a track event.
event track track-list state { negative | positive } [ suppress-time
suppress-time ]
By default, a monitor policy does not contain an event.
If you configure multiple events for a policy, the most recent one takes effect.
5. Configure the actions to take when the event occurs.
Choose the following tasks as needed:
 Configure a CLI action.
action number cli command-line
 Configure a reboot action.
action number reboot [ slot slot-number [ subslot subslot-number ] ]
 Configure an active/standby switchover action.
action number switchover
 Configure a logging action.
action number syslog priority priority facility local-number msg
msg-body
By default, a monitor policy does not contain any actions.
6. (Optional.) Assign a user role to the policy.
user-role role-name
By default, a monitor policy contains user roles that its creator had at the time of policy creation.
An EAA policy cannot have both the security-audit user role and any other user roles.
Any previously assigned user roles are automatically removed when you assign the
security-audit user role to the policy. The previously assigned security-audit user
role is automatically removed when you assign any other user roles to the policy.
7. (Optional.) Configure the policy action runtime.
running-time time
The default policy action runtime is 20 seconds.
If you configure multiple action runtimes for a policy, the most recent one takes effect.
8. Enable the policy.
commit
By default, CLI-defined policies are not enabled.
A CLI-defined policy can take effect only after you perform this step.

Configuring a monitor policy by using Tcl


About this task
A Tcl script contains two parts: Line 1 and the other lines.
• Line 1
Line 1 defines the event, user roles, and policy action runtime. After you create and enable a Tcl
monitor policy, the device immediately parses, delivers, and executes Line 1.
Line 1 must use the following format:

::platformtools::rtm::event_register event-type arg1 arg2 arg3 …
user-role role-name1 | [ user-role role-name2 | [ … ] ] [ running-time
running-time ]
 The arg1 arg2 arg3 … arguments represent event matching rules. If an argument value
contains spaces, use double quotation marks ("") to enclose the value. For example, "a b c."
 The configuration requirements for the event-type, user-role, and running-time
arguments are the same as those for a CLI-defined monitor policy.
• The other lines
From the second line, the Tcl script defines the actions to be executed when the monitor policy
is triggered. You can use multiple lines to define multiple actions. The system executes these
actions in sequence. The following actions are available:
 Standard Tcl commands.
 EAA-specific Tcl actions:
− switchover ( ::platformtools::rtm::action switchover )
− syslog (::platformtools::rtm::action syslog priority priority
facility local-number msg msg-body). For more information about these
arguments, see EAA commands in Network Management and Monitoring Command
Reference.
 Commands supported by the device.
Restrictions and guidelines
To revise the Tcl script of a policy, you must suspend all monitor policies first, and then resume the
policies after you finish revising the script. The system cannot execute a Tcl-defined policy if you edit
its Tcl script without first suspending these policies.
Procedure
1. Download the Tcl script file to the device by using FTP or TFTP.
For more information about using FTP and TFTP, see Fundamentals Configuration Guide.
2. Create and enable a Tcl monitor policy.
a. Enter system view.
system-view
b. Create a Tcl-defined policy and bind it to the Tcl script file.
rtm tcl-policy policy-name tcl-filename
By default, no Tcl policies exist.
Make sure the script file is saved on all IRF member devices. This practice ensures that the
policy can run correctly after a master/subordinate switchover occurs or the member device
where the script file resides leaves the IRF.

Suspending monitor policies


About this task
This task suspends all CLI-defined and Tcl-defined monitor policies. If a policy is running when you
perform this task, the system suspends the policy after it executes all the actions.
Restrictions and guidelines
To restore the operation of the suspended policies, execute the undo rtm scheduler suspend
command.
Procedure
1. Enter system view.

system-view
2. Suspend monitor policies.
rtm scheduler suspend

Display and maintenance commands for EAA


Execute display commands except for the display this command in any view.

Task                                                                                                     Command
Display the running configuration of all CLI-defined monitor policies.                                  display current-configuration
Display user-defined EAA environment variables.                                                          display rtm environment [ var-name ]
Display EAA monitor policies.                                                                            display rtm policy { active | registered [ verbose ] } [ policy-name ]
Display the running configuration of a CLI-defined monitor policy in CLI-defined monitor policy view.   display this

EAA configuration examples


Example: Configuring a CLI event monitor policy by using Tcl
Network configuration
As shown in Figure 86, use Tcl to create a monitor policy on the Device. This policy must meet the
following requirements:
• EAA sends the log message "rtm_tcl_test is running" when a command that contains the
display this string is entered.
• The system executes the command only after it executes the policy successfully.
Figure 86 Network diagram
(Figure content: the Device acts as a TFTP client at 1.1.1.1/16 and reaches the TFTP server at 1.2.1.1/16 across the Internet. A PC connects to the Device.)

Procedure
# Edit a Tcl script file (rtm_tcl_test.tcl, in this example) for EAA to send the message "rtm_tcl_test is
running" when a command that contains the display this string is executed.
::platformtools::rtm::event_register cli sync mode execute pattern display this
user-role network-admin
::platformtools::rtm::action syslog priority 1 facility local4 msg rtm_tcl_test is
running

# Download the Tcl script file from the TFTP server at 1.2.1.1.
<Sysname> tftp 1.2.1.1 get rtm_tcl_test.tcl

# Create Tcl-defined policy test and bind it to the Tcl script file.
<Sysname> system-view

[Sysname] rtm tcl-policy test rtm_tcl_test.tcl
[Sysname] quit

Verifying the configuration


# Execute the display rtm policy registered command to verify that a Tcl-defined policy
named test is displayed in the command output.
<Sysname> display rtm policy registered
Total number: 1
Type Event TimeRegistered PolicyName
TCL CLI Aug 01 09:47:12 2019 test

# Enable the information center to output log messages to the current monitoring terminal.
<Sysname> terminal monitor
The current terminal is enabled to display logs.
<Sysname> system-view
[Sysname] info-center enable
Information center is enabled.
[Sysname] quit

# Execute the display this command. Verify that the system displays an "rtm_tcl_test is running"
message and a message that the policy is being executed successfully.
[Sysname] display this
%Aug 1 09:50:04:634 2019 Sysname RTM/1/RTM_ACTION: rtm_tcl_test is running
%Aug 1 09:50:04:636 2019 Sysname RTM/6/RTM_POLICY: TCL policy test is running
successfully.
#
return

Example: Configuring a CLI event monitor policy from the CLI


Network configuration
Configure a policy from the CLI to monitor the event that occurs when a question mark (?) is entered
at the command line that contains letters and digits.
When the event occurs, the system executes the command and sends the log message "hello world"
to the information center.
Procedure
# Create CLI-defined policy test and enter its view.
<Sysname> system-view
[Sysname] rtm cli-policy test

# Add a CLI event that occurs when a question mark (?) is entered at any command line that contains
letters and digits.
[Sysname-rtm-test] event cli async mode help pattern [a-zA-Z0-9]

# Add an action that sends the message "hello world" with a priority of 4 from the logging facility
local3 when the event occurs.
[Sysname-rtm-test] action 0 syslog priority 4 facility local3 msg "hello world"

# Add an action that enters system view when the event occurs.
[Sysname-rtm-test] action 2 cli system-view

# Add an action that creates VLAN 2 when the event occurs.


[Sysname-rtm-test] action 3 cli vlan 2

# Set the policy action runtime to 2000 seconds.
[Sysname-rtm-test] running-time 2000

# Specify the network-admin user role for executing the policy.


[Sysname-rtm-test] user-role network-admin

# Enable the policy.


[Sysname-rtm-test] commit

Verifying the configuration


# Execute the display rtm policy registered command to verify that a CLI-defined policy
named test is displayed in the command output.
[Sysname-rtm-test] display rtm policy registered
Total number: 1
Type Event TimeRegistered PolicyName
CLI CLI Aug 1 14:56:50 2019 test

# Enable the information center to output log messages to the current monitoring terminal.
[Sysname-rtm-test] return
<Sysname> terminal monitor
The current terminal is enabled to display logs.
<Sysname> system-view
[Sysname] info-center enable
Information center is enabled.
[Sysname] quit

# Enter a question mark (?) at a command line that contains a letter d. Verify that the system displays
a "hello world" message and a message that the policy is being executed successfully on the
terminal screen.
<Sysname> d?
debugging
delete
diagnostic-logfile
dir
display

<Sysname>d%Aug 1 14:57:20:218 2019 Sysname RTM/4/RTM_ACTION: "hello world"


%Aug 1 14:58:11:170 2019 Sysname RTM/6/RTM_POLICY: CLI policy test is running
successfully.

Example: Configuring a track event monitor policy from the CLI
Network configuration
As shown in Figure 87, Device A has established BGP sessions with Device D and Device E. Traffic
from Device D and Device E to the Internet is forwarded through Device A.
Configure a CLI-defined EAA monitor policy on Device A to disconnect the sessions with Device D
and Device E when GigabitEthernet 1/0/1 connected to Device C is down. In this way, traffic from
Device D and Device E to the Internet can be forwarded through Device B.

Figure 87 Network diagram
(Figure content: Device A and Device B each connect to Device C (10.2.1.2) through GigabitEthernet 1/0/1, and Device C provides access to the IP network. Device D (10.3.1.2) and Device E (10.3.2.2) connect to both Device A and Device B.)

Procedure

# Display BGP peer information for Device A.


<DeviceA> display bgp peer ipv4

BGP local router ID: 1.1.1.1


Local AS number: 100
Total number of peers: 3 Peers in established state: 3

* - Dynamically created peer


Peer AS MsgRcvd MsgSent OutQ PrefRcv Up/Down State

10.2.1.2 200 13 16 0 0 00:16:12 Established


10.3.1.2 300 13 16 0 0 00:10:34 Established
10.3.2.2 300 13 16 0 0 00:10:38 Established

# Create track entry 1 and associate it with the link state of GigabitEthernet 1/0/1.
<Device A> system-view
[Device A] track 1 interface gigabitethernet 1/0/1

# Configure a CLI-defined EAA monitor policy so that the system automatically disables session
establishment with Device D and Device E when GigabitEthernet 1/0/1 is down.
[Device A] rtm cli-policy test
[Device A-rtm-test] event track 1 state negative
[Device A-rtm-test] action 0 cli system-view
[Device A-rtm-test] action 1 cli bgp 100

[Device A-rtm-test] action 2 cli peer 10.3.1.2 ignore
[Device A-rtm-test] action 3 cli peer 10.3.2.2 ignore
[Device A-rtm-test] user-role network-admin
[Device A-rtm-test] commit
[Device A-rtm-test] quit

Verifying the configuration


# Shut down GigabitEthernet 1/0/1.
[Device A] interface gigabitethernet 1/0/1
[Device A-GigabitEthernet1/0/1] shutdown

# Execute the display bgp peer ipv4 command on Device A to display BGP peer information.
If no BGP peer information is displayed, Device A does not have any BGP peers.

Example: Configuring a CLI event monitor policy with EAA environment variables from the CLI
Network configuration
Define an environment variable to match the IP address 1.1.1.1.
Configure a policy from the CLI to monitor the event that occurs when a command line that contains
loopback0 is executed. In the policy, use the environment variable for IP address assignment.
When the event occurs, the system performs the following tasks:
• Creates the Loopback 0 interface.
• Assigns 1.1.1.1/24 to the interface.
• Sends the matching command line to the information center.
Procedure
# Configure an EAA environment variable for IP address assignment. The variable name is
loopback0IP, and the variable value is 1.1.1.1.
<Sysname> system-view
[Sysname] rtm environment loopback0IP 1.1.1.1

# Create the CLI-defined policy test and enter its view.


[Sysname] rtm cli-policy test

# Add a CLI event that occurs when a command line that contains loopback0 is executed.
[Sysname-rtm-test] event cli async mode execute pattern loopback0

# Add an action that enters system view when the event occurs.
[Sysname-rtm-test] action 0 cli system-view

# Add an action that creates the interface Loopback 0 and enters loopback interface view.
[Sysname-rtm-test] action 1 cli interface loopback 0

# Add an action that assigns the IP address 1.1.1.1 to Loopback 0. The loopback0IP variable is
used in the action for IP address assignment.
[Sysname-rtm-test] action 2 cli ip address $loopback0IP 24

# Add an action that sends the matching loopback0 command with a priority of 0 from the logging
facility local7 when the event occurs.
[Sysname-rtm-test] action 3 syslog priority 0 facility local7 msg $_cmd

# Specify the network-admin user role for executing the policy.


[Sysname-rtm-test] user-role network-admin

# Enable the policy.
[Sysname-rtm-test] commit
[Sysname-rtm-test] return
<Sysname>

Verifying the configuration


# Enable the information center to output log messages to the current monitoring terminal.
<Sysname> terminal monitor
The current terminal is enabled to display logs.
<Sysname> terminal log level debugging
<Sysname> system-view
[Sysname] info-center enable
Information center is enabled.

# Execute the loopback0 command. Verify that the system displays a "loopback0" message and a
message that the policy is being executed successfully on the terminal screen.
[Sysname] interface loopback0
[Sysname-LoopBack0]%Aug 1 09:46:10:592 2019 Sysname RTM/7/RTM_ACTION: interface
loopback0
%Aug 1 09:46:10:613 2019 Sysname RTM/6/RTM_POLICY: CLI policy test is running
successfully.

# Verify that Loopback 0 has been created and its IP address is 1.1.1.1.
[Sysname-LoopBack0] display interface loopback brief
Brief information on interfaces in route mode:
Link: ADM - administratively down; Stby - standby
Protocol: (s) - spoofing
Interface Link Protocol Primary IP Description
Loop0 UP UP(s) 1.1.1.1

[Sysname-LoopBack0]

Monitoring and maintaining processes
About monitoring and maintaining processes
The system software of the device is a full-featured, modular, and scalable network operating system
based on the Linux kernel. The system software features run the following types of independent
processes:
• User process—Runs in user space. Most system software features run user processes. Each
process runs in an independent space so the failure of a process does not affect other
processes. The system automatically monitors user processes. The system supports
preemptive multithreading. A process can run multiple threads to support multiple activities.
Whether a process supports multithreading depends on the software implementation.
• Kernel thread—Runs in kernel space. A kernel thread executes kernel code. It has a higher
security level than a user process. If a kernel thread fails, the system breaks down. You can
monitor the running status of kernel threads.

Process monitoring and maintenance tasks at a glance
To monitor and maintain processes, perform the following tasks:
• (Optional.) Starting or stopping a third-party process
 Starting a third-party process
 Stopping a third-party process
• Monitoring and maintaining user processes
 Monitoring and maintaining processes
The commands in this section apply to both user processes and kernel threads.
 Monitoring and maintaining user processes
The commands in this section apply only to user processes.
• Monitoring and maintaining kernel threads
 Monitoring and maintaining processes
The commands in this section apply to both user processes and kernel threads.
 Monitoring and maintaining kernel threads
The commands in this section apply only to kernel threads.

Starting or stopping a third-party process


About third-party processes
Third-party processes do not start up automatically. Use this feature to start or stop a third-party
process, such as Puppet or Chef.

Starting a third-party process
Restrictions and guidelines
If you execute the third-part-process start command multiple times but specify the same
process name, whether the command can be executed successfully depends on the process. You
can use the display current-configuration | include third-part-process
command to view the result.
Procedure
1. Enter system view.
system-view
2. Start a third-party process.
third-part-process start name process-name [ arg args ]

Stopping a third-party process


1. Display the IDs of third party processes.
display process all
This command is available in any view. "Y" in the THIRD field from the output indicates a
third-party process, and the PID field indicates the ID of the process.
2. Enter system view.
system-view
3. Stop a third-party process.
third-part-process stop pid pid&<1-10>
This command can be used to stop only processes started by the third-part-process
start command.
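A brief sketch follows. The process name and PID are placeholders; obtain the actual PID with the display process all command before stopping the process:
<Sysname> system-view
[Sysname] third-part-process start name chef-client
[Sysname] third-part-process stop pid 1792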

Monitoring and maintaining processes


About this task
The commands in this section apply to both user processes and kernel threads. You can use the
commands for the following purposes:
• Display the overall memory usage.
• Display the running processes and their memory and CPU usage.
• Locate abnormal processes.
If a process consumes excessive memory or CPU resources, the system identifies the process as an
abnormal process.
• If an abnormal process is a user process, troubleshoot the process as described in "Monitoring
and maintaining user processes."
• If an abnormal process is a kernel thread, troubleshoot the process as described in "Monitoring
and maintaining kernel threads."
Procedure
Execute the following commands in any view.

Task                                   Command
Display memory usage.                  display memory [ summary ] [ slot slot-number [ cpu cpu-number ] ]
Display process state information.     display process [ all | job job-id | name process-name ] [ slot slot-number [ cpu cpu-number ] ]
Display CPU usage for all processes.   display process cpu [ slot slot-number [ cpu cpu-number ] ]
Monitor process running state.         monitor process [ dumbtty ] [ iteration number ] [ slot slot-number [ cpu cpu-number ] ]
Monitor thread running state.          monitor thread [ dumbtty ] [ iteration number ] [ slot slot-number [ cpu cpu-number ] ]

For more information about the display memory command, see Fundamentals Command
Reference.

Monitoring and maintaining user processes


About monitoring and maintaining user processes
Use this feature to monitor abnormal user processes and locate problems.

Configuring core dump


About this task
The core dump feature enables the system to generate a core dump file each time a process crashes
until the maximum number of core dump files is reached. A core dump file stores information about
the process. You can send the core dump files to Hewlett Packard Enterprise Support to
troubleshoot the problems.
Restrictions and guidelines
Core dump files consume storage resources. Enable core dump only for processes that might have
problems.
Procedure
Execute the following commands in user view:
1. (Optional.) Specify the directory for saving core dump files.
exception filepath directory
By default, the directory for saving core dump files is the root directory of the default file system.
For more information about the default file system, see file system management in
Fundamentals Configuration Guide.
2. Enable core dump for a process and specify the maximum number of core dump files, or
disable core dump for a process.
process core { maxcore value | off } { job job-id | name process-name }
By default, a process generates a core dump file for the first exception and does not generate
any core dump files for subsequent exceptions.
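A brief sketch that saves core dump files to the root directory of the flash memory and allows the process snmpd to generate a maximum of two core dump files (the directory and process name are examples only):
<Sysname> exception filepath flash:/
<Sysname> process core maxcore 2 name snmpd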

Display and maintenance commands for user processes
Execute display commands in any view and other commands in user view.

Task Command
display exception context [ count
Display context information for process exceptions. value ] [ slot slot-number [ cpu
cpu-number ] ]
display exception filepath [ slot
Display the core dump file directory.
slot-number [ cpu cpu-number ] ]
display process log [ slot
Display log information for all user processes.
slot-number [ cpu cpu-number ] ]
display process memory [ slot
Display memory usage for all user processes.
slot-number [ cpu cpu-number ] ]
display process memory heap job
Display heap memory usage for a user process. job-id [ verbose ] [ slot
slot-number [ cpu cpu-number ] ]
display process memory heap job
Display memory content starting from a specified job-id address starting-address
memory block for a user process. length memory-length [ slot
slot-number [ cpu cpu-number ] ]
display process memory heap job
Display the addresses of memory blocks with a job-id size memory-size [ offset
specified size used by a user process. offset-size ] [ slot slot-number
[ cpu cpu-number ] ]
reset exception context [ slot
Clear context information for process exceptions.
slot-number [ cpu cpu-number ] ]

Monitoring and maintaining kernel threads


Configuring kernel thread deadloop detection
About this task
Kernel threads share resources. If a kernel thread monopolizes the CPU, other threads cannot run,
resulting in a deadloop.
This feature enables the device to detect deadloops. If a thread occupies the CPU for a specific
interval, the device determines that a deadloop has occurred and generates a deadloop message.
Restrictions and guidelines
Change kernel thread deadloop detection settings only under the guidance of Hewlett Packard
Enterprise Support. Inappropriate configuration can cause system breakdown.
Procedure
1. Enter system view.
system-view
2. Enable kernel thread deadloop detection.

monitor kernel deadloop enable [ slot slot-number [ cpu cpu-number
[ core core-number&<1-64> ] ] ]
By default, kernel thread deadloop detection is enabled.
3. (Optional.) Set the threshold for identifying a kernel thread deadloop.
monitor kernel deadloop time time [ slot slot-number [ cpu
cpu-number ] ]
By default, the threshold for identifying a kernel thread deadloop is 20 seconds.
4. (Optional.) Exclude a kernel thread from kernel thread deadloop detection.
monitor kernel deadloop exclude-thread tid [ slot slot-number [ cpu
cpu-number ] ]
When enabled, kernel thread deadloop detection monitors all kernel threads by default.
5. (Optional.) Specify the action to be taken in response to a kernel thread deadloop.
monitor kernel deadloop action { reboot | record-only } [ slot
slot-number [ cpu cpu-number ] ]
The default action is reboot.
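A brief sketch that raises the deadloop detection threshold to 30 seconds and changes the action to record-only (example values only; make such changes only under the guidance of Hewlett Packard Enterprise Support):
<Sysname> system-view
[Sysname] monitor kernel deadloop time 30
[Sysname] monitor kernel deadloop action record-only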

Configuring kernel thread starvation detection


About this task
Starvation occurs when a thread is unable to access shared resources.
Kernel thread starvation detection enables the system to detect and report thread starvation. If a
thread is not executed within a specific interval, the system determines that a starvation has
occurred and generates a starvation message.
Thread starvation does not impact system operation. A starved thread can automatically run when
certain conditions are met.
Restrictions and guidelines
Configure kernel thread starvation detection only under the guidance of Hewlett Packard Enterprise
Support. Inappropriate configuration can cause system breakdown.
Procedure
1. Enter system view.
system-view
2. Enable kernel thread starvation detection.
monitor kernel starvation enable [ slot slot-number [ cpu cpu-number ] ]
By default, kernel thread starvation detection is disabled.
3. (Optional.) Set the threshold for identifying a kernel thread starvation.
monitor kernel starvation time time [ slot slot-number [ cpu
cpu-number ] ]
By default, the threshold for identifying a kernel thread starvation is 120 seconds.
4. (Optional.) Exclude a kernel thread from kernel thread starvation detection.
monitor kernel starvation exclude-thread tid [ slot slot-number [ cpu
cpu-number ] ]
When enabled, kernel thread starvation detection monitors all kernel threads by default.
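A brief sketch that enables starvation detection and raises the threshold to 180 seconds (example values only; make such changes only under the guidance of Hewlett Packard Enterprise Support):
<Sysname> system-view
[Sysname] monitor kernel starvation enable
[Sysname] monitor kernel starvation time 180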

Display and maintenance commands for kernel threads


Execute display commands in any view and reset commands in user view.

Task                                                         Command
Display kernel thread deadloop detection configuration.      display kernel deadloop configuration [ slot slot-number [ cpu cpu-number ] ]
Display kernel thread deadloop information.                  display kernel deadloop show-number [ offset ] [ verbose ] [ slot slot-number [ cpu cpu-number ] ]
Display kernel thread exception information.                 display kernel exception show-number [ offset ] [ verbose ] [ slot slot-number [ cpu cpu-number ] ]
Display kernel thread reboot information.                    display kernel reboot show-number [ offset ] [ verbose ] [ slot slot-number [ cpu cpu-number ] ]
Display kernel thread starvation detection configuration.    display kernel starvation configuration [ slot slot-number [ cpu cpu-number ] ]
Display kernel thread starvation information.                display kernel starvation show-number [ offset ] [ verbose ] [ slot slot-number [ cpu cpu-number ] ]
Clear kernel thread deadloop information.                    reset kernel deadloop [ slot slot-number [ cpu cpu-number ] ]
Clear kernel thread exception information.                   reset kernel exception [ slot slot-number [ cpu cpu-number ] ]
Clear kernel thread reboot information.                      reset kernel reboot [ slot slot-number [ cpu cpu-number ] ]
Clear kernel thread starvation information.                  reset kernel starvation [ slot slot-number [ cpu cpu-number ] ]

Configuring port mirroring
About port mirroring
Port mirroring copies the packets passing through a port to a port that connects to a data monitoring
device for packet analysis.

Terminology
The following terms are used in port mirroring configuration.
Mirroring source
The mirroring sources can be one or more monitored ports called source ports.
Packets passing through mirroring sources are copied to a port connecting to a data monitoring
device for packet analysis. The copies are called mirrored packets.
Source device
The device where the mirroring sources reside is called a source device.
Mirroring destination
The mirroring destination connects to a data monitoring device and is the destination port (also
known as the monitor port) of mirrored packets. Mirrored packets are sent out of the monitor port to
the data monitoring device.
A monitor port might receive multiple copies of a packet when it monitors multiple mirroring sources.
For example, two copies of a packet are received on Port A when the following conditions exist:
• Port A is monitoring bidirectional traffic of Port B and Port C on the same device.
• The packet travels from Port B to Port C.
Destination device
The device where the monitor port resides is called the destination device.
Mirroring direction
The mirroring direction specifies the direction of the traffic that is copied on a mirroring source.
• Inbound—Copies packets received.
• Outbound—Copies packets sent.
• Bidirectional—Copies packets received and sent.
Mirroring group
Port mirroring is implemented through mirroring groups. Mirroring groups can be classified into local
mirroring groups, remote source groups, and remote destination groups.
Reflector port, egress port, and remote probe VLAN
Reflector ports, remote probe VLANs, and egress ports are used for Layer 2 remote port mirroring.
The remote probe VLAN is a dedicated VLAN for transmitting mirrored packets to the destination
device. Both the reflector port and egress port reside on a source device and send mirrored packets
to the remote probe VLAN.
On port mirroring devices, all ports except source, destination, reflector, and egress ports are called
common ports.

Port mirroring classification
Port mirroring can be classified into local port mirroring and remote port mirroring.
• Local port mirroring—Also known as Switch Port Analyzer (SPAN). In local port mirroring, the
source device is directly connected to a data monitoring device. The source device also acts as
the destination device and forwards mirrored packets directly to the data monitoring device.
• Remote port mirroring—The source device is not directly connected to a data monitoring
device. The source device sends mirrored packets to the destination device, which forwards the
packets to the data monitoring device.
Remote port mirroring is also known as Layer 2 remote port mirroring because the source
device and destination device are on the same Layer 2 network.

Local port mirroring (SPAN)


Figure 88 Local port mirroring implementation
(Figure content: the host connects to source port Port A and the data monitoring device connects to monitor port Port B on the same device. Original packets received on Port A are copied to Port B as mirrored packets.)

As shown in Figure 88, the source port (Port A) and the monitor port (Port B) reside on the same
device. Packets received on Port A are copied to Port B. Port B then forwards the packets to the data
monitoring device for analysis.

Layer 2 remote port mirroring (RSPAN)


In Layer 2 remote port mirroring, the mirroring sources and destination reside on different devices
and are in different mirroring groups.
A remote source group is a mirroring group that contains the mirroring sources. A remote destination
group is a mirroring group that contains the mirroring destination. Intermediate devices are the
devices between the source device and the destination device.
Layer 2 remote port mirroring can be implemented through the reflector port method or the egress
port method.
Reflector port method
In Layer 2 remote port mirroring that uses the reflector port method, packets are mirrored as follows:
1. The source device copies packets received on the mirroring sources to the reflector port.
2. The reflector port broadcasts the mirrored packets in the remote probe VLAN.
3. The intermediate devices transmit the mirrored packets to the destination device through the
remote probe VLAN.

4. Upon receiving the mirrored packets, the destination device determines whether the ID of the
mirrored packets is the same as the remote probe VLAN ID. If the two VLAN IDs match, the
destination device forwards the mirrored packets to the data monitoring device through the
monitor port.
Figure 89 Layer 2 remote port mirroring implementation through the reflector port method
(Figure content: the source device copies packets received on the source port to the reflector port, which broadcasts them in the remote probe VLAN. The mirrored packets traverse the intermediate device to the destination device, which forwards them out of the monitor port to the data monitoring device.)

Egress port method


In Layer 2 remote port mirroring that uses the egress port method, packets are mirrored as follows:
1. The source device copies packets received on the mirroring sources to the egress port.
2. The egress port forwards the mirrored packets to the intermediate devices.
3. The intermediate devices flood the mirrored packets in the remote probe VLAN and transmit the
mirrored packets to the destination device.
4. Upon receiving the mirrored packets, the destination device determines whether the ID of the
mirrored packets is the same as the remote probe VLAN ID. If the two VLAN IDs match, the
destination device forwards the mirrored packets to the data monitoring device through the
monitor port.

Figure 90 Layer 2 remote port mirroring implementation through the egress port method
(Figure content: the source device copies packets received on the source port to the egress port. The mirrored packets travel through the remote probe VLAN across the intermediate device to the destination device, which forwards them out of the monitor port to the data monitoring device.)

Restrictions and guidelines: Port mirroring configuration
The reflector port method for Layer 2 remote port mirroring can be used to implement local port
mirroring with multiple data monitoring devices.
In the reflector port method, the reflector port broadcasts mirrored packets in the remote probe VLAN.
By assigning the ports that connect to data monitoring devices to the remote probe VLAN, you can
implement local port mirroring to mirror packets to multiple data monitoring devices. The egress port
method cannot implement local port mirroring in this way.
For inbound traffic mirroring, the VLAN tag in the original packet is copied to the mirrored packet.
For outbound traffic mirroring, the mirrored packet carries the VLAN tag of the egress interface.

Configuring local port mirroring (SPAN)


Restrictions and guidelines for local port mirroring configuration
A local mirroring group takes effect only after it is configured with the monitor port and mirroring
sources.
A Layer 3 aggregate interface cannot be configured as a source port for a local mirroring group.
A Layer 2 or Layer 3 aggregate interface cannot be configured as the monitor port for a local
mirroring group.

Local port mirroring tasks at a glance


To configure local port mirroring, perform the following tasks:

1. Configuring mirroring sources
Choose one of the following tasks:
 Configuring source ports
2. Configuring the monitor port

Creating a local mirroring group


1. Enter system view.
system-view
2. Create a local mirroring group.
mirroring-group group-id local

Configuring mirroring sources


Restrictions and guidelines for mirroring source configuration
When you configure source ports for a local mirroring group, follow these restrictions and guidelines:
• A mirroring group can contain multiple source ports.
• A port can act as a source port for only one mirroring group.
• A source port cannot be configured as a reflector port, egress port, or monitor port.
Configuring source ports
• Configure source ports in system view:
a. Enter system view.
system-view
b. Configure source ports for a local mirroring group.
mirroring-group group-id mirroring-port interface-list { both |
inbound | outbound }
By default, no source port is configured for a local mirroring group.
• Configure source ports in interface view:
a. Enter system view.
system-view
b. Enter interface view.
interface interface-type interface-number
c. Configure the port as a source port for a local mirroring group.
mirroring-group group-id mirroring-port { both | inbound | outbound }
By default, a port does not act as a source port for any local mirroring groups.
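For example, the following commands show a minimal sketch of the interface-view method, assuming that local mirroring group 1 already exists and that GigabitEthernet 1/0/1 is the port whose bidirectional traffic you want to mirror:
<Device> system-view
[Device] interface gigabitethernet 1/0/1
[Device-GigabitEthernet1/0/1] mirroring-group 1 mirroring-port both
[Device-GigabitEthernet1/0/1] quit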

Configuring the monitor port


Restrictions and guidelines
Do not enable the spanning tree feature on the monitor port.
Only one monitor port can be specified for a local mirroring group.
Use a monitor port only for port mirroring, so the data monitoring device receives only the mirrored
traffic.

Procedure
• Configure the monitor port in system view:
a. Enter system view.
system-view
b. Configure the monitor port for a local mirroring group.
mirroring-group group-id monitor-port interface-type
interface-number
By default, no monitor port is configured for a local mirroring group.
• Configure the monitor port in interface view:
a. Enter system view.
system-view
b. Enter interface view.
interface interface-type interface-number
c. Configure the port as the monitor port for a mirroring group.
mirroring-group group-id monitor-port
By default, a port does not act as the monitor port for any local mirroring groups.
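For example, the following commands show a minimal sketch of the interface-view method, assuming that local mirroring group 1 already exists and that GigabitEthernet 1/0/3 connects to the data monitoring device:
<Device> system-view
[Device] interface gigabitethernet 1/0/3
[Device-GigabitEthernet1/0/3] mirroring-group 1 monitor-port
[Device-GigabitEthernet1/0/3] quit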

Configuring local port mirroring group with multiple monitoring devices
About local port mirroring with multiple monitoring devices
To monitor interesting traffic passing through a device on multiple directly connected data monitoring
devices, configure local port mirroring with a remote probe VLAN as follows:
1. Configure a remote source group on the device.
2. Configure mirroring sources and a reflector port for the remote source group.
3. Specify a VLAN as the remote probe VLAN and assign the ports connecting to the data
monitoring devices to the VLAN.
This configuration enables the device to copy packets received on the mirroring sources to the
reflector port, which broadcasts the packets in the remote probe VLAN. The packets are then sent
out of the member ports of the remote probe VLAN to the data monitoring devices.
Restrictions and guidelines
The reflector port must be a port not in use. Do not connect a network cable to the reflector port.
When a port is configured as a reflector port, the port restores to the factory default settings. You
cannot configure other features on the reflector port.
Do not assign a source port of a mirroring group to the remote probe VLAN of the mirroring group.
A VLAN can act as the remote probe VLAN for only one remote source group. As a best practice, use
the VLAN for port mirroring exclusively. Do not create a VLAN interface for the VLAN or configure
other features for the VLAN.
The remote probe VLAN must be a static VLAN.
To delete a VLAN that has been configured as the remote probe VLAN for a mirroring group, remove
the remote probe VLAN from the mirroring group first.
The device supports only one remote probe VLAN.

Procedure
1. Enter system view.
system-view
2. Create a remote source group.
mirroring-group group-id remote-source
3. Configure mirroring sources for the remote source group. Choose one option as needed:
 Configure mirroring ports in system view.
mirroring-group group-id mirroring-port interface-list { both |
inbound | outbound }
 Execute the following commands in sequence to enter interface view and then configure the
interface as a source port.
interface interface-type interface-number
mirroring-group group-id mirroring-port { both | inbound |
outbound }
quit
4. Configure the reflector port for the remote source group.
mirroring-group group-id reflector-port reflector-port
By default, no reflector port is configured for a remote source group.
5. Create a VLAN and enter its view.
vlan vlan-id
6. Assign the ports that connect to the data monitoring devices to the VLAN.
port interface-list
By default, a VLAN does not contain any ports.
7. Return to system view.
quit
8. Specify the VLAN as the remote probe VLAN for the remote source group.
mirroring-group group-id remote-probe vlan vlan-id
By default, no remote probe VLAN is configured for a remote source group.

Configuring Layer 2 remote port mirroring (RSPAN)
Restrictions and guidelines for Layer 2 remote port mirroring
configuration
To ensure successful traffic mirroring, configure devices in the order of the destination device, the
intermediate devices, and the source device.
If intermediate devices exist, configure the intermediate devices to allow the remote probe VLAN to
pass through.
For a mirrored packet to successfully arrive at the remote destination device, make sure its VLAN ID
is not removed or changed.
Do not configure both MVRP and Layer 2 remote port mirroring. Otherwise, MVRP might register the
remote probe VLAN with incorrect ports, which would cause the monitor port to receive undesired
copies. For more information about MVRP, see Layer 2—LAN Switching Configuration Guide.

To monitor the bidirectional traffic of a source port, disable MAC address learning for the remote
probe VLAN on the source, intermediate, and destination devices. For more information about MAC
address learning, see Layer 2—LAN Switching Configuration Guide.
The member port of a Layer 2 or Layer 3 aggregate interface cannot be configured as the monitor
port for Layer 2 remote port mirroring.

Layer 2 remote port mirroring with a fixed reflector port configuration tasks at a glance
Configuring the destination device
1. Creating a remote destination group
2. Configuring the monitor port
3. Configuring the remote probe VLAN
4. Assigning the monitor port to the remote probe VLAN
Configuring the source device
1. Creating a remote source group
2. Configuring mirroring sources
3. Configuring the remote probe VLAN

Layer 2 remote port mirroring with reflector port configuration task list
Configuring the destination device
1. Creating a remote destination group
2. Configuring the monitor port
3. Configuring the remote probe VLAN
4. Assigning the monitor port to the remote probe VLAN
Configuring the source device
1. Creating a remote source group
2. Configuring mirroring sources
3. Configuring the reflector port
4. Configuring the remote probe VLAN

Layer 2 remote port mirroring with egress port configuration task list
Configuring the destination device
1. Creating a remote destination group
2. Configuring the monitor port
3. Configuring the remote probe VLAN
4. Assigning the monitor port to the remote probe VLAN
Configuring the source device
1. Creating a remote source group

2. Configuring mirroring sources
3. Configuring the egress port
4. Configuring the remote probe VLAN

Creating a remote destination group


Restrictions and guidelines
Perform this task on the destination device only.
Procedure
1. Enter system view.
system-view
2. Create a remote destination group.
mirroring-group group-id remote-destination

Configuring the monitor port


Restrictions and guidelines for monitor port configuration
Perform this task on the destination device only.
Do not enable the spanning tree feature on the monitor port.
Use a monitor port only for port mirroring, so the data monitoring device receives only the mirrored
traffic.
A monitor port can belong to only one mirroring group.
Only one monitor port can be specified for a remote destination group.
A Layer 2 or Layer 3 aggregate interface cannot be configured as the monitor port for a mirroring
group.
Configuring the monitor port in system view
1. Enter system view.
system-view
2. Configure the monitor port for a remote destination group.
mirroring-group group-id monitor-port interface-type
interface-number
By default, no monitor port is configured for a remote destination group.
Configuring the monitor port in interface view
1. Enter system view.
system-view
2. Enter interface view.
interface interface-type interface-number
3. Configure the port as the monitor port for a remote destination group.
mirroring-group group-id monitor-port
By default, a port does not act as the monitor port for any remote destination groups.

Configuring the remote probe VLAN
Restrictions and guidelines
This task is required on the both the source and destination devices.
Only an existing static VLAN can be configured as a remote probe VLAN.
When a VLAN is configured as a remote probe VLAN, use the remote probe VLAN for port mirroring
exclusively.
Configure the same remote probe VLAN for the remote source group and the remote destination
group.
The device supports only one remote probe VLAN.
Procedure
1. Enter system view.
system-view
2. Configure the remote probe VLAN for the remote source or destination group.
mirroring-group group-id remote-probe vlan vlan-id
By default, no remote probe VLAN is configured for a remote source or destination group.

Assigning the monitor port to the remote probe VLAN


Restrictions and guidelines
Perform this task on the destination device only.
Procedure
1. Enter system view.
system-view
2. Enter the interface view of the monitor port.
interface interface-type interface-number
3. Assign the port to the remote probe VLAN.
 Assign an access port to the remote probe VLAN.
port access vlan vlan-id
 Assign a trunk port to the remote probe VLAN.
port trunk permit vlan vlan-id
 Assign a hybrid port to the remote probe VLAN.
port hybrid vlan vlan-id { tagged | untagged }
For more information about the port access vlan, port trunk permit vlan, and port
hybrid vlan commands, see Layer 2—LAN Switching Command Reference.

Creating a remote source group


Restrictions and guidelines
Perform this task on the source device only.
Procedure
1. Enter system view.
system-view

2. Create a remote source group.
mirroring-group group-id remote-source

Configuring mirroring sources


Restrictions and guidelines for mirroring source configuration
Perform this task on the source device only.
When you configure source ports for a remote source group, follow these restrictions and guidelines:
• Do not assign a source port of a mirroring group to the remote probe VLAN of the mirroring
group.
• A mirroring group can contain multiple source ports.
• A port can act as a source port for only one mirroring group.
• A source port cannot be configured as a reflector port, monitor port, or egress port.
• When a Layer 2 aggregate interface is configured as a source port, only overlay traffic of the
interface can be mirrored in a VXLAN network.
• When the member port of a Layer 2 aggregate interface is configured as a source port, VXLAN
traffic of the member port cannot be mirrored.
Configuring source ports
• Configure source ports in system view:
a. Enter system view.
system-view
b. Configure source ports for a remote source group.
mirroring-group group-id mirroring-port interface-list { both |
inbound | outbound }
By default, no source port is configured for a remote source group.
• Configure source ports in interface view:
a. Enter system view.
system-view
b. Enter interface view.
interface interface-type interface-number
c. Configure the port as a source port for a remote source group.
mirroring-group group-id mirroring-port { both | inbound | outbound }
By default, a port does not act as a source port for any remote source groups.

Configuring the reflector port


Restrictions and guidelines for reflector port configuration
Perform this task on the source device only.
The port to be configured as a reflector port must be a port not in use. Do not connect a network
cable to a reflector port.
When a port is configured as a reflector port, the default settings of the port are automatically
restored. You cannot configure other features on the reflector port.
If an IRF port is bound to only one physical interface, do not configure the physical interface as a
reflector port. Otherwise, the IRF might split.

A remote source group supports only one reflector port.
Configuring the reflector port in system view
1. Enter system view.
system-view
2. Configure the reflector port for a remote source group.
mirroring-group group-id reflector-port interface-type
interface-number
By default, no reflector port is configured for a remote source group.
Configuring the reflector port in interface view
1. Enter system view.
system-view
2. Enter interface view.
interface interface-type interface-number
3. Configure the port as the reflector port for a remote source group.
mirroring-group group-id reflector-port
By default, a port does not act as the reflector port for any remote source groups.
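For example, the following commands show a minimal sketch of the interface-view method, assuming that remote source group 1 already exists and that GigabitEthernet 1/0/5 is an unused port. The system might prompt for confirmation because the port restores to the factory default settings:
<Device> system-view
[Device] interface gigabitethernet 1/0/5
[Device-GigabitEthernet1/0/5] mirroring-group 1 reflector-port
[Device-GigabitEthernet1/0/5] quit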

Configuring the egress port


Restrictions and guidelines for egress port configuration
Perform this task on the source device only.
Disable the following features on the egress port:
• Spanning tree.
• 802.1X.
• IGMP snooping.
• Static ARP.
• MAC address learning.
A port of an existing mirroring group cannot be configured as an egress port.
A mirroring group supports only one egress port.
Configuring the egress port in system view
1. Enter system view.
system-view
2. Configure the egress port for a remote source group.
mirroring-group group-id monitor-egress interface-type
interface-number
By default, no egress port is configured for a remote source group.
3. Enter the egress port view.
interface interface-type interface-number
4. Assign the egress port to the remote probe VLAN.
 Assign a trunk port to the remote probe VLAN.
port trunk permit vlan vlan-id
 Assign a hybrid port to the remote probe VLAN.
port hybrid vlan vlan-id { tagged | untagged }

For more information about the port trunk permit vlan and port hybrid vlan
commands, see Layer 2—LAN Switching Command Reference.
Configuring the egress port in interface view
1. Enter system view.
system-view
2. Enter interface view.
interface interface-type interface-number
3. Configure the port as the egress port for a remote source group.
mirroring-group group-id monitor-egress
By default, a port does not act as the egress port for any remote source groups.
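For example, the following commands show a minimal sketch of the interface-view method, assuming that remote source group 1 and remote probe VLAN 2 already exist and that GigabitEthernet 1/0/2 is the trunk port that connects to the intermediate device:
<Device> system-view
[Device] interface gigabitethernet 1/0/2
[Device-GigabitEthernet1/0/2] mirroring-group 1 monitor-egress
[Device-GigabitEthernet1/0/2] port link-type trunk
[Device-GigabitEthernet1/0/2] port trunk permit vlan 2
[Device-GigabitEthernet1/0/2] quit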

Display and maintenance commands for port mirroring
Execute display commands in any view.

Task: Display mirroring group information.
Command: display mirroring-group { group-id | all | local | remote-destination | remote-source }

Port mirroring configuration examples


Example: Configuring local port mirroring
Network configuration
As shown in Figure 91, configure local port mirroring in source port mode to enable the server to
monitor the bidirectional traffic of the two departments.
Figure 91 Network diagram
(Network details: the Marketing Department connects to GigabitEthernet 1/0/1 and the Technical Department connects to GigabitEthernet 1/0/2 on the device. The server connects to GigabitEthernet 1/0/3. GigabitEthernet 1/0/1 and GigabitEthernet 1/0/2 are the source ports, and GigabitEthernet 1/0/3 is the monitor port.)

Procedure
# Create local mirroring group 1.
<Device> system-view
[Device] mirroring-group 1 local

# Configure GigabitEthernet 1/0/1 and GigabitEthernet 1/0/2 as source ports for local mirroring group
1.
[Device] mirroring-group 1 mirroring-port gigabitethernet 1/0/1 gigabitethernet 1/0/2
both

# Configure GigabitEthernet 1/0/3 as the monitor port for local mirroring group 1.
[Device] mirroring-group 1 monitor-port gigabitethernet 1/0/3

# Disable the spanning tree feature on the monitor port (GigabitEthernet 1/0/3).
[Device] interface gigabitethernet 1/0/3
[Device-GigabitEthernet1/0/3] undo stp enable
[Device-GigabitEthernet1/0/3] quit

Verifying the configuration


# Verify the mirroring group configuration.
[Device] display mirroring-group all
Mirroring group 1:
Type: Local
Status: Active
Mirroring port:
GigabitEthernet1/0/1 Both
GigabitEthernet1/0/2 Both
Monitor port: GigabitEthernet1/0/3

Example: Configuring local port mirroring with multiple monitoring devices
Network configuration
As shown in Figure 92, Dept. A, Dept. B, and Dept. C are connected to the device through
GigabitEthernet 1/0/1, GigabitEthernet 1/0/2, and GigabitEthernet 1/0/3, respectively.
Configure port mirroring to enable data monitoring devices Server A and Server B to monitor both the
incoming and outgoing traffic of departments A, B, and C.

Figure 92 Network diagram
(Network details: Dept A, Dept B, and Dept C connect to GigabitEthernet 1/0/1, GigabitEthernet 1/0/2, and GigabitEthernet 1/0/3 on the device. Server A connects to GigabitEthernet 1/0/4 and Server B connects to GigabitEthernet 1/0/5.)

Procedure
# Create remote source group 1.
<Device> system-view
[Device] mirroring-group 1 remote-source

# Configure GigabitEthernet 1/0/1 through GigabitEthernet 1/0/3 as source ports of remote source
group 1.
[Device] mirroring-group 1 mirroring-port GigabitEthernet1/0/1 to GigabitEthernet1/0/3
both

# Configure an unused port (GigabitEthernet 1/0/6 in this example) as the reflector port of remote
source group 1.
[Device] mirroring-group 1 reflector-port GigabitEthernet1/0/6
This operation may delete all settings made on the interface. Continue? [Y/N]:y

# Create VLAN 10 and assign the ports connecting the data monitoring devices to the VLAN.
[Device] vlan 10
[Device-vlan10] port GigabitEthernet1/0/4 to GigabitEthernet1/0/5
[Device-vlan10] quit

# Configure VLAN 10 as the remote probe VLAN of remote source group 1.


[Device] mirroring-group 1 remote-probe vlan 10

Example: Configuring Layer 2 remote port mirroring (RSPAN with reflector port)
Network configuration
As shown in Figure 93, configure Layer 2 remote port mirroring to enable the server to monitor the
bidirectional traffic of the Marketing Department.

Figure 93 Network diagram
(Network details: the Marketing Department connects to GigabitEthernet 1/0/1, the source port, on source device Device A. GigabitEthernet 1/0/3 on Device A is the reflector port. Device A connects to intermediate device Device B, and Device B connects to destination device Device C, over trunk links that carry VLAN 2. On Device C, the monitor port GigabitEthernet 1/0/2 connects to the server.)

Procedure
1. Configure Device C (the destination device):
# Configure GigabitEthernet 1/0/1 as a trunk port, and assign the port to VLAN 2.
<DeviceC> system-view
[DeviceC] interface gigabitethernet 1/0/1
[DeviceC-GigabitEthernet1/0/1] port link-type trunk
[DeviceC-GigabitEthernet1/0/1] port trunk permit vlan 2
[DeviceC-GigabitEthernet1/0/1] quit
# Create a remote destination group.
[DeviceC] mirroring-group 2 remote-destination
# Create VLAN 2.
[DeviceC] vlan 2
# Disable MAC address learning for VLAN 2.
[DeviceC-vlan2] undo mac-address mac-learning enable
[DeviceC-vlan2] quit
# Configure VLAN 2 as the remote probe VLAN for the mirroring group.
[DeviceC] mirroring-group 2 remote-probe vlan 2
# Configure GigabitEthernet 1/0/2 as the monitor port for the mirroring group.
[DeviceC] interface gigabitethernet 1/0/2
[DeviceC-GigabitEthernet1/0/2] mirroring-group 2 monitor-port
# Disable the spanning tree feature on GigabitEthernet 1/0/2.
[DeviceC-GigabitEthernet1/0/2] undo stp enable
# Assign GigabitEthernet 1/0/2 to VLAN 2.
[DeviceC-GigabitEthernet1/0/2] port access vlan 2
[DeviceC-GigabitEthernet1/0/2] quit
2. Configure Device B (the intermediate device):
# Create VLAN 2.
<DeviceB> system-view
[DeviceB] vlan 2
# Disable MAC address learning for VLAN 2.
[DeviceB-vlan2] undo mac-address mac-learning enable
[DeviceB-vlan2] quit
# Configure GigabitEthernet 1/0/1 as a trunk port, and assign the port to VLAN 2.

[DeviceB] interface gigabitethernet 1/0/1
[DeviceB-GigabitEthernet1/0/1] port link-type trunk
[DeviceB-GigabitEthernet1/0/1] port trunk permit vlan 2
[DeviceB-GigabitEthernet1/0/1] quit
# Configure GigabitEthernet 1/0/2 as a trunk port, and assign the port to VLAN 2.
[DeviceB] interface gigabitethernet 1/0/2
[DeviceB-GigabitEthernet1/0/2] port link-type trunk
[DeviceB-GigabitEthernet1/0/2] port trunk permit vlan 2
[DeviceB-GigabitEthernet1/0/2] quit
3. Configure Device A (the source device):
# Create a remote source group.
<DeviceA> system-view
[DeviceA] mirroring-group 1 remote-source
# Create VLAN 2.
[DeviceA] vlan 2
# Disable MAC address learning for VLAN 2.
[DeviceA-vlan2] undo mac-address mac-learning enable
[DeviceA-vlan2] quit
# Configure VLAN 2 as the remote probe VLAN for the mirroring group.
[DeviceA] mirroring-group 1 remote-probe vlan 2
# Configure GigabitEthernet 1/0/1 as a source port for the mirroring group.
[DeviceA] mirroring-group 1 mirroring-port gigabitethernet 1/0/1 both
# Configure GigabitEthernet 1/0/3 as the reflector port for the mirroring group.
[DeviceA] mirroring-group 1 reflector-port gigabitethernet 1/0/3
This operation may delete all settings made on the interface. Continue? [Y/N]: y
# Configure GigabitEthernet 1/0/2 as a trunk port, and assign the port to VLAN 2.
[DeviceA] interface gigabitethernet 1/0/2
[DeviceA-GigabitEthernet1/0/2] port link-type trunk
[DeviceA-GigabitEthernet1/0/2] port trunk permit vlan 2
[DeviceA-GigabitEthernet1/0/2] quit

Verifying the configuration


# Verify the mirroring group configuration on Device C.
[DeviceC] display mirroring-group all
Mirroring group 2:
Type: Remote destination
Status: Active
Monitor port: GigabitEthernet1/0/2
Remote probe VLAN: 2

# Verify the mirroring group configuration on Device A.


[DeviceA] display mirroring-group all
Mirroring group 1:
Type: Remote source
Status: Active
Mirroring port:
GigabitEthernet1/0/1 Both
Reflector port: GigabitEthernet1/0/3
Remote probe VLAN: 2

Example: Configuring Layer 2 remote port mirroring (RSPAN with egress port)
Network configuration
On the Layer 2 network shown in Figure 94, configure Layer 2 remote port mirroring to enable the
server to monitor the bidirectional traffic of the Marketing Department.
Figure 94 Network diagram
(Network details: the Marketing Department connects to GigabitEthernet 1/0/1, the source port, on source device Device A. GigabitEthernet 1/0/2 on Device A is the egress port. Device A connects to intermediate device Device B, and Device B connects to destination device Device C, over links that carry VLAN 2. On Device C, the monitor port GigabitEthernet 1/0/2 connects to the server.)

Procedure
1. Configure Device C (the destination device):
# Configure GigabitEthernet 1/0/1 as a trunk port, and assign the port to VLAN 2.
<DeviceC> system-view
[DeviceC] interface gigabitethernet 1/0/1
[DeviceC-GigabitEthernet1/0/1] port link-type trunk
[DeviceC-GigabitEthernet1/0/1] port trunk permit vlan 2
[DeviceC-GigabitEthernet1/0/1] quit
# Create a remote destination group.
[DeviceC] mirroring-group 2 remote-destination
# Create VLAN 2.
[DeviceC] vlan 2
# Disable MAC address learning for VLAN 2.
[DeviceC-vlan2] undo mac-address mac-learning enable
[DeviceC-vlan2] quit
# Configure VLAN 2 as the remote probe VLAN for the mirroring group.
[DeviceC] mirroring-group 2 remote-probe vlan 2
# Configure GigabitEthernet 1/0/2 as the monitor port for the mirroring group.
[DeviceC] interface gigabitethernet 1/0/2
[DeviceC-GigabitEthernet1/0/2] mirroring-group 2 monitor-port
# Disable the spanning tree feature on GigabitEthernet 1/0/2.
[DeviceC-GigabitEthernet1/0/2] undo stp enable
# Assign GigabitEthernet 1/0/2 to VLAN 2 as an access port.
[DeviceC-GigabitEthernet1/0/2] port access vlan 2
[DeviceC-GigabitEthernet1/0/2] quit
2. Configure Device B (the intermediate device):

# Create VLAN 2.
<DeviceB> system-view
[DeviceB] vlan 2
# Disable MAC address learning for VLAN 2.
[DeviceB-vlan2] undo mac-address mac-learning enable
[DeviceB-vlan2] quit
# Configure GigabitEthernet 1/0/1 as a trunk port, and assign the port to VLAN 2.
[DeviceB] interface gigabitethernet 1/0/1
[DeviceB-GigabitEthernet1/0/1] port link-type trunk
[DeviceB-GigabitEthernet1/0/1] port trunk permit vlan 2
[DeviceB-GigabitEthernet1/0/1] quit
# Configure GigabitEthernet 1/0/2 as a trunk port, and assign the port to VLAN 2.
[DeviceB] interface gigabitethernet 1/0/2
[DeviceB-GigabitEthernet1/0/2] port link-type trunk
[DeviceB-GigabitEthernet1/0/2] port trunk permit vlan 2
[DeviceB-GigabitEthernet1/0/2] quit
3. Configure Device A (the source device):
# Create a remote source group.
<DeviceA> system-view
[DeviceA] mirroring-group 1 remote-source
# Create VLAN 2.
[DeviceA] vlan 2
# Disable MAC address learning for VLAN 2.
[DeviceA-vlan2] undo mac-address mac-learning enable
[DeviceA-vlan2] quit
# Configure VLAN 2 as the remote probe VLAN of the mirroring group.
[DeviceA] mirroring-group 1 remote-probe vlan 2
# Configure GigabitEthernet 1/0/1 as a source port for the mirroring group.
[DeviceA] mirroring-group 1 mirroring-port gigabitethernet 1/0/1 both
# Configure GigabitEthernet 1/0/2 as the egress port for the mirroring group.
[DeviceA] mirroring-group 1 monitor-egress gigabitethernet 1/0/2
# Configure GigabitEthernet 1/0/2 as a trunk port, and assign the port to VLAN 2.
[DeviceA] interface gigabitethernet 1/0/2
[DeviceA-GigabitEthernet1/0/2] port link-type trunk
[DeviceA-GigabitEthernet1/0/2] port trunk permit vlan 2
# Disable the spanning tree feature on the port.
[DeviceA-GigabitEthernet1/0/2] undo stp enable
[DeviceA-GigabitEthernet1/0/2] quit

Verifying the configuration


# Verify the mirroring group configuration on Device C.
[DeviceC] display mirroring-group all
Mirroring group 2:
Type: Remote destination
Status: Active
Monitor port: GigabitEthernet1/0/2
Remote probe VLAN: 2

# Verify the mirroring group configuration on Device A.
[DeviceA] display mirroring-group all
Mirroring group 1:
Type: Remote source
Status: Active
Mirroring port:
GigabitEthernet1/0/1 Both
Monitor egress port: GigabitEthernet1/0/2
Remote probe VLAN: 2

Configuring flow mirroring
About flow mirroring
Flow mirroring copies packets matching a class to a destination for packet analyzing and monitoring.
It is implemented through QoS.
To implement flow mirroring through QoS, perform the following tasks:
• Define traffic classes and configure match criteria to classify packets to be mirrored. Flow
mirroring allows you to flexibly classify packets to be analyzed by defining match criteria.
• Configure traffic behaviors to mirror the matching packets to the specified destination.
You can configure an action to mirror the matching packets to one of the following destinations:
• Interface—The matching packets are copied to an interface and then forwarded to a data
monitoring device for analysis.
• CPU—The matching packets are copied to the CPU of an IRF member device. The CPU
analyzes the packets or delivers them to upper layers.
For more information about QoS policies, traffic classes, and traffic behaviors, see ACL and QoS
Configuration Guide.

Types of flow-mirroring traffic to an interface


Depending on whether the mirroring source and mirroring destination are on the same device,
flow-mirroring traffic to an interface includes the following types:
• Flow mirroring SPAN—Flow-mirrors traffic to a local interface.
• Flow mirroring RSPAN—Flow-mirrors traffic to an interface, and then forwards traffic to a
remote Layer 2 interface based on the VLAN of the mirrored traffic or through QoS traffic
redirecting.
• Flow mirroring ERSPAN—Encapsulates traffic in GRE packets with protocol number 0x88BE
and routes the traffic to a remote monitoring device at Layer 3.

Flow mirroring SPAN or RSPAN


For flow mirroring SPAN, configure a QoS policy on the source device. Configure the QoS policy as
follows:
1. Configure a traffic class to match packets.
2. Configure a traffic behavior to flow-mirror traffic to an interface.
3. Associate the traffic class with the traffic behavior.
When the device receives a matching packet, the device sends one copy of the packet to the
interface specified by the traffic behavior. The interface forwards the mirrored packet to the
monitoring device.

Figure 95 Flow mirroring SPAN
(Figure legend: packets from Host A and Host B enter the device. Matching packets, in this example packets from Host B, are copied to the flow mirroring destination interface, which connects to the monitoring device.)

To implement RSPAN, forward the mirrored packet to a remote Layer 2 interface based on the VLAN
of the mirrored packet or through QoS traffic redirecting.

Flow mirroring ERSPAN


On all devices from source to destination, configure a unicast routing protocol to ensure Layer 3
reachability between the devices.
As shown in Figure 96, flow mirroring ERSPAN in encapsulation parameter mode works as follows:
1. Configure a QoS policy on the source device as follows:
a. Configure a traffic class to match packets.
b. Configure a traffic behavior to flow-mirror traffic to an interface.
c. Associate the traffic class with the traffic behavior.
2. The source device copies a matching packet.
3. The source device encapsulates the packet with the specified ERSPAN encapsulation
parameters.
4. The source device forwards the packet in either of the following methods:
 Forwards the mirrored packets out of the specified outgoing interface.
 Looks up a route for the encapsulated mirrored packet based on the source IP address and
destination IP address of the encapsulated packet.
5. The encapsulated packet is routed to the monitoring device.
6. The monitoring device decapsulates the packet and analyzes the packet contents.
The packet sent to the monitoring device through flow mirroring in encapsulation parameter
mode is encapsulated. In this mode, make sure the monitoring device supports decapsulating
packets.

Figure 96 ERSPAN in encapsulation parameter mode
(Figure legend: packets from Host A and Host B enter the source device. Mirrored packets from Host B are encapsulated and routed across the IP network to the monitoring device.)

Restrictions and guidelines: Flow mirroring configuration
For information about the configuration commands except the mirror-to command, see ACL and
QoS Command Reference.

Flow mirroring tasks at a glance


To configure flow mirroring, perform the following tasks:
1. Configuring a traffic class
A traffic class defines the criteria that filters the traffic to be mirrored.
2. Configuring a traffic behavior
A traffic behavior specifies mirroring destinations.
3. Configuring a QoS policy
4. Applying a QoS policy
Choose one of the following tasks:
 Applying a QoS policy to an interface
 Applying a QoS policy to a VLAN
 Applying a QoS policy globally

Configuring a traffic class


1. Enter system view.
system-view
2. Create a class and enter class view.

traffic classifier classifier-name [ operator { and | or } ]
3. Configure match criteria.
if-match match-criteria
By default, no match criterion is configured in a traffic class.
4. (Optional.) Display traffic class information.
display traffic classifier
This command is available in any view.
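For example, the following commands show a minimal sketch, assuming that IPv4 ACL 3000 already exists and that the class name class1 is illustrative. The traffic class matches packets permitted by the ACL:
<Device> system-view
[Device] traffic classifier class1
[Device-classifier-class1] if-match acl 3000
[Device-classifier-class1] quit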

Configuring a traffic behavior


1. Enter system view.
system-view
2. Create a traffic behavior and enter traffic behavior view.
traffic behavior behavior-name
3. Configure mirroring destinations for the traffic behavior. Choose one option as needed:
 Mirror traffic to interfaces.
mirror-to interface interface-type interface-number
By default, no mirroring actions exist to mirror traffic to interfaces.
You can mirror traffic to only one interface in a traffic behavior. If you execute this command
for a traffic behavior multiple times, only the most recent configuration takes effect.
 Mirror traffic to the CPU.
mirror-to cpu
By default, no mirroring actions exist to mirror traffic to the CPU.
4. (Optional.) Display traffic behavior configuration.
display traffic behavior
This command is available in any view.
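For example, the following commands show a minimal sketch, assuming that the behavior name behavior1 is illustrative. The traffic behavior mirrors matching packets to the CPU:
<Device> system-view
[Device] traffic behavior behavior1
[Device-behavior-behavior1] mirror-to cpu
[Device-behavior-behavior1] quit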

Configuring a QoS policy


1. Enter system view.
system-view
2. Create a QoS policy and enter QoS policy view.
qos policy policy-name
3. Associate a class with a traffic behavior in the QoS policy.
classifier classifier-name behavior behavior-name
By default, no traffic behavior is associated with a class.
4. (Optional.) Display QoS policy configuration.
display qos policy
This command is available in any view.

Applying a QoS policy
Applying a QoS policy to an interface
Restrictions and guidelines
You can apply a QoS policy to an interface to mirror the traffic of the interface.
A policy can be applied to multiple interfaces.
A QoS policy cannot be applied in the outbound direction of an interface. In the inbound direction of
an interface, only one QoS policy can be applied.
The device does not support mirroring outbound traffic of an aggregate interface.
Procedure
1. Enter system view.
system-view
2. Enter interface view.
interface interface-type interface-number
3. Apply a policy to the interface.
qos apply policy policy-name inbound
4. (Optional.) Display the QoS policy applied to the interface.
display qos policy interface
This command is available in any view.

Applying a QoS policy to a VLAN


Restrictions and guidelines
You can apply a QoS policy to a VLAN to mirror the traffic on all ports in the VLAN.
A QoS policy cannot be applied in the outbound direction of a VLAN. In the inbound direction of a
VLAN, only one QoS policy can be applied.
Procedure
1. Enter system view.
system-view
2. Apply a QoS policy to a VLAN.
qos vlan-policy policy-name vlan vlan-id-list inbound
3. (Optional.) Display the QoS policy applied to the VLAN.
display qos vlan-policy
This command is available in any view.
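For example, the following commands show a minimal sketch, assuming that QoS policy tech_p already exists and that VLAN 100 is the VLAN whose incoming traffic you want to mirror:
<Device> system-view
[Device] qos vlan-policy tech_p vlan 100 inbound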

Applying a QoS policy globally


Restrictions and guidelines
You can apply a QoS policy globally to mirror the traffic on all ports.
A QoS policy cannot be applied in the outbound direction globally. In the inbound direction, only one
QoS policy can be applied globally.

Procedure
1. Enter system view.
system-view
2. Apply a QoS policy globally.
qos apply policy policy-name global inbound
3. (Optional.) Display global QoS policies.
display qos policy global
This command is available in any view.
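For example, the following commands show a minimal sketch, assuming that QoS policy tech_p already exists. They apply the policy to the incoming traffic on all ports:
<Device> system-view
[Device] qos apply policy tech_p global inbound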

Flow mirroring configuration examples


Example: Configuring flow mirroring
Network configuration
As shown in Figure 97, configure flow mirroring so that the server can monitor the following traffic:
• All traffic that the Technical Department sends to access the Internet.
• IP traffic that the Technical Department sends to the Marketing Department during working
hours (8:00 to 18:00) on weekdays.
Figure 97 Network diagram
(Network details: on the device, GigabitEthernet 1/0/1 connects to the Internet, GigabitEthernet 1/0/2 connects to the Marketing Department (192.168.1.0/24, Host A and Host B), GigabitEthernet 1/0/4 connects to the Technical Department (192.168.2.0/24, Host C and Host D), and GigabitEthernet 1/0/3 connects to the server.)

Procedure
# Create working hour range work, in which working hours are from 8:00 to 18:00 on weekdays.
<Device> system-view
[Device] time-range work 8:00 to 18:00 working-day

# Create IPv4 advanced ACL 3000 to match packets that the Technical Department sends to access the Internet, and IP packets that the Technical Department sends to the Marketing Department during working hours.
[Device] acl advanced 3000
[Device-acl-ipv4-adv-3000] rule permit tcp source 192.168.2.0 0.0.0.255 destination-port
eq www
[Device-acl-ipv4-adv-3000] rule permit ip source 192.168.2.0 0.0.0.255 destination
192.168.1.0 0.0.0.255 time-range work

[Device-acl-ipv4-adv-3000] quit

# Create traffic class tech_c, and configure the match criterion as ACL 3000.
[Device] traffic classifier tech_c
[Device-classifier-tech_c] if-match acl 3000
[Device-classifier-tech_c] quit

# Create traffic behavior tech_b, and configure the action of mirroring traffic to GigabitEthernet 1/0/3.
[Device] traffic behavior tech_b
[Device-behavior-tech_b] mirror-to interface gigabitethernet 1/0/3
[Device-behavior-tech_b] quit

# Create QoS policy tech_p, and associate traffic class tech_c with traffic behavior tech_b in the
QoS policy.
[Device] qos policy tech_p
[Device-qospolicy-tech_p] classifier tech_c behavior tech_b
[Device-qospolicy-tech_p] quit

# Apply QoS policy tech_p to the incoming packets of GigabitEthernet 1/0/4.


[Device] interface gigabitethernet 1/0/4
[Device-GigabitEthernet1/0/4] qos apply policy tech_p inbound
[Device-GigabitEthernet1/0/4] quit

Verifying the configuration


# Verify that the server can monitor the following traffic:
• All traffic sent by the Technical Department to access the Internet.
• IP traffic that the Technical Department sends to the Marketing Department during working
hours on weekdays.
(Details not shown.)

Configuring sFlow
About sFlow
sFlow is a traffic monitoring technology.
As shown in Figure 98, the sFlow system involves an sFlow agent embedded in a device and a
remote sFlow collector. The sFlow agent collects interface counter information and packet
information and encapsulates the sampled information in sFlow packets. When the sFlow packet
buffer is full, or the aging timer (fixed to 1 second) expires, the sFlow agent performs the following
actions:
• Encapsulates the sFlow packets in the UDP datagrams.
• Sends the UDP datagrams to the specified sFlow collector.
The sFlow collector analyzes the information and displays the results. One sFlow collector can
monitor multiple sFlow agents.
sFlow provides the following sampling mechanisms:
• Flow sampling—Obtains packet information.
• Counter sampling—Obtains interface counter information.
sFlow can use flow sampling and counter sampling at the same time.
Figure 98 sFlow system
(Figure legend: the sFlow agent on the device performs flow sampling and counter sampling, encapsulates the sampled information in sFlow datagrams with Ethernet, IP, and UDP headers, and sends the datagrams to the sFlow collector.)

Protocols and standards


• RFC 3176, InMon Corporation's sFlow: A Method for Monitoring Traffic in Switched and Routed
Networks
• sFlow.org, sFlow Version 5

Configuring basic sFlow information


Restrictions and guidelines
As a best practice, manually configure an IP address for the sFlow agent. The device periodically
checks whether the sFlow agent has an IP address. If the sFlow agent does not have an IP address,
the device automatically selects an IPv4 address for the sFlow agent but does not save the IPv4
address in the configuration file.
Only one IP address can be configured for the sFlow agent on the device, and a newly configured IP
address overwrites the existing one.

Procedure
1. Enter system view.
system-view
2. Configure an IP address for the sFlow agent.
sflow agent { ip ipv4-address | ipv6 ipv6-address }
By default, no IP address is configured for the sFlow agent.
3. Configure the sFlow collector information.
sflow collector collector-id [ vpn-instance vpn-instance-name ] { ip
ipv4-address | ipv6 ipv6-address } [ port port-number | datagram-size
size | time-out seconds | description string ] *
By default, no sFlow collector information is configured.
4. Specify the source IP address of sFlow packets.
sflow source { ip ipv4-address | ipv6 ipv6-address } *
By default, the source IP address is determined by routing.
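For example, the following commands show a minimal sketch in which the IP addresses are illustrative. They configure the sFlow agent address, sFlow collector 1, and the source IP address of sFlow packets:
<Device> system-view
[Device] sflow agent ip 3.3.3.1
[Device] sflow collector 1 ip 3.3.3.2 description netserver
[Device] sflow source ip 3.3.3.1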

Configuring flow sampling


About this task
Perform this task to configure flow sampling on an Ethernet interface. The sFlow agent performs the
following tasks:
1. Samples packets on that interface according to the configured parameters.
2. Encapsulates the packets into sFlow packets.
3. Encapsulates the sFlow packets in the UDP packets and sends the UDP packets to the
specified sFlow collector.
Procedure
1. Enter system view.
system-view
2. Enter Layer 2 Ethernet interface view or Layer 3 Ethernet interface view.
interface interface-type interface-number
3. (Optional.) Set the flow sampling mode.
sflow sampling-mode random
By default, random sampling is used.
4. Enable flow sampling and specify the number of packets out of which flow sampling samples a
packet on the interface.
sflow sampling-rate rate
By default, flow sampling is disabled.
n
As a best practice, set the sampling interval to 2 that is greater than or equal to 8192, for
example, 32768.
5. (Optional.) Set the maximum number of bytes (starting from the packet header) that flow
sampling can copy per packet.
sflow flow max-header length
The default setting is 128 bytes.
As a best practice, use the default setting.
6. Specify the sFlow instance and sFlow collector for flow sampling.
sflow flow [ instance instance-id ] collector collector-id

By default, no sFlow instance or sFlow collector is specified for flow sampling.

Configuring counter sampling


About this task
Perform this task to configure counter sampling on an Ethernet interface. The sFlow agent performs
the following tasks:
1. Periodically collects the counter information on that interface.
2. Encapsulates the counter information into sFlow packets.
3. Encapsulates the sFlow packets in the UDP packets and sends the UDP packets to the
specified sFlow collector.
Procedure
1. Enter system view.
system-view
2. Enter Layer 2 Ethernet interface view or Layer 3 Ethernet interface view.
interface interface-type interface-number
3. Enable counter sampling and set the counter sampling interval.
sflow counter interval interval
By default, counter sampling is disabled.
4. Specify the sFlow instance and sFlow collector for counter sampling.
sflow counter [ instance instance-id ] collector collector-id
By default, no sFlow instance or sFlow collector is specified for counter sampling.

Display and maintenance commands for sFlow


Execute display commands in any view.

Task Command
Display sFlow configuration. display sflow

sFlow configuration examples


Example: Configuring sFlow
Network configuration
As shown in Figure 99, perform the following tasks:
• Configure flow sampling in random mode and counter sampling on GigabitEthernet 1/0/1 of the
device to monitor traffic on the port.
• Configure the device to send sampled information in sFlow packets through GigabitEthernet
1/0/3 to the sFlow collector.

Figure 99 Network diagram
(Network details: Host A (1.1.1.2/16) connects to GigabitEthernet 1/0/1 of the device, which belongs to VLAN-interface 2 (1.1.1.1/16). The server (2.2.2.2/16) connects to GigabitEthernet 1/0/2, which belongs to VLAN-interface 3 (2.2.2.1/16). The sFlow collector (3.3.3.2/16) connects to GigabitEthernet 1/0/3, which belongs to VLAN-interface 4 (3.3.3.1/16).)
Procedure
1. Configure VLANs, assign Ethernet interfaces to VLANs, and configure IP addresses and
subnet masks for VLAN interfaces, as shown in Figure 99. (Details not shown.)
2. Configure the sFlow agent and configure information about the sFlow collector:
# Configure the IP address for the sFlow agent.
<Device> system-view
[Device] sflow agent ip 3.3.3.1
# Configure information about the sFlow collector. Specify the sFlow collector ID as 1, IP
address as 3.3.3.2, port number as 6343 (default), and description as netserver.
[Device] sflow collector 1 ip 3.3.3.2 description netserver
3. Configure counter sampling:
# Enable counter sampling and set the counter sampling interval to 120 seconds on
GigabitEthernet 1/0/1.
[Device] interface gigabitethernet 1/0/1
[Device-GigabitEthernet1/0/1] sflow counter interval 120
# Specify sFlow collector 1 for counter sampling.
[Device-GigabitEthernet1/0/1] sflow counter collector 1
4. Configure flow sampling:
# Enable flow sampling and set the flow sampling mode to random and sampling interval to
32767.
[Device-GigabitEthernet1/0/1] sflow sampling-mode random
[Device-GigabitEthernet1/0/1] sflow sampling-rate 32767
# Specify sFlow collector 1 for flow sampling.
[Device-GigabitEthernet1/0/1] sflow flow collector 1

Verifying the configuration


# Verify the following items:
• GigabitEthernet 1/0/1 enabled with sFlow is active.
• The counter sampling interval is 120 seconds.
• The flow sampling interval is 32767 (one packet is sampled from every 32767 packets).
[Device-GigabitEthernet1/0/1] display sflow
sFlow datagram version: 5
Global information:
Agent IP: 3.3.3.1(CLI)
Source address:
Collector information:

ID IP Port Aging Size VPN-instance Description
1 3.3.3.2 6343 N/A 1400 netserver
Port counter sampling information:
Interface Instance CID Interval(s)
GE1/0/1 1 1 120
Port flow sampling information:
Interface Instance FID MaxHLen Rate Mode Status
GE1/0/1 1 1 128 32767 Random Active

Troubleshooting sFlow
The remote sFlow collector cannot receive sFlow packets
Symptom
The remote sFlow collector cannot receive sFlow packets.
Analysis
The possible reasons include:
• The sFlow collector is not specified.
• sFlow is not configured on the interface.
• The IP address of the sFlow collector specified on the sFlow agent is different from that of the
remote sFlow collector.
• No IP address is configured for the Layer 3 interface that sends sFlow packets.
• An IP address is configured for the Layer 3 interface that sends sFlow packets. However, the
UDP datagrams with this source IP address cannot reach the sFlow collector.
• The physical link between the device and the sFlow collector fails.
• The sFlow collector is bound to a non-existent VPN.
• The length of an sFlow packet is less than the sum of the following two values:
 The length of the sFlow packet header.
 The number of bytes that flow sampling can copy per packet.
Solution
To resolve the problem:
1. Use the display sflow command to verify that sFlow is correctly configured.
2. Verify that a correct IP address is configured for the device to communicate with the sFlow
collector.
3. Verify that the physical link between the device and the sFlow collector is up.
4. Verify that the VPN bound to the sFlow collector already exists.
5. Verify that the length of an sFlow packet is greater than the sum of the following two values:
 The length of the sFlow packet header.
 The number of bytes (as a best practice, use the default setting) that flow sampling can copy
per packet.

Configuring the information center
About the information center
The information center on the device receives logs generated by source modules and outputs logs to
different destinations according to log output rules. Based on the logs, you can monitor device
performance and troubleshoot network problems.
Figure 100 Information center diagram
(Figure legend: logs generated by source modules pass through the information center and are sent to the configured output destinations.)

Log types
Logs are classified into the following types:
• Standard system logs—Record common system information. Unless otherwise specified, the
term "logs" in this document refers to standard system logs.
• Diagnostic logs—Record debugging messages.
• Security logs—Record security information, such as authentication and authorization
information.
• Hidden logs—Record log information not displayed on the terminal, such as input commands.
• Trace logs—Record system tracing and debugging messages, which can be viewed only after
the devkit package is installed.

Log levels
Logs are classified into eight severity levels from 0 through 7 in descending order. The information
center outputs logs with a severity level that is higher than or equal to the specified level. For
example, if you specify a severity level of 6 (informational), logs that have a severity level from 0 to 6
are output.
Table 19 Log levels
0 Emergency: The system is unusable. For example, the system authorization has expired.
1 Alert: Action must be taken immediately. For example, traffic on an interface exceeds the upper limit.
2 Critical: Critical condition. For example, the device temperature exceeds the upper limit, the power module fails, or the fan tray fails.
3 Error: Error condition. For example, the link state changes.
4 Warning: Warning condition. For example, an interface is disconnected, or the memory resources are used up.
5 Notification: Normal but significant condition. For example, a terminal logs in to the device, or the device reboots.
6 Informational: Informational message. For example, a command or a ping operation is executed.
7 Debugging: Debugging message.

Log destinations
The system outputs logs to the following destinations: console, monitor terminal, log buffer, log host,
and log file. Log output destinations are independent and you can configure them after enabling the
information center. One log can be sent to multiple destinations.

Default output rules for logs


A log output rule specifies the source modules and severity level of logs that can be output to a
destination. Logs matching the output rule are output to the destination. Table 20 shows the default
log output rules.
Table 20 Default output rules

Destination Log source modules Output switch Severity


Console All supported modules Enabled Debugging
Monitor terminal All supported modules Disabled Debugging
Log host All supported modules Enabled Informational
Log buffer All supported modules Enabled Informational
Log file All supported modules Enabled Informational

Default output rules for diagnostic logs


Diagnostic logs can only be output to the diagnostic log file, and cannot be filtered by source
modules and severity levels. Table 21 shows the default output rule for diagnostic logs.
Table 21 Default output rule for diagnostic logs

Destination Log source modules Output switch Severity


Diagnostic log file All supported modules Enabled Debugging

Default output rules for security logs


Security logs can only be output to the security log file, and cannot be filtered by source modules and
severity levels. Table 22 shows the default output rule for security logs.
Table 22 Default output rule for security logs

Destination Log source modules Output switch Severity


Security log file All supported modules Disabled Debugging

Default output rules for hidden logs
Hidden logs can be output to the log host, the log buffer, and the log file. Table 23 shows the default
output rules for hidden logs.
Table 23 Default output rules for hidden logs

Destination Log source modules Output switch Severity


Log host All supported modules Enabled Informational
Log buffer All supported modules Enabled Informational
Log file All supported modules Enabled Informational

Default output rules for trace logs


Trace logs can only be output to the trace log file, and cannot be filtered by source modules and
severity levels. Table 24 shows the default output rules for trace logs.
Table 24 Default output rules for trace logs

Destination Log source modules Output switch Severity


Trace log file All supported modules Enabled Debugging

Log formats and field descriptions


Log formats
The format of logs varies by output destinations. Table 25 shows the original format of log information,
which might be different from what you see. The actual format varies by the log resolution tool used.
Table 25 Log formats
Console, monitor terminal, log buffer, or log file:
Prefix Timestamp Sysname Module/Level/Mnemonic: Content
Example:
%Nov 24 14:21:43:502 2019 Sysname SHELL/5/SHELL_LOGIN: VTY logged in from 192.168.1.26
Log host (non-customized format):
<PRI>Timestamp Sysname %%vvModule/Level/Mnemonic: Location; Content
Example:
<190>Nov 24 16:22:21 2019 Sysname %%10SHELL/5/SHELL_LOGIN: -DevIP=1.1.1.1; VTY logged in from 192.168.1.26
Log host (CMCC format):
<PRI>Timestamp Sysname %vvModule/Level/Mnemonic: Location; Content
Example:
<189>Oct 9 14:59:04 2019 Sysname %10SHELL/5/SHELL_LOGIN: VTY logged in from 192.168.1.21
Log host (SGCC format):
<PRI>Timestamp Sysname Devtype Content
Example:
<189> 2019-03-18 15:39:45 Sysname FW 0 2 admin 10.1.1.1
Log host (Unicom format):
<PRI>Timestamp Hostip vvModule/Level/Serial_number: Content
Example:
<189>Oct 13 16:48:08 2019 10.1.1.1 10SHELL/5/210231a64jx073000020: VTY logged in from 192.168.1.21

Log field description


Table 26 Log field description
Prefix (information type): A log to a destination other than the log host has an identifier in front of the timestamp:
• An identifier of percent sign (%) indicates a log with a level equal to or higher than informational.
• An identifier of asterisk (*) indicates a debugging log or a trace log.
• An identifier of caret (^) indicates a diagnostic log.
PRI (priority): A log destined for the log host has a priority identifier in front of the timestamp. The priority is calculated by using this formula: facility*8+level, where:
• facility is the facility name. Facility names local0 through local7 correspond to values 16 through 23. The facility name can be configured using the info-center loghost command. It is used to identify log sources on the log host, and to query and filter the logs from specific log sources.
• level is in the range of 0 to 7. See Table 19 for more information about severity levels.
For example, a log from facility local7 (value 23) with severity level 5 (notification) has a priority of 23*8+5=189 and is prefixed with <189>.
Timestamp: Records the time when the log was generated. Logs sent to the log host and those sent to the other destinations have different timestamp precisions, and their timestamp formats are configured with different commands. For more information, see Table 27 and Table 28.
Hostip: Source IP address of the log. If the info-center loghost source command is configured, this field displays the IP address of the specified source interface. Otherwise, this field displays the sysname. This field exists only in logs that are sent to the log host in unicom format.
Serial number: Serial number of the device that generated the log. This field exists only in logs that are sent to the log host in unicom format.
Sysname (host name or host IP address): The sysname is the host name or IP address of the device that generated the log. You can use the sysname command to modify the name of the device.
%% (vendor ID): Indicates that the information was generated by an HPE device. This field exists only in logs sent to the log host.
vv (version information): Identifies the version of the log, and has a value of 10. This field exists only in logs that are sent to the log host.
Module: Specifies the name of the module that generated the log. You can enter the info-center source ? command in system view to view the module list.
Level: Identifies the level of the log. See Table 19 for more information about severity levels.
Mnemonic: Describes the content of the log. It contains a string of up to 32 characters.
Location: Optional field that identifies the log sender. This field exists only in logs that are sent to the log host in non-customized or CMCC format. The field contains the following information:
• DevIP—IP address of the log sender.
• Slot—Member ID of the IRF member device that sent the log.
Devtype: Device type. This field exists only in logs that are sent to the log host in SGCC format.
Content: Provides the content of the log.

Table 27 Timestamp precisions and configuration commands
• Destined for the log host: the precision is seconds (default) or milliseconds, and the timestamp format is set by using the info-center timestamp loghost command.
• Destined for the console, monitor terminal, log buffer, and log file: the precision is milliseconds, and the timestamp format is set by using the info-center timestamp command.

Table 28 Description of the timestamp parameters
boot: Time that has elapsed since system startup, in the format of xxx.yyy. xxx represents the higher 32 bits, and yyy represents the lower 32 bits, of milliseconds elapsed. Logs that are sent to all destinations other than a log host support this parameter.
Example:
%0.109391473 Sysname FTPD/5/FTPD_LOGIN: User ftp (192.168.1.23) has logged in successfully.
0.109391473 is a timestamp in the boot format.
date: Current date and time.
• For logs output to a log host, the timestamp can be in the format of MMM DD hh:mm:ss YYYY (accurate to seconds) or MMM DD hh:mm:ss.ms YYYY (accurate to milliseconds).
• For logs output to other destinations, the timestamp is in the format of MMM DD hh:mm:ss:ms YYYY.
All logs support this parameter.
Example:
%May 30 05:36:29:579 2019 Sysname FTPD/5/FTPD_LOGIN: User ftp (192.168.1.23) has logged in successfully.
May 30 05:36:29:579 2019 is a timestamp in the date format in logs sent to the console.
iso: Timestamp format stipulated in ISO 8601, accurate to seconds (default) or milliseconds. Only logs that are sent to a log host support this parameter.
Example:
<189>2019-05-30T06:42:44 Sysname %%10FTPD/5/FTPD_LOGIN: User ftp (192.168.1.23) has logged in successfully.
2019-05-30T06:42:44 is a timestamp in the iso format accurate to seconds. A timestamp accurate to milliseconds is like 2019-05-30T06:42:44.708.
none: No timestamp is included. All logs support this parameter.
Example:
% Sysname FTPD/5/FTPD_LOGIN: User ftp (192.168.1.23) has logged in successfully.
No timestamp is included.
no-year-date: Current date and time without year or millisecond information, in the format of MMM DD hh:mm:ss. Only logs that are sent to a log host support this parameter.
Example:
<189>May 30 06:44:22 Sysname %%10FTPD/5/FTPD_LOGIN: User ftp (192.168.1.23) has logged in successfully.
May 30 06:44:22 is a timestamp in the no-year-date format.

FIPS compliance
The device supports the FIPS mode that complies with NIST FIPS 140-2 requirements. Support for
features, commands, and parameters might differ in FIPS mode and non-FIPS mode. For more
information about FIPS mode, see Security Configuration Guide.

Information center tasks at a glance


Managing standard system logs
1. Enabling the information center
2. Outputting logs to various destinations
Choose the following tasks as needed:

 Outputting logs to the console
 Outputting logs to the monitor terminal
 Outputting logs to log hosts
 Outputting logs to the log buffer
 Saving logs to the log file
3. (Optional.) Setting the minimum storage period for logs
4. (Optional.) Enabling synchronous information output
5. (Optional.) Configuring log suppression
Choose the following tasks as needed:
 Enabling duplicate log suppression
 Configuring log suppression for a module
 Disabling an interface from generating link up or link down logs
6. (Optional.) Enabling SNMP notifications for system logs

Managing hidden logs


1. Enabling the information center
2. Outputting logs to various destinations
Choose the following tasks as needed:
 Outputting logs to log hosts
 Outputting logs to the log buffer
 Saving logs to the log file
3. (Optional.) Setting the minimum storage period for logs
4. (Optional.) Configuring log suppression
Choose the following tasks as needed:
 Enabling duplicate log suppression
 Configuring log suppression for a module

Managing security logs


1. Enabling the information center
2. (Optional.) Configuring log suppression
Choose the following tasks as needed:
 Enabling duplicate log suppression
 Configuring log suppression for a module
3. Managing security logs
 Saving security logs to the security log file
 Managing the security log file

Managing diagnostic logs


1. Enabling the information center
2. (Optional.) Configuring log suppression
Choose the following tasks as needed:
 Enabling duplicate log suppression

 Configuring log suppression for a module
3. Saving diagnostic logs to the diagnostic log file

Managing trace logs


1. Enabling the information center
2. (Optional.) Configuring log suppression
Choose the following tasks as needed:
 Enabling duplicate log suppression
 Configuring log suppression for a module
3. Setting the maximum size of the trace log file

Enabling the information center


About this task
The information center can output logs only after it is enabled.
Procedure
1. Enter system view.
system-view
2. Enable the information center.
info-center enable
The information center is enabled by default.

Outputting logs to various destinations


Outputting logs to the console
Restrictions and guidelines
The terminal monitor, terminal debugging, and terminal logging commands take
effect only for the current connection between the terminal and the device. If a new connection is
established, the default is restored.
Procedure
1. Enter system view.
system-view
2. (Optional.) Configure an output rule for sending logs to the console.
info-center source { module-name | default } console { deny | level
severity }
For information about the default output rules, see "Default output rules for logs."
3. (Optional.) Configure the timestamp format.
info-center timestamp { boot | date | none }
The default timestamp format is date.
4. Return to user view.
quit
5. Enable log output to the console.

terminal monitor
By default, log output to the console is enabled.
6. Enable output of debugging messages to the console.
terminal debugging
By default, output of debugging messages to the console is disabled.
This command enables output of debugging-level log messages to the console.
7. Set the lowest severity level of logs that can be output to the console.
terminal logging level severity
The default setting is 6 (informational).

Outputting logs to the monitor terminal


About this task
Monitor terminals refer to terminals that log in to the device through the AUX, VTY, or TTY line.
Restrictions and guidelines
The terminal monitor, terminal debugging, and terminal logging commands take
effect only for the current connection between the terminal and the device. If a new connection is
established, the default is restored.
Procedure
1. Enter system view.
system-view
2. (Optional.) Configure an output rule for sending logs to the monitor terminal.
info-center source { module-name | default } monitor { deny | level
severity }
For information about the default output rules, see "Default output rules for logs."
3. (Optional.) Configure the timestamp format.
info-center timestamp { boot | date | none }
The default timestamp format is date.
4. Return to user view.
quit
5. Enable log output to the monitor terminal.
terminal monitor
By default, log output to the monitor terminal is disabled.
6. Enable output of debugging messages to the monitor terminal.
terminal debugging
By default, output of debugging messages to the monitor terminal is disabled.
This command enables output of debugging-level log messages to the monitor terminal.
7. Set the lowest level of logs that can be output to the monitor terminal.
terminal logging level severity
The default setting is 6 (informational).
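For example (the severity value is chosen only for illustration), a user on a Telnet or SSH monitor terminal could run the following commands to display logs of severity notification (5) and higher for the current session:
<Device> terminal monitor
<Device> terminal logging level 5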

Outputting logs to log hosts
Restrictions and guidelines
The device supports the following methods (in descending order of priority) for outputting logs of a
module to designated log hosts:
• Fast log output.
For information about the modules that support fast log output and how to configure fast log
output, see "Configuring fast log output."
• Flow log.
For information about the modules that support flow log output and how to configure flow log
output, see "Configuring flow log."
• Information center.
If you configure multiple log output methods for a module, only the method with the highest priority
takes effect.
Procedure
1. Enter system view.
system-view
2. (Optional.) Configure a log output filter or a log output rule. Choose one option as needed:
 Configure a log output filter.
info-center filter filter-name { module-name | default } { deny |
level severity }
You can create multiple log output filters. When specifying a log host, you can apply a log
output filter to the log host to control log output.
 Configure a log output rule for the log host output destination.
info-center source { module-name | default } loghost { deny | level
severity }
For information about the default log output rules for the log host output destination, see
"Default output rules for logs."
The system chooses the settings to control log output to a log host in the following order:
a. Log output filter applied to the log host.
b. Log output rules configured for the log host output destination by using the info-center
source command.
c. Default log output rules (see "Default output rules for logs").
3. (Optional.) Specify a source IP address for logs sent to log hosts.
info-center loghost source interface-type interface-number
By default, the source IP address of logs sent to log hosts is the primary IP address of their
outgoing interfaces.
4. (Optional.) Specify the format in which logs are output to log hosts.
info-center format { cmcc | sgcc | unicom }
By default, logs are output to log hosts in non-customized format.
5. (Optional.) Configure the timestamp format.
info-center timestamp loghost { date [ with-milliseconds ] | iso
[ with-milliseconds | with-timezone ] * | no-year-date | none }
The default timestamp format is date.
6. Specify a log host and configure related parameters.

info-center loghost [ vpn-instance vpn-instance-name ] { hostname |
ipv4-address | ipv6 ipv6-address } [ port port-number ] [ dscp
dscp-value ] [ facility local-number ] [ filter filter-name ]
By default, no log hosts or related parameters are specified.
The value for the port-number argument must be the same as the value configured on the
log host. Otherwise, the log host cannot receive logs.
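For example (the log host address, port, and facility values are illustrative only and not taken from this guide), the following sketch sends only FTP logs of warning severity and higher to log host 192.168.100.1:
<Device> system-view
[Device] info-center source default loghost deny
[Device] info-center source ftp loghost level warning
[Device] info-center loghost 192.168.100.1 port 514 facility local6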

Outputting logs to the log buffer


1. Enter system view.
system-view
2. (Optional.) Configure an output rule for sending logs to the log buffer.
info-center source { module-name | default } logbuffer { deny | level
severity }
For information about the default output rules, see "Default output rules for logs."
3. (Optional.) Configure the timestamp format.
info-center timestamp { boot | date | none }
The default timestamp format is date.
4. Enable log output to the log buffer.
info-center logbuffer
By default, log output to the log buffer is enabled.
5. (Optional.) Set the maximum log buffer size.
info-center logbuffer size buffersize
By default, a maximum of 512 logs can be buffered.
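For example (the buffer size is arbitrary), the following sketch enlarges the log buffer to 1024 entries and then displays a summary of the buffered logs:
<Device> system-view
[Device] info-center logbuffer size 1024
[Device] quit
<Device> display logbuffer summary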

Saving logs to the log file


About this task
By default, the log file feature saves logs from the log file buffer to the log file at the specified saving
interval. You can also manually trigger an immediate saving of buffered logs to the log file. After
saving logs to the log file, the system clears the log file buffer.
The log file is automatically created when needed and has a maximum capacity. When no log file
space or storage device space is available, the system will replace the oldest logs with new logs.
You can enable log file overwrite-protection to stop the device from saving new logs when no log file
space or storage device space is available.

TIP:
Clean up the storage space of the device regularly to ensure sufficient storage space for the log file
feature.

Procedure
1. Enter system view.
system-view
2. (Optional.) Configure an output rule for sending logs to the log file.
info-center source { module-name | default } logfile { deny | level
severity }
For information about the default output rules, see "Default output rules for logs."

3. Enable the log file feature.
info-center logfile enable
By default, the log file feature is enabled.
4. (Optional.) Enable log file overwrite-protection.
info-center logfile overwrite-protection [ all-port-powerdown ]
By default, log file overwrite-protection is disabled.
Log file overwrite-protection is supported only in FIPS mode.
5. (Optional.) Set the maximum log file size.
info-center logfile size-quota size
The default maximum log file size is 10 MB.
6. (Optional.) Set the log file usage alarm threshold.
info-center logfile alarm-threshold usage
The default alarm threshold for log file usage ratio is 80%. When the usage ratio of the log file
reaches 80%, the system outputs a message to inform the user.
7. (Optional.) Specify the log file directory.
info-center logfile directory dir-name
The default log file directory is flash:/logfile.
The device uses the default log file directory after an IRF reboot or a master/subordinate
switchover.
8. Save logs in the log file buffer to the log file. Choose one option as needed:
 Configure the automatic log file saving interval.
info-center logfile frequency freq-sec
The default saving interval is 86400 seconds.
 Manually save logs in the log file buffer to the log file.
logfile save
This command is available in any view.
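For example (the values are illustrative only), the following sketch raises the log file size to 20 MB, saves buffered logs every 600 seconds, and then triggers an immediate save:
<Device> system-view
[Device] info-center logfile size-quota 20
[Device] info-center logfile frequency 600
[Device] quit
<Device> logfile save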

Setting the minimum storage period for logs


About this task
Use this feature to set the minimum storage period for logs in the log buffer and log file. This feature
ensures that logs will not be overwritten by new logs during a set period of time.
By default, when the log buffer or log file is full, new logs will automatically overwrite the oldest logs.
After the minimum storage period is set, the system identifies the storage period of a log to determine
whether to delete the log. The system current time minus a log's generation time is the log's storage
period.
• If the storage period of a log is shorter than or equal to the minimum storage period, the system
does not delete the log. The new log will not be saved.
• If the storage period of a log is longer than the minimum storage period, the system deletes the
log to save the new log.
Procedure
1. Enter system view.
system-view
2. Set the log minimum storage period.
info-center syslog min-age min-age

By default, the log minimum storage period is not set.
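A minimal sketch (the value 24 is arbitrary; see the command reference for the supported range and the unit of the min-age argument):
<Device> system-view
[Device] info-center syslog min-age 24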

Enabling synchronous information output


About this task
System log output interrupts ongoing configuration operations, obscuring previously entered
commands. Synchronous information output shows the obscured commands. It also provides a
command prompt in command editing mode, or a [Y/N] string in interaction mode so you can
continue your operation from where you were stopped.
Procedure
1. Enter system view.
system-view
2. Enable synchronous information output.
info-center synchronous
By default, synchronous information output is disabled.

Configuring log suppression


Enabling duplicate log suppression
About this task
Output of consecutive duplicate logs (logs that have the same module name, level, mnemonic,
location, and text) wastes system and network resources.
With duplicate log suppression enabled, the system starts a suppression period upon outputting a
log:
• If only duplicate logs are received during the suppression period, the information center does
not output the duplicate logs. When the suppression period expires, the information center
outputs the suppressed log and the number of times the log is suppressed.
• If a different log is received during the suppression period, the information center performs the
following operations:
 Outputs the suppressed log and the number of times the log is suppressed.
 Outputs the different log and starts a suppression period for that log.
• If no log is received within the suppression period, the information center does not output any
message when the suppression period expires.
Procedure
1. Enter system view.
system-view
2. Enable duplicate log suppression.
info-center logging suppress duplicates
By default, duplicate log suppression is disabled.

Configuring log suppression for a module
About this task
This feature suppresses output of logs. You can use this feature to filter out the logs that you are not
concerned with.
Perform this task to configure a log suppression rule to suppress output of all logs or logs with a
specific mnemonic value for a module.
Procedure
1. Enter system view.
system-view
2. Configure a log suppression rule for a module.
info-center logging suppress module module-name mnemonic { all |
mnemonic-value }
By default, the device does not suppress output of any logs from any modules.
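For example (the module name is illustrative only; use the info-center source ? command to list valid module names on your device), the following sketch suppresses output of all logs generated by the FTP module:
<Device> system-view
[Device] info-center logging suppress module ftp mnemonic all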

Disabling an interface from generating link up or link down logs
About this task
By default, an interface generates link up or link down log information when the interface state
changes. In some cases, you might want to disable certain interfaces from generating this
information. For example:
• You are concerned about the states of only some interfaces. In this case, you can use this
function to disable other interfaces from generating link up and link down log information.
• An interface is unstable and continuously outputs log information. In this case, you can disable
the interface from generating link up and link down log information.
Use the default setting in normal cases to avoid affecting interface status monitoring.
Procedure
1. Enter system view.
system-view
2. Enter interface view.
interface interface-type interface-number
3. Disable the interface from generating link up or link down logs.
undo enable log updown
By default, an interface generates link up and link down logs when the interface state changes.
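For example (the interface is chosen only for illustration), the following sketch disables link up and link down log generation on GigabitEthernet 1/0/2:
<Device> system-view
[Device] interface gigabitethernet 1/0/2
[Device-GigabitEthernet1/0/2] undo enable log updown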

Enabling SNMP notifications for system logs


About this task
This feature enables the device to send an SNMP notification for each log message it outputs. The
device encapsulates the logs in SNMP notifications and then sends them to the SNMP module and
the log trap buffer.
You can configure the SNMP module to send received SNMP notifications in SNMP traps or informs
to remote hosts. For more information, see "Configuring SNMP."

To view the traps in the log trap buffer, access the MIB corresponding to the log trap buffer.
Procedure
1. Enter system view.
system-view
2. Enable SNMP notifications for system logs.
snmp-agent trap enable syslog
By default, the device does not send SNMP notifications for system logs.
3. Set the maximum number of traps that can be stored in the log trap buffer.
info-center syslog trap buffersize buffersize
By default, the log trap buffer can store a maximum of 1024 traps.
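For example (the buffer size is arbitrary), the following sketch enables SNMP notifications for system logs and limits the log trap buffer to 512 traps:
<Device> system-view
[Device] snmp-agent trap enable syslog
[Device] info-center syslog trap buffersize 512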

Managing security logs


Saving security logs to the security log file
About this task
Security logs are very important for locating and troubleshooting network problems. Generally,
security logs are output together with other logs. It is difficult to identify security logs among all logs.
To solve this problem, you can save security logs to the security log file without affecting the current
log output rules.
After you enable the security log file feature, the system processes security logs as follows:
1. Outputs security logs to the security log file buffer.
2. Saves logs from the security log file buffer to the security log file at the specified interval.
If you have the security-audit role, you can also manually save security logs to the security log
file.
3. Clears the security log file buffer immediately after the security logs are saved to the security log
file.
Restrictions and guidelines
The device supports only one security log file. The system will overwrite old logs with new logs when
the security log file is full. To avoid security log loss, you can set an alarm threshold for the security
log file usage ratio. When the alarm threshold is reached, the system outputs a message to inform
you of the alarm. You can log in to the device with the security-audit user role and back up the
security log file to prevent the loss of important data.
Procedure
1. Enter system view.
system-view
2. Enable the security log file feature.
info-center security-logfile enable
By default, the security log file feature is disabled.
3. Set the interval at which the system saves security logs.
info-center security-logfile frequency freq-sec
The default security log file saving interval is 86400 seconds.
4. (Optional.) Set the maximum size for the security log file.
info-center security-logfile size-quota size

By default, the maximum security log file size is 10 MB.
5. (Optional.) Set the alarm threshold of the security log file usage.
info-center security-logfile alarm-threshold usage
By default, the alarm threshold of the security log file usage ratio is 80%. When the usage of the
security log file reaches 80%, the system will send a message.
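For example (the interval and threshold values are illustrative only), the following sketch enables the security log file feature, saves buffered security logs every 3600 seconds, and raises an alarm when the file is 90% full:
<Device> system-view
[Device] info-center security-logfile enable
[Device] info-center security-logfile frequency 3600
[Device] info-center security-logfile alarm-threshold 90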

Managing the security log file


Restrictions and guidelines
To use the security log file management commands, you must have the security-audit user role. For
information about configuring the security-audit user role, see AAA in Security Configuration Guide.
Procedure
1. Enter system view.
system-view
2. Change the directory of the security log file.
info-center security-logfile directory dir-name
By default, the security log file is saved in the seclog directory in the root directory of the
storage device.
The device uses the default security log file directory after an IRF reboot or a
master/subordinate switchover.
3. Manually save all logs in the security log file buffer to the security log file.
security-logfile save
This command is available in any view.
4. (Optional.) Display the summary of the security log file.
display security-logfile summary
This command is available in any view.

Saving diagnostic logs to the diagnostic log file


About this task
By default, the diagnostic log file feature saves diagnostic logs from the diagnostic log file buffer to
the diagnostic log file at the specified saving interval. You can also manually trigger an immediate
saving of diagnostic logs to the diagnostic log file. After saving diagnostic logs to the diagnostic log
file, the system clears the diagnostic log file buffer.
The device supports only one diagnostic log file. The diagnostic log file has a maximum capacity.
When the capacity is reached, the system replaces the oldest diagnostic logs with new logs.
Procedure
1. Enter system view.
system-view
2. Enable the diagnostic log file feature.
info-center diagnostic-logfile enable
By default, the diagnostic log file feature is enabled.
3. (Optional.) Set the maximum diagnostic log file size.
info-center diagnostic-logfile quota size
By default, the maximum diagnostic log file size is 10 MB.

4. (Optional.) Specify the diagnostic log file directory.
info-center diagnostic-logfile directory dir-name
The default diagnostic log file directory is flash:/diagfile.
The device uses the default diagnostic log file directory after an IRF reboot or a
master/subordinate switchover.
5. Save diagnostic logs in the diagnostic log file buffer to the diagnostic log file. Choose one option
as needed:
 Configure the automatic diagnostic log file saving interval.
info-center diagnostic-logfile frequency freq-sec
The default saving interval is 86400 seconds.
 Manually save diagnostic logs to the diagnostic log file.
diagnostic-logfile save
This command is available in any view.
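For example (the size is arbitrary), the following sketch caps the diagnostic log file at 20 MB and then saves buffered diagnostic logs immediately:
<Device> system-view
[Device] info-center diagnostic-logfile quota 20
[Device] quit
<Device> diagnostic-logfile save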

Setting the maximum size of the trace log file


About this task
The device has only one trace log file. When the trace log file is full, the device overwrites the oldest
trace logs with new ones.
Procedure
1. Enter system view.
system-view
2. Set the maximum size for the trace log file.
info-center trace-logfile quota size
The default maximum size of the trace log file is 1 MB.

Display and maintenance commands for information center
Execute display commands in any view and reset commands in user view.

• Display the diagnostic log file configuration: display diagnostic-logfile summary
• Display the information center configuration: display info-center
• Display information about log output filters: display info-center filter [ filter-name ]
• Display log buffer information and buffered logs: display logbuffer [ reverse ] [ level severity | size buffersize | slot slot-number ] * [ last-mins mins ]
• Display the log buffer summary: display logbuffer summary [ level severity | slot slot-number ] *
• Display the content of the log file buffer: display logfile buffer
• Display the log file configuration: display logfile summary
• Display the content of the security log file buffer (to execute this command, you must have the security-audit user role): display security-logfile buffer
• Display summary information of the security log file: display security-logfile summary
• Clear the log buffer: reset logbuffer

Information center configuration examples


Example: Outputting logs to the console
Network configuration
Configure the device to output to the console FTP logs that have a minimum severity level of
warning.
Figure 101 Network diagram (a PC connects to the console port of the device)

Procedure
# Enable the information center.
<Device> system-view
[Device] info-center enable

# Disable log output to the console.


[Device] info-center source default console deny

To avoid output of unnecessary information, disable all modules from outputting log information to
the specified destination (console in this example) before you configure the output rule.
# Configure an output rule to output to the console FTP logs that have a minimum severity level of
warning.
[Device] info-center source ftp console level warning
[Device] quit

# Enable log output to the console.


<Device> terminal logging level 6
<Device> terminal monitor
The current terminal is enabled to display logs.

Now, if the FTP module generates logs, the information center automatically sends the logs to the
console, and the console displays the logs.

Example: Outputting logs to a UNIX log host
Network configuration
Configure the device to output to the UNIX log host FTP logs that have a minimum severity level of
informational.
Figure 102 Network diagram (the device at 1.1.0.1/16 reaches the log host at 1.2.0.1/16 across the Internet)

Procedure
1. Make sure the device and the log host can reach each other. (Details not shown.)
2. Configure the device:
# Enable the information center.
<Device> system-view
[Device] info-center enable
# Specify log host 1.2.0.1/16 with local4 as the logging facility.
[Device] info-center loghost 1.2.0.1 facility local4
# Disable log output to the log host.
[Device] info-center source default loghost deny
To avoid output of unnecessary information, disable all modules from outputting logs to the
specified destination (loghost in this example) before you configure an output rule.
# Configure an output rule to output to the log host FTP logs that have a minimum severity level
of informational.
[Device] info-center source ftp loghost level informational
3. Configure the log host:
The log host configuration procedure varies by the vendor of the UNIX operating system. The
following shows an example:
a. Log in to the log host as a root user.
b. Create a subdirectory named Device in directory /var/log/, and then create file info.log in
the Device directory to save logs from the device.
# mkdir /var/log/Device
# touch /var/log/Device/info.log
c. Edit file syslog.conf in directory /etc/ and add the following contents:
# Device configuration messages
local4.info /var/log/Device/info.log
In this configuration, local4 is the name of the logging facility that the log host uses to
receive logs. The value info indicates the informational severity level. The UNIX system
records the log information that has a minimum severity level of informational to file
/var/log/Device/info.log.

NOTE:
Follow these guidelines while editing file /etc/syslog.conf:
• Comments must be on a separate line and must begin with a pound sign (#).
• No redundant spaces are allowed after the file name.
• The logging facility name and the severity level specified in the /etc/syslog.conf file must
be identical to those configured on the device by using the info-center loghost and
info-center source commands. Otherwise, the log information might not be output
to the log host correctly.

d. Display the process ID of syslogd, kill the syslogd process, and then restart syslogd by
using the –r option to validate the configuration.
# ps -ae | grep syslogd
147
# kill -HUP 147
# syslogd -r &

Now, the device can output FTP logs to the log host, which stores the logs to the specified file.

Example: Outputting logs to a Linux log host


Network configuration
Configure the device to output to the Linux log host 1.2.0.1/16 FTP logs that have a minimum
severity level of informational.
Figure 103 Network diagram (the device at 1.1.0.1/16 reaches the log host at 1.2.0.1/16 across the Internet)

Procedure
1. Make sure the device and the log host can reach each other. (Details not shown.)
2. Configure the device:
# Enable the information center.
<Device> system-view
[Device] info-center enable
# Specify log host 1.2.0.1/16 with local5 as the logging facility.
[Device] info-center loghost 1.2.0.1 facility local5
# Disable log output to the log host.
[Device] info-center source default loghost deny
To avoid outputting unnecessary information, disable all modules from outputting log
information to the specified destination (loghost in this example) before you configure an
output rule.
# Configure an output rule to enable output to the log host FTP logs that have a minimum
severity level of informational.
[Device] info-center source ftp loghost level informational
3. Configure the log host:
The log host configuration procedure varies by the vendor of the Linux operating system. The
following shows an example:

a. Log in to the log host as a root user.
b. Create a subdirectory named Device in directory /var/log/, and create file info.log in the
Device directory to save logs from the device.
# mkdir /var/log/Device
# touch /var/log/Device/info.log
c. Edit file syslog.conf in directory /etc/ and add the following contents:
# Device configuration messages
local5.info /var/log/Device/info.log
In this configuration, local5 is the name of the logging facility that the log host uses to
receive logs. The value info indicates the informational severity level. The Linux system
will store the log information with a severity level equal to or higher than informational to
file /var/log/Device/info.log.

NOTE:
Follow these guidelines while editing file /etc/syslog.conf:
• Comments must be on a separate line and must begin with a pound sign (#).
• No redundant spaces are allowed after the file name.
• The logging facility name and the severity level specified in the /etc/syslog.conf file must
be identical to those configured on the device by using the info-center loghost and
info-center source commands. Otherwise, the log information might not be output
to the log host correctly.

d. Display the process ID of syslogd, kill the syslogd process, and then restart syslogd by
using the -r option to validate the configuration.
Make sure the syslogd process is started with the -r option on the Linux log host.
# ps -ae | grep syslogd
147
# kill -9 147
# syslogd -r &

Now, the device can output FTP logs to the log host, which stores the logs to the specified file.

Configuring the packet capture
About packet capture
The packet capture feature captures incoming packets on the device. The feature displays the
captured packets on the terminal in real time, and allows you to save the captured packets to a .pcap
file for future analysis. Packet capture can read both .pcap and .pcapng files.

Building a capture filter rule


Capture filter rule keywords
Qualifiers
Table 29 Qualifiers for capture filter rules

Protocol: Matches a protocol. If you do not specify a protocol qualifier, the filter matches any supported protocols. Examples:
• arp—Matches ARP.
• icmp—Matches ICMP.
• ip—Matches IPv4.
• ip6—Matches IPv6.
• tcp—Matches TCP.
• udp—Matches UDP.
Direction: Matches packets based on the source or destination location (an IP address or port number). If you do not specify a direction qualifier, the src or dst qualifier applies. For example, port 23 is equivalent to src or dst port 23. Examples:
• src—Matches the source IP address field.
• dst—Matches the destination IP address field.
• src or dst—Matches the source or destination IP address field.
Type: Specifies the direction type. The host qualifier applies if you do not specify any type qualifier. For example, src 2.2.2.2 is equivalent to src host 2.2.2.2. Examples:
• host—Matches the IP address of a host.
• net—Matches an IP subnet.
• port—Matches a service port number.
• portrange—Matches a service port range.
Others: Any qualifiers other than the previously described qualifiers. Examples:
• broadcast—Matches broadcast packets.
• multicast—Matches multicast and broadcast packets.
• less—Matches packets that are less than or equal to a specific size.
• greater—Matches packets that are greater than or equal to a specific size.
• len—Matches the packet length.
• vlan—Matches VLAN packets.
Variables
A capture filter variable must be modified by one or more qualifiers.

The broadcast, multicast, and all protocol qualifiers cannot modify variables. The other qualifiers
must be followed by variables.
Table 30 Variable types for capture filter rules

Integer: Represented in binary, octal, decimal, or hexadecimal notation. Example: the port 23 expression matches traffic sent to or from port number 23.
Integer range: Represented by hyphenated integers. Example: the portrange 100-200 expression matches traffic sent to or from any ports in the range of 100 to 200.
IPv4 address: Represented in dotted decimal notation. Example: the src 1.1.1.1 expression matches traffic sent from the IPv4 host at 1.1.1.1.
IPv6 address: Represented in colon hexadecimal notation. Example: the dst host 1::1 expression matches traffic sent to the IPv6 host at 1::1.
IPv4 subnet: Represented by an IPv4 network ID or an IPv4 address with a mask. Example: both of the following expressions match traffic sent to or from the IPv4 subnet 1.1.1.0/24:
• src 1.1.1.
• src net 1.1.1.0/24.
IPv6 network segment: Represented by an IPv6 address with a prefix length. Example: the dst net 1::/64 expression matches traffic sent to the IPv6 network 1::/64.

Capture filter rule operators


Logical operators
Logical operators are left associative. They group from left to right. The not operator has the highest
priority. The and and or operators have the same priority.
Table 31 Logical operators for capture filter rules

! (alphanumeric symbol: not): Reverses the result of a condition. Use this operator to capture traffic that matches the opposite value of a condition. For example, to capture non-HTTP traffic, use not port 80.
&& (alphanumeric symbol: and): Joins two conditions. Use this operator to capture traffic that matches both conditions. For example, to capture non-HTTP traffic that is sent to or from 1.1.1.1, use host 1.1.1.1 and not port 80.
|| (alphanumeric symbol: or): Joins two conditions. Use this operator to capture traffic that matches either of the conditions. For example, to capture traffic that is sent to or from 1.1.1.1 or 2.2.2.2, use host 1.1.1.1 or host 2.2.2.2.

Arithmetic operators
Table 32 Arithmetic operators for capture filter rules

+: Adds two values.
-: Subtracts one value from another.
*: Multiplies one value by another.
/: Divides one value by another.
&: Returns the result of the bitwise AND operation on two integral values in binary form.
|: Returns the result of the bitwise OR operation on two integral values in binary form.
<<: Performs the bitwise left shift operation on the operand to the left of the operator. The right-hand operand specifies the number of bits to shift.
>>: Performs the bitwise right shift operation on the operand to the left of the operator. The right-hand operand specifies the number of bits to shift.
[]: Specifies a byte offset relative to a protocol layer. This offset indicates the byte where the matching begins. You must enclose the offset value in the brackets and specify a protocol qualifier. For example, ip[6] matches the seventh byte of payload in IPv4 packets (the byte that is six bytes away from the beginning of the IPv4 payload).

Relational operators
Table 33 Relational operators for capture filter rules

=: Equal to. For example, ip[6]=0x1c matches an IPv4 packet if its seventh byte of payload is equal to 0x1c.
!=: Not equal to. For example, len!=60 matches a packet if its length is not equal to 60 bytes.
>: Greater than. For example, len>100 matches a packet if its length is greater than 100 bytes.
<: Less than. For example, len<100 matches a packet if its length is less than 100 bytes.
>=: Greater than or equal to. For example, len>=100 matches a packet if its length is greater than or equal to 100 bytes.
<=: Less than or equal to. For example, len<=100 matches a packet if its length is less than or equal to 100 bytes.

Capture filter rule expressions
Logical expression
Use this type of expression to capture packets that match the result of logical operations.
Logical expressions contain keywords and logical operators. For example:
• not port 23 and not port 22—Captures packets with a port number that is not 23 or 22.
• port 23 or icmp—Captures packets with a port number 23 or ICMP packets.
In a logical expression, a qualifier can modify more than one variable connected by its nearest logical
operator. For example, to capture packets sourced from IPv4 address 192.168.56.1 or IPv4 network
192.168.27, use either of the following expressions:
• src 192.168.56.1 or 192.168.27.
• src 192.168.56.1 or src 192.168.27.
The expr relop expr expression
Use this type of expression to capture packets that match the result of arithmetic operations.
This expression contains keywords, arithmetic operators (expr), and relational operators (relop). For
example, len+100>=200 captures packets that are greater than or equal to 100 bytes.
The proto [ expr:size ] expression
Use this type of expression to capture packets that match the result of arithmetic operations on a
number of bytes relative to a protocol layer.
This type of expression contains the following elements:
• proto—Specifies a protocol layer.
• []—Performs arithmetic operations on a number of bytes relative to the protocol layer.
• expr—Specifies the arithmetic expression.
• size—Specifies the byte offset. This offset indicates the number of bytes relative to the
protocol layer. The operation is performed on the specified bytes. The offset is set to 1 byte if
you do not specify an offset.
For example, ip[0]&0xf !=5 captures an IP packet if the result of ANDing the first byte with 0x0f is
not 5.
To match a field, you can specify a field name for expr:size. For example,
icmp[icmptype]=0x08 captures ICMP packets that contain a value of 0x08 in the Type field.
The vlan vlan_id expression
Use this type of expression to capture 802.1Q tagged VLAN traffic.
This type of expression contains the vlan vlan_id keywords and logical operators. The vlan_id
variable is an integer that specifies a VLAN ID. For example, vlan 1 and ip captures IPv4 packets in
VLAN 1.
To capture packets of a VLAN, set a capture filter as follows:
• To capture tagged packets that are permitted on the interface, you must use the vlan
vlan_id expression prior to any other expressions. For example, use the vlan 3 and src
192.168.1.10 and dst 192.168.1.1 expression to capture packets of VLAN 3 that are sent from
192.168.1.10 to 192.168.1.1.
• After receiving an untagged packet, the device adds a VLAN tag to the packet header. To
capture the packet, add "vlan xx" to the capture filter expression. For Layer 3 packets, the xx
represents the default VLAN ID of the outgoing interface. For Layer 2 packets, the xx
represents the default VLAN ID of the incoming interface.

Building a display filter rule
A display filter rule only identifies the packets to display. It does not affect which packets to save in a
file.

Display filter rule keywords


Qualifiers
Table 34 Qualifiers for display filter rules

Protocol: Matches a protocol. If you do not specify a protocol qualifier, the filter matches any supported protocols. Examples:
• eth—Matches Ethernet.
• ftp—Matches FTP.
• http—Matches HTTP.
• icmp—Matches ICMP.
• ip—Matches IPv4.
• ipv6—Matches IPv6.
• tcp—Matches TCP.
• telnet—Matches Telnet.
• udp—Matches UDP.
Packet field: Matches a field in packets by using a dotted string in the protocol.field[.level1-subfield]…[.leveln-subfield] format. Examples:
• tcp.flags.syn—Matches the SYN bit in the flags field of TCP.
• tcp.port—Matches the source or destination port field of TCP.

Variables
A packet field qualifier requires a variable.
Table 35 Variable types for display filter rules

Integer: Represented in binary, octal, decimal, or hexadecimal notation. For example, to display IP packets that are less than or equal to 1500 bytes, use one of the following expressions:
• ip.len le 1500.
• ip.len le 02734.
• ip.len le 0x436.
Boolean: This variable type has two values: true or false. This variable type applies if you use a packet field string alone to identify the presence of a field in a packet.
• If the field is present, the match result is true. The filter displays the packet.
• If the field is not present, the match result is false. The filter does not display the packet.
For example, to display TCP packets that contain the SYN field, use tcp.flags.syn.
MAC address (6 bytes): Uses colons (:), dots (.), or hyphens (-) to break up the MAC address into two or four segments. For example, to display packets that contain a destination MAC address of ffff.ffff.ffff, use one of the following expressions:
• eth.dst==ff:ff:ff:ff:ff:ff.
• eth.dst==ff-ff-ff-ff-ff-ff.
• eth.dst==ffff.ffff.ffff.
IPv4 address: Represented in dotted decimal notation. For example:
• To display IPv4 packets that are sent to or from 192.168.0.1, use ip.addr==192.168.0.1.
• To display IPv4 packets that are sent to or from 129.111.0.0/16, use ip.addr==129.111.0.0/16.
IPv6 address: Represented in colon hexadecimal notation. For example:
• To display IPv6 packets that are sent to or from 1::1, use ipv6.addr==1::1.
• To display IPv6 packets that are sent to or from 1::/64, use ipv6.addr==1::/64.
String: Character string. For example, to display HTTP packets that contain the string HTTP/1.1 for the request version field, use http.request.version=="HTTP/1.1".

Display filter rule operators


Logical operators are left associative. They group from left to right. The [ ] operator has the highest
priority. The not operator has a higher priority than the and and or operators. The and and or operators have the same priority.
Logical operators
Table 36 Logical operators for display filter rules

[] (no alphanumeric symbol is available): Used with protocol qualifiers. For more information, see "The proto[…] expression."
! (alphanumeric symbol: not): Displays packets that do not match the condition connected to this operator.
&& (alphanumeric symbol: and): Joins two conditions. Use this operator to display traffic that matches both conditions.
|| (alphanumeric symbol: or): Joins two conditions. Use this operator to display traffic that matches either of the conditions.

Relational operators
Table 37 Relational operators for display filter rules

== (alphanumeric symbol: eq): Equal to. For example, ip.src==10.0.0.5 displays packets with the source IP address as 10.0.0.5.
!= (alphanumeric symbol: ne): Not equal to. For example, ip.src!=10.0.0.5 displays packets whose source IP address is not 10.0.0.5.
> (alphanumeric symbol: gt): Greater than. For example, frame.len>100 displays frames with a length greater than 100 bytes.
< (alphanumeric symbol: lt): Less than. For example, frame.len<100 displays frames with a length less than 100 bytes.
>= (alphanumeric symbol: ge): Greater than or equal to. For example, frame.len ge 0x100 displays frames with a length greater than or equal to 256 bytes.
<= (alphanumeric symbol: le): Less than or equal to. For example, frame.len le 0x100 displays frames with a length less than or equal to 256 bytes.

Display filter rule expressions


Logical expression
Use this type of expression to display packets that match the result of logical operations.
Logical expressions contain keywords and logical operators. For example, ftp or icmp displays all
FTP packets and ICMP packets.
Relational expression
Use this type of expression to display packets that match the result of comparison operations.
Relational expressions contain keywords and relational operators. For example, ip.len<=28 displays
IP packets that contain a value of 28 or fewer bytes in the length field.
Packet field expression
Use this type of expression to display packets that contain a specific field.
Packet field expressions contain only packet field strings. For example, tcp.flags.syn displays all
TCP packets that contain the SYN bit field.
The proto[…] expression
Use this type of expression to display packets that contain specific field values.
This type of expression contains the following elements:
• proto—Specifies a protocol layer or packet field.
• […]—Matches a number of bytes relative to a protocol layer or packet field. Values for the
bytes to be matched must be a hexadecimal integer string. The expression in brackets can use
the following formats:

 [n:m]—Matches a total of m bytes after an offset of n bytes from the beginning of the
specified protocol layer or field. To match only 1 byte, you can use both [n] and [n:1]
formats. For example, eth.src[0:3]==00:00:83 matches an Ethernet frame if the first three
bytes of its source MAC address are 0x00, 0x00, and 0x83. The eth.src[2] == 83
expression matches an Ethernet frame if the third byte of its source MAC address is 0x83.
 [n-m]—Matches a total of (m-n+1) bytes, starting from the (n+1)th byte relative to the
beginning of the specified protocol layer or packet field. For example, eth.src[1-2]==00:83
matches an Ethernet frame if the second and third bytes of its source MAC address are
0x00 and 0x83, respectively.

Restrictions and guidelines: Packet capture


To capture packets forwarded through chips, first configure a traffic behavior to mirror the traffic to
the CPU.
To capture packets forwarded by the CPU, enable packet capture directly.
The packet capture feature captures only frames that are 9196 bytes long or shorter.

Installing feature image


1. Install the packet capture feature image by using boot-loader or install commands. For
more information about the commands, see Fundamentals Command Reference.
2. Log out and then log in to the device again.

Configuring feature image-based packet capture


Restrictions and guidelines
After configuring feature image-based packet capture, you cannot configure any other commands at
the CLI until the capture finishes or is stopped.
There might be a delay for the capture to stop because of heavy traffic.

Saving captured packets to a file


To configure feature image-based packet capture and save the captured packets to a file, execute
the following command in user view:
packet-capture interface interface-type interface-number
[ capture-filter capt-expression | limit-captured-frames limit |
limit-frame-size bytes | autostop filesize kilobytes | autostop duration
seconds | autostop files numbers | capture-ring-buffer filesize kilobytes
| capture-ring-buffer duration seconds | capture-ring-buffer files
numbers ] * write filepath [ raw | { brief | verbose } ] *
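For example (the interface, filter, frame limit, and file path are illustrative only), the following sketch captures up to 100 incoming ICMP packets on GigabitEthernet 1/0/2 and saves them to flash:/icmp.pcap:
<Device> packet-capture interface gigabitethernet 1/0/2 capture-filter "icmp" limit-captured-frames 100 write flash:/icmp.pcap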

Displaying specific captured packets


To configure feature image-based packet capture and display specific packet data, execute the
following command in user view:
packet-capture interface interface-type interface-number
[ capture-filter capt-expression | display-filter disp-expression |

limit-captured-frames limit | limit-frame-size bytes | autostop duration
seconds ] * [ raw | { brief | verbose } ] *

Stopping packet capture


About this task
Use this task to manually stop packet capture.
Procedure
To stop feature image-based packet capture, press Ctrl+C.

Displaying the contents in a packet file


About this task
Use this task to display the contents of a .pcap or .pcapng file on the device. Alternatively, you can
transfer the file to a PC and use Wireshark to display the file content.
Restrictions and guidelines
To stop displaying the contents, press Ctrl+C.
Procedure
To display the contents in a local packet file, execute the following command in user view:
packet-capture read filepath [ display-filter disp-expression ] [ raw |
{ brief | verbose } ] *
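For example (the file path and display filter are illustrative only), the following sketch displays only the ICMP packets stored in the file flash:/icmp.pcap:
<Device> packet-capture read flash:/icmp.pcap display-filter "icmp"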

Packet capture configuration examples


Example: Configuring feature image-based packet capture
Network configuration
As shown in Figure 104, capture incoming IP packets of VLAN 3 on Layer 2 interface
GigabitEthernet 1/0/1 that meet the following conditions:
• Sent from 192.168.1.10 or 192.168.1.11 to 192.168.1.1.
• Forwarded through the CPU or chips.

Figure 104 Network diagram (hosts 192.168.1.10/24 and 192.168.1.11/24 in VLAN 3 connect to GigabitEthernet 1/0/1 of the device, which has the IP address 192.168.1.1/24 in VLAN 3)

Procedure
1. Install the packet capture feature.
# Display the device version information.
<Device> display version
HPE Comware Software, Version 7.1.070, Demo 01
Copyright (c) 2004-2019 Hewlett Packard Enterprise development LP. All rights
reserved.
HPE 5520 24G 4SFP+ HI Switch (R8M25A) is 0 weeks, 0 days, 5 hours, 33 minutes
Last reboot reason : Cold reboot
Boot image: flash:/boot-01.bin
Boot image version: 7.1.070, Demo 01
Compiled Oct 20 2019 16:00:00
System image: flash:/system-01.bin
System image version: 7.1.070, Demo 01
Compiled Oct 20 2019 16:00:00
...
# Prepare a packet capture feature image that is compatible with the current boot and system
images.
# Download the packet capture feature image to the device. In this example, the image is stored
on the TFTP server at 192.168.1.1.
<Device> tftp 192.168.1.1 get packet-capture-01.bin
Press CTRL+C to abort.
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 11.3M 0 11.3M 0 0 155k 0 --:--:-- 0:01:14 --:--:-- 194k
Writing file...Done.
# Install the packet capture feature image on all IRF member devices and commit the software
change. In this example, there are two IRF member devices.
<Device> install activate feature flash:/packet-capture-01.bin slot 1
Verifying the file flash:/packet-capture-01.bin on slot 1....Done.
Identifying the upgrade methods....Done.
Upgrade summary according to following table:

flash:/packet-capture-01.bin
Running Version New Version
None Demo 01

Slot Upgrade Way


1 Service Upgrade
Upgrading software images to compatible versions. Continue? [Y/N]:y
This operation might take several minutes, please wait....................Done.
<Device> install activate feature flash:/packet-capture-01.bin slot 2
Verifying the file flash:/packet-capture-01.bin on slot 2....Done.
Identifying the upgrade methods....Done.
Upgrade summary according to following table:

flash:/packet-capture-01.bin
Running Version New Version
None Demo 01

Slot Upgrade Way


2 Service Upgrade
Upgrading software images to compatible versions. Continue? [Y/N]:y
This operation might take several minutes, please wait....................Done.
<Device> install commit
This operation will take several minutes, please wait.......................Done.
# Log out and then log in to the device again so you can execute the packet-capture
interface and packet-capture read commands.
2. Apply a QoS policy to the incoming direction of GigabitEthernet 1/0/1 to capture packets from
192.168.1.10 or 192.168.1.11 to 192.168.1.1 that are forwarded through chips.
# Create an IPv4 advanced ACL to match packets that are sent from 192.168.1.10 or
192.168.1.11 to 192.168.1.1.
<Device> system-view
[Device] acl advanced 3000
[Device-acl-ipv4-adv-3000] rule permit ip source 192.168.1.10 0 destination
192.168.1.1 0
[Device-acl-ipv4-adv-3000] rule permit ip source 192.168.1.11 0 destination
192.168.1.1 0
[Device-acl-ipv4-adv-3000] quit
# Configure a traffic behavior to mirror traffic to the CPU.
[Device] traffic behavior behavior1
[Device-behavior-behavior1] mirror-to cpu
[Device-behavior-behavior1] quit
# Configure a traffic class to use the ACL to match traffic.
[Device] traffic classifier classifier1
[Device-classifier-classifier1] if-match acl 3000
[Device-classifier-classifier1] quit
# Configure a QoS policy. Associate the traffic class with the traffic behavior.
[Device] qos policy user1
[Device-qospolicy-user1] classifier classifier1 behavior behavior1
[Device-qospolicy-user1] quit

# Apply the QoS policy to the incoming direction of GigabitEthernet 1/0/1.
[Device] interface gigabitethernet 1/0/1
[Device-GigabitEthernet1/0/1] qos apply policy user1 inbound
[Device-GigabitEthernet1/0/1] quit
[Device] quit
3. Enable packet capture.
# Capture incoming traffic on GigabitEthernet 1/0/1. Set the maximum number of captured
packets to 10. Save the captured packets to the flash:/a.pcap file.
<Device> packet-capture interface gigabitethernet 1/0/1 capture-filter "vlan 3 and
src 192.168.1.10 or 192.168.1.11 and dst 192.168.1.1" limit-captured-frames 10 write
flash:/a.pcap
Capturing on 'GigabitEthernet1/0/1'
10

Verifying the configuration


# Telnet to 192.168.1.1 from 192.168.1.10. (Details not shown.)
# Display the contents in the packet file on the device.
<Device> packet-capture read flash:/a.pcap
1 0.000000 192.168.1.10 -> 192.168.1.1 TCP 62 6325 > telnet [SYN] Seq=0 Win=65535 Len=0
MSS=1460 SACK_PERM=1
2 0.000061 192.168.1.10 -> 192.168.1.1 TCP 60 6325 > telnet [ACK] Seq=1 Ack=1 Win=65535
Len=0
3 0.024370 192.168.1.10 -> 192.168.1.1 TELNET 60 Telnet Data ...
4 0.024449 192.168.1.10 -> 192.168.1.1 TELNET 78 Telnet Data ...
5 0.025766 192.168.1.10 -> 192.168.1.1 TELNET 65 Telnet Data ...
6 0.035096 192.168.1.10 -> 192.168.1.1 TELNET 60 Telnet Data ...
7 0.047317 192.168.1.10 -> 192.168.1.1 TCP 60 6325 > telnet [ACK] Seq=42 Ack=434
Win=65102 Len=0
8 0.050994 192.168.1.10 -> 192.168.1.1 TCP 60 6325 > telnet [ACK] Seq=42 Ack=436
Win=65100 Len=0
9 0.052401 192.168.1.10 -> 192.168.1.1 TCP 60 6325 > telnet [ACK] Seq=42 Ack=438
Win=65098 Len=0
10 0.057736 192.168.1.10 -> 192.168.1.1 TCP 60 6325 > telnet [ACK] Seq=42 Ack=440
Win=65096 Len=0

Configuring VCF fabric
About VCF fabric
Based on OpenStack Networking (Neutron), the Virtual Converged Framework (VCF) solution
provides virtual network services from Layer 2 to Layer 7 for cloud tenants. This solution breaks the
boundaries between the network, cloud management, and terminal platforms and transforms the IT
infrastructure to a converged framework to accommodate all applications. It also implements
automated topology discovery and automated deployment of underlay networks and overlay
networks to reduce the administrators' workload and speed up network deployment and upgrade.

VCF fabric topology


Topology for a Layer 2 data center VCF fabric
In a Layer 2 data center VCF fabric, a device has one of the following roles:
• Spine node—Connects to leaf nodes.
• Leaf node—Connects to servers.
• Border node—Located at the border of a VCF fabric to provide access to the external network.
Spine nodes and leaf nodes form a large Layer 2 network, which can be a VLAN, a VXLAN with a
centralized IP gateway, or a VXLAN with distributed IP gateways. For more information about
centralized IP gateways and distributed IP gateways, see VXLAN Configuration Guide.
Figure 105 Topology for a Layer 2 data center VCF fabric (spine nodes connect to leaf and border nodes over a VXLAN or VLAN network; leaf nodes connect to vSwitches and VMs on servers)

Topology for a Layer 3 data center VCF fabric


In a Layer 3 data center VCF fabric, a device has one of the following roles:
• Spine node—Connects to aggregate nodes.
• Aggregate node—Resides on the distribution layer and is located between leaf nodes and
spine nodes.
• Leaf node—Connects to servers.

• Border node—Located at the border of a VCF fabric to provide access to the external network.
OSPF runs on the Layer 3 networks between the spine and aggregate nodes and between the
aggregate and leaf nodes. VXLAN is used to set up the Layer 2 overlay network.
Figure 106 Topology for a Layer 3 data center VCF fabric (spine nodes connect to aggregate and border nodes over a VXLAN network; aggregate nodes connect to leaf nodes)

VCF fabric topology for a campus network


In a campus VCF fabric, a device has one of the following roles:
• Spine node—Connects to leaf nodes.
• Leaf node—Connects to access nodes.
• Access node—Connects to an upstream leaf node and downstream terminal devices.
Cascading of access nodes is supported.
• Border node—Located at the border of a VCF fabric to provide access to the external network.
Spine nodes and leaf nodes form a large Layer 2 network, which can be a VLAN, a VXLAN with a
centralized IP gateway, or a VXLAN with distributed IP gateways. For more information about
centralized IP gateways and distributed IP gateways, see VXLAN Configuration Guide.

Figure 107 VCF fabric topology for a campus network (spine nodes connect to leaf and border nodes over a VXLAN or VLAN network; leaf nodes connect to access nodes, which connect to terminal devices such as ACs and APs)

Neutron overview
Neutron concepts and components
Neutron is a component in OpenStack architecture. It provides networking services for VMs,
manages virtual network resources (including networks, subnets, DHCP, virtual routers), and creates
an isolated virtual network for each tenant. Neutron provides a unified network resource model,
based on which VCF fabric is implemented.
The following are basic concepts in Neutron:
• Network—A virtual object that can be created. It provides an independent network for each
tenant in a multitenant environment. A network is equivalent to a switch with virtual ports which
can be dynamically created and deleted.
• Subnet—An address pool that contains a group of IP addresses. Two different subnets
communicate with each other through a router.
• Port—A connection port. A router or a VM connects to a network through a port.
• Router—A virtual router that can be created and deleted. It performs routing selection and data
forwarding.
Neutron has the following components:
• Neutron server—Includes the daemon process neutron-server and multiple plug-ins
(neutron-*-plugin). The Neutron server provides an API and forwards the API calls to the
configured plugin. The plug-in maintains configuration data and relationships between routers,
networks, subnets, and ports in the Neutron database.
• Plugin agent (neutron-*-agent)—Processes data packets on virtual networks. The choice of
plug-in agents depends on Neutron plug-ins. A plug-in agent interacts with the Neutron server
and the configured Neutron plug-in through a message queue.

• DHCP agent (neutron-dhcp-agent)—Provides DHCP services for tenant networks.
• L3 agent (neutron-l3-agent)—Provides Layer 3 forwarding services to enable inter-tenant
communication and external network access.
Neutron deployment
Neutron needs to be deployed on servers and network devices.
Table 38 shows Neutron deployment on a server.
Table 38 Neutron deployment on a server

• Controller node: Neutron server, Neutron DB, message server (such as RabbitMQ server), ML2 Driver.
• Network node: neutron-openvswitch-agent, neutron-dhcp-agent.
• Compute node: neutron-openvswitch-agent, LLDP.

Table 39 shows Neutron deployments on a network device.


Table 39 Neutron deployments on a network device

• Centralized VXLAN IP gateway deployment: Spine nodes run neutron-l2-agent and neutron-l3-agent. Leaf nodes run neutron-l2-agent.
• Distributed VXLAN IP gateway deployment: Spine nodes run no Neutron components (N/A). Leaf nodes run neutron-l2-agent and neutron-l3-agent.

Figure 108 Example of Neutron deployment for centralized gateway deployment


Figure 109 Example of Neutron deployment for distributed gateway deployment



Automated VCF fabric deployment


VCF provides the following features to ease deployment:
• Automated topology discovery.
In a VCF fabric, each device uses LLDP to collect local topology information from
directly-connected peer devices. The local topology information includes connection interfaces,
roles, MAC addresses, and management interface addresses of the peer devices. If multiple

spine nodes exist in a VCF fabric, the master spine node collects the topology for the entire
network.
• Automated underlay network deployment.
Automated underlay network deployment sets up a Layer 3 underlay network (a physical Layer
3 network) for users. It is implemented by automatically executing configurations (such as IRF
configuration and Layer 3 reachability configurations) in user-defined template files.
• Automated overlay network deployment.
Automated overlay network deployment sets up an on-demand and application-oriented
overlay network (a virtual network built on top of the underlay network). It is implemented by
automatically obtaining the overlay network configuration (including VXLAN and EVPN
configuration) from the Neutron server.

Process of automated VCF fabric deployment


The device finishes automated VCF fabric deployment as follows:
1. Starts up without loading configuration and then obtains an IP address, the IP address of the
TFTP server, and a template file name from the DHCP server.
2. Determines the name of the template file to be downloaded based on the device role and the
template file name obtained from the DHCP server. For example, 1_leaf.template represents a
template file for leaf nodes.
3. Downloads the template file from the TFTP server.
4. Parses the template file and performs the following operations:
- Deploys static configurations that are independent from the VCF fabric topology.
- Deploys dynamic configurations according to the VCF fabric topology.
The topology process notifies the automation process of creation, deletion, and status
change of neighbors. Based on the topology information, the automation process completes
role discovery, automatic aggregation, and IRF fabric setup.

Template file
A template file contains the following contents:
• System-predefined variables—The variable names cannot be edited, and the variable values
are set by the VCF topology discovery feature.
• User-defined variables—The variable names and values are defined by the user. These
variables include the username and password used to establish a connection with the
RabbitMQ server, network type, and so on. The following are examples of user-defined
variables:
#USERDEF
_underlayIPRange = 10.100.0.0/16
_master_spine_mac = 1122-3344-5566
_backup_spine_mac = aabb-ccdd-eeff
_username = aaa
_password = aaa
_rbacUserRole = network-admin
_neutron_username = openstack
_neutron_password = 12345678
_neutron_ip = 172.16.1.136
_loghost_ip = 172.16.1.136
_network_type = centralized-vxlan
……

• Static configurations—Static configurations are independent from the VCF fabric topology
and can be directly executed. The following are examples of static configurations:
#STATICCFG
#
clock timezone beijing add 08:00:00
#
lldp global enable
#
stp global enable
#
• Dynamic configurations—Dynamic configurations are dependent on the VCF fabric topology.
The device first obtains the topology information through LLDP and then executes dynamic
configurations. The following are examples of dynamic configurations:
#
interface $$_underlayIntfDown
port link-mode route
ip address unnumbered interface LoopBack0
ospf 1 area 0.0.0.0
ospf network-type p2p
lldp management-address arp-learning
lldp tlv-enable basic-tlv management-address-tlv interface LoopBack0
#

VCF fabric tasks at a glance


To configure a VCF fabric, perform the following tasks:
• Configuring automated VCF fabric deployment
No configuration is required on the device for automated VCF fabric deployment.
However, you must configure the DHCP server and the TFTP server so that the device can
download and parse a template file to complete automated VCF fabric deployment.
• (Optional.) Adjust VCF fabric deployment
If the device cannot obtain or parse the template file to complete automated VCF fabric
deployment, choose the following tasks as needed:
- Enabling VCF fabric topology discovery
- Configuring automated underlay network deployment
- Configuring automated overlay network deployment

Configuring automated VCF fabric deployment


Restrictions and guidelines
On a campus network, links between two access nodes cascaded through GigabitEthernet
interfaces and links between leaf nodes and access nodes are automatically aggregated. For links
between spine nodes and leaf nodes, the trunk permit vlan command is automatically
executed.
On a campus network where multiple leaf nodes that have an access node attached form an IRF
fabric, make sure only one link exists between each leaf node and its connected access node.

Do not perform link migration when devices in the VCF fabric are in the process of coming online or
powering down after the automated VCF fabric deployment finishes. A violation might cause
link-related configurations to fail to update.
The version format of a template file for automated VCF fabric deployment is x.y. Only the x part is
examined during a version compatibility check. For successful automated deployment, make sure x
in the version of the template file to be used is not greater than x in the supported version. To display
the supported version of the template file for automated VCF fabric deployment, use the display
vcf-fabric underlay template-version command.
If the template file does not include IRF configurations, the device does not save the configurations
after executing all configurations in the template file. To save the configurations, use the save
command.
Two devices with the same role can automatically set up an IRF fabric only when the IRF physical
interfaces on the devices are connected.
Two IRF member devices in an IRF fabric use the following rules to elect the IRF master during
automated VCF fabric deployment:
• If the uptime of both devices is shorter than two hours, the device with the higher bridge MAC
address becomes the IRF master.
• If the uptime of one device is equal to or longer than two hours, that device becomes the IRF
master.
• If the uptime of both devices is equal to or longer than two hours, the IRF fabric cannot be set
up. You must manually reboot one of the member devices. The rebooted device will become the
IRF subordinate.
If the IRF member ID of a device is not 1, the IRF master might reboot during automatic IRF fabric
setup.
Procedure
1. Finish the underlay network planning (such as IP address assignment, reliability design, and
routing deployment) based on user requirements.
2. Configure the DHCP server.
Configure the IP address of the device, the IP address of the TFTP server, and names of
template files saved on the TFTP server. For more information, see the user manual of the
DHCP server.
3. Configure the TFTP server.
Create template files and save the template files to the TFTP server.
4. (Optional.) Configure the NTP server.
5. Connect the device to the VCF fabric and start the device.
After startup, the device uses a management Ethernet interface or VLAN-interface 1 to connect
to the fabric management network. Then, it downloads the template file corresponding to its
device role and parses the template file to complete automated VCF fabric deployment.
6. (Optional.) Save the deployed configuration.
If the template file does not include IRF configurations, the device will not save the
configurations after executing all configurations in the template file. To save the configurations,
use the save command. For more information about this command, see configuration file
management commands in Fundamentals Command Reference.

Enabling VCF fabric topology discovery


1. Enter system view.
system-view

2. Enable LLDP globally.
lldp global enable
By default, LLDP is disabled globally.
You must enable LLDP globally before you enable VCF fabric topology discovery, because the
device needs LLDP to collect topology data of directly-connected devices.
3. Enable VCF fabric topology discovery.
vcf-fabric topology enable
By default, VCF fabric topology discovery is disabled.
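For example, the following commands (a minimal sketch with an illustrative device name) enable LLDP globally and then enable VCF fabric topology discovery:
<Device> system-view
[Device] lldp global enable
[Device] vcf-fabric topology enable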

Configuring automated underlay network deployment
Specifying the template file for automated underlay network deployment
1. Enter system view.
system-view
2. Specify the template file for automated underlay network deployment.
vcf-fabric underlay autoconfigure template
By default, no template file is specified for automated underlay network deployment.

Specifying the role of the device in the VCF fabric


About this task
Perform this task to change the role of the device in the VCF fabric.
Restrictions and guidelines
If the device completes automated underlay network deployment by automatically downloading and
parsing a template file, reboot the device after you change the device role. In this way, the device can
obtain the template file corresponding to the new role and complete the automated underlay network
deployment.
To use devices that have come online after automated deployment to form an IRF fabric, make sure
all member devices in the IRF fabric have the same VCF fabric role.
Procedure
1. Enter system view.
system-view
2. Specify the role of the device in the VCF fabric.
vcf-fabric role { access | aggr | leaf | spine }
By default, the device is a leaf node.
3. Return to system view.
quit
4. Reboot the device.
reboot
For the new role to take effect, you must reboot the device.
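For example, the following commands (a sketch with an illustrative device name and role) change the device role to spine and then reboot the device:
<Device> system-view
[Device] vcf-fabric role spine
[Device] quit
<Device> reboot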

Configuring the device as a master spine node
About this task
If multiple spine nodes exist on a VCF fabric, you must configure a device as the master spine node
to collect the topology for the entire VCF fabric network.
Procedure
1. Enter system view.
system-view
2. Configure the device as a master spine node.
vcf-fabric spine-role master
By default, the device is not a master spine node.
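For example (illustrative prompts):
<Device> system-view
[Device] vcf-fabric spine-role master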

Setting the NETCONF username and password


About this task
During automated underlay network deployment, a spine node establishes NETCONF sessions with
other devices in the underlay network to collect the global topology. If a user changes the username
or password on a device in the underlay network, the NETCONF session between the device and the
spine node is disconnected. As a result, the spine node fails to sense topology changes. To resolve the
issue, perform this task to change the NETCONF username or password on the spine node.
Restrictions and guidelines
This feature is available on spine nodes. Changing the NETCONF username or password will
disconnect NETCONF sessions between the spine node and leaf nodes. If multiple spine nodes
exist in the VCF fabric, changing the NETCONF username or password on the master spine node
will also disconnect NETCONF sessions between the master spine node and other spine nodes.
Make sure all devices in the underlay network use the same username and password for NETCONF
session establishment.
Procedure
1. Enter system view.
system-view
2. Set the NETCONF username.
vcf-fabric underlay netconf-username username
By default, the device uses the NETCONF username defined in the template file for automated
VCF fabric deployment.
3. Set the NETCONF password.
vcf-fabric underlay netconf-password { cipher | simple } string
By default, the device uses the NETCONF password defined in the template file for automated
VCF fabric deployment.
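The following is a minimal sketch run on the spine node. The username and password shown are hypothetical and must match the NETCONF credentials used by the other devices in the underlay network:
<Device> system-view
[Device] vcf-fabric underlay netconf-username admin
[Device] vcf-fabric underlay netconf-password simple Netconf@123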

Pausing automated underlay network deployment


About this task
If you pause automated underlay network deployment, the device saves its current status. It does not
respond to new LLDP events, set up IRF fabrics, aggregate links, or discover uplink or downlink
interfaces.

Perform this task if all devices in the VCF fabric complete automated deployment and new devices
are to be added to the VCF fabric.
Procedure
1. Enter system view.
system-view
2. Pause automated underlay network deployment.
vcf-fabric underlay pause
By default, automated underlay network deployment is not paused.

Configuring automated overlay network deployment
Restrictions and guidelines for automated overlay network deployment
If the network type is VLAN or VXLAN with a centralized IP gateway, perform this task on both the
spine node and the leaf nodes.
If the network type is VXLAN with distributed IP gateways, perform this task on leaf nodes.
As a best practice, do not perform any of the following tasks while the device is communicating with
a RabbitMQ server:
• Change the source IPv4 address for the device to communicate with RabbitMQ servers.
• Bring up or shut down a port connected to the RabbitMQ server.
If you do so, it will take the CLI a long time to respond to the l2agent enable, undo l2agent
enable, l3agent enable, or undo l3agent enable command.
Automated overlay network deployment is not supported on aggregate or access nodes.

Automated overlay network deployment tasks at a glance


To configure automated overlay network deployment, perform the following tasks:
1. Configuring parameters for the device to communicate with RabbitMQ servers
2. Specifying the network type
3. Enabling L2 agent
On a VLAN network or a VXLAN network with a centralized IP gateway, perform this task on
both spine nodes and leaf nodes.
On a VXLAN network with distributed IP gateways, perform this task only on leaf nodes.
4. Enabling L3 agent
On a VLAN network or a VXLAN network with a centralized IP gateway, perform this task only
on spine nodes.
On a VXLAN network with distributed IP gateways, perform this task only on leaf nodes.
5. Configuring the border node
Perform this task only when the device is the border node.
6. (Optional.) Enabling local proxy ARP
7. (Optional.) Configuring the MAC address of VSI interfaces

Prerequisites for automated overlay network deployment
Before you configure automated overlay network deployment, you must complete the following
tasks:
1. Install OpenStack Neutron components and plugins on the controller node in the VCF fabric.
2. Install OpenStack Nova components, openvswitch, and neutron-ovs-agent on compute nodes
in the VCF fabric.
3. Make sure LLDP and automated VCF fabric topology discovery are enabled.

Configuring parameters for the device to communicate with RabbitMQ servers
About this task
In the VCF fabric, the device communicates with the Neutron server through RabbitMQ servers. You
must specify the IP address, login username, login password, and listening port for the device to
communicate with RabbitMQ servers.
Restrictions and guidelines
Make sure the RabbitMQ server settings on the device are the same as those on the controller node.
If the durable attribute of RabbitMQ queues is set on the Neutron server, you must enable creation of
RabbitMQ durable queues on the device so that RabbitMQ queues can be correctly created.
When you set the RabbitMQ server parameters or remove the settings, make sure the route
between the device and the RabbitMQ server is reachable. Otherwise, the CLI does not respond
until the TCP connection between the device and the RabbitMQ server is terminated.
Multiple virtual hosts might exist on the RabbitMQ server. Each virtual host can independently
provide RabbitMQ services for the device. For the device to correctly communicate with the Neutron
server, specify the same virtual host on the device and the Neutron server.
Procedure
1. Enter system view.
system-view
2. Enable Neutron and enter Neutron view.
neutron
By default, Neutron is disabled.
3. Specify the IPv4 address, port number, and MPLS L3VPN instance of a RabbitMQ server.
rabbit host ip ipv4-address [ port port-number ] [ vpn-instance
vpn-instance-name ]
By default, no IPv4 address or MPLS L3VPN instance of a RabbitMQ server is specified, and
the port number of a RabbitMQ server is 5672.
4. Specify the source IPv4 address for the device to communicate with RabbitMQ servers.
rabbit source-ip ipv4-address [ vpn-instance vpn-instance-name ]
By default, no source IPv4 address is specified for the device to communicate with RabbitMQ
servers. The device automatically selects a source IPv4 address through the routing protocol to
communicate with RabbitMQ servers.
5. (Optional.) Enable creation of RabbitMQ durable queues.
rabbit durable-queue enable
By default, RabbitMQ non-durable queues are created.
6. Configure the username for the device to establish a connection with a RabbitMQ server.

rabbit user username
By default, the device uses username guest to establish a connection with a RabbitMQ server.
7. Configure the password for the device to establish a connection with a RabbitMQ server.
rabbit password { cipher | plain } string
By default, the device uses plaintext password guest to establish a connection with a
RabbitMQ server.
8. Specify a virtual host to provide RabbitMQ services.
rabbit virtual-host hostname
By default, the virtual host / provides RabbitMQ services for the device.
9. Specify the username and password for the device to deploy configurations through RESTful.
restful user username password { cipher | plain } password
By default, no username or password is configured for the device to deploy configurations
through RESTful.
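The following is a minimal sketch that reuses the values from the user-defined variables shown in the template file example (RabbitMQ server 172.16.1.136, username openstack, password 12345678). The Neutron view prompt is illustrative:
<Device> system-view
[Device] neutron
[Device-neutron] rabbit host ip 172.16.1.136
[Device-neutron] rabbit user openstack
[Device-neutron] rabbit password plain 12345678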

Specifying the network type


About this task
After you change the network type of the VCF fabric where the device resides, Neutron deploys new
configuration to all devices according to the new network type.
Procedure
1. Enter system view.
system-view
2. Enter Neutron view.
neutron
3. Specify the network type.
network-type { centralized-vxlan | distributed-vxlan | vlan }
By default, the network type is VLAN.
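For example, to set the network type to VXLAN with distributed IP gateways (illustrative prompts):
<Device> system-view
[Device] neutron
[Device-neutron] network-type distributed-vxlan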

Enabling L2 agent
About this task
Layer 2 agent (L2 agent) responds to OpenStack events such as network creation, subnet creation,
and port creation. It deploys Layer 2 networking to provide Layer 2 connectivity within a virtual
network and Layer 2 isolation between different virtual networks.
Restrictions and guidelines
On a VLAN network or a VXLAN network with a centralized IP gateway, perform this task on both
spine nodes and leaf nodes.
On a VXLAN network with distributed IP gateways, perform this task only on leaf nodes.
Procedure
1. Enter system view.
system-view
2. Enter Neutron view.
neutron
3. Enable the L2 agent.

l2agent enable
By default, the L2 agent is disabled.

Enabling L3 agent
About this task
Layer 3 agent (L3 agent) responds to OpenStack events such as virtual router creation, interface
creation, and gateway configuration. It deploys the IP gateways to provide Layer 3 forwarding
services for VMs.
Restrictions and guidelines
On a VLAN network or a VXLAN network with a centralized IP gateway, perform this task only on
spine nodes.
On a VXLAN network with distributed IP gateways, perform this task only on leaf nodes.
Procedure
1. Enter system view.
system-view
2. Enter Neutron view.
neutron
3. Enable the L3 agent.
l3agent enable
By default, the L3 agent is disabled.

Configuring the border node


About this task
On a VXLAN network with a centralized IP gateway or on a VLAN network, configure a spine node as
the border node. On a VXLAN network with distributed IP gateways, configure a leaf node as the
border node.
You can use the following methods to configure the IP address of the border gateway:
• Manually specify the IP address of the border gateway.
• Enable the border node service on the border gateway and create the external network and
routers on the OpenStack Dashboard. Then, VCF fabric automatically deploys the routing
configuration to the device to implement connectivity between tenant networks and the external
network.
If the manually specified IP address is different from the IP address assigned by VCF fabric, the IP
address assigned by VCF fabric takes effect.
The border node connects to the external network through an interface which belongs to the global
VPN instance. For the traffic from the external network to reach a tenant network, the border node
needs to add the routes of the tenant VPN instance into the routing table of the global VPN instance.
You must configure export route targets of the tenant VPN instance as import route targets of the
global VPN instance. This setting enables the global VPN instance to import routes of the tenant
VPN instance.
Procedure
1. Enter system view.
system-view
2. Enter Neutron view.

neutron
3. Enable the border node service.
border enable
By default, the device is not a border node.
4. (Optional.) Specify the IPv4 address of the border gateway.
gateway ip ipv4-address
By default, the IPv4 address of the border gateway is not specified.
5. Configure export route targets for a tenant VPN instance.
vpn-target target export-extcommunity
By default, no export route targets are configured for a tenant VPN instance.
6. (Optional.) Configure import route targets for a tenant VPN instance.
vpn-target target import-extcommunity
By default, no import route targets are configured for a tenant VPN instance.
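The following is a minimal sketch for a border node. The gateway address and route target values are hypothetical and must match your addressing and VPN plan; the prompts are illustrative:
<Device> system-view
[Device] neutron
[Device-neutron] border enable
[Device-neutron] gateway ip 192.168.100.1
[Device-neutron] vpn-target 100:1 export-extcommunity
[Device-neutron] vpn-target 100:1 import-extcommunity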

Enabling local proxy ARP


About this task
This feature enables the device to use the MAC address of VSI interfaces to answer ARP requests
for MAC addresses of VMs on a different site from the requesting VMs.
Restrictions and guidelines
Perform this task only on leaf nodes on a VXLAN network with distributed IP gateways.
This configuration takes effect on VSI interfaces that are created after the proxy-arp enable
command is executed. It does not take effect on existing VSI interfaces.
Procedure
1. Enter system view.
system-view
2. Enter Neutron view.
neutron
3. Enable local proxy ARP.
proxy-arp enable
By default, local proxy ARP is disabled.

Configuring the MAC address of VSI interfaces


About this task
After you perform this task, VCF fabric assigns the MAC address to all VSI interfaces newly created
by automated overlay network deployment on the device.
Restrictions and guidelines
Perform this task only on leaf nodes on a VXLAN network with distributed IP gateways.
This configuration takes effect only on VSI interfaces newly created after this command is executed.
Procedure
1. Enter system view.
system-view

2. Enter Neutron view.
neutron
3. Configure the MAC address of VSI interfaces.
vsi-mac mac-address
By default, no MAC address is configured for VSI interfaces.
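For example, to have automated overlay network deployment assign a hypothetical unicast MAC address to newly created VSI interfaces (illustrative prompts and address):
<Device> system-view
[Device] neutron
[Device-neutron] vsi-mac 0001-0001-0001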

Display and maintenance commands for VCF fabric
Execute display commands in any view.

• Display the role of the device in the VCF fabric: display vcf-fabric role
• Display VCF fabric topology information: display vcf-fabric topology
• Display information about automated underlay network deployment: display vcf-fabric underlay autoconfigure
• Display the supported version and the current version of the template file for automated VCF fabric provisioning: display vcf-fabric underlay template-version

Using Ansible for automated configuration management
About Ansible
Ansible is a configuration management tool programmed in Python. It uses SSH to connect to managed devices.

Ansible network architecture


As shown in Figure 110, an Ansible system consists of the following elements:
• Manager—A host installed with the Ansible environment. For more information about the
Ansible environment, see Ansible documentation.
• Managed devices—Devices to be managed. These devices do not need to install any agent
software. They only need to be able to act as an SSH server. The manager communicates with
managed devices through SSH to deploy configuration files.
HPE devices can act as managed devices.
Figure 110 Ansible network architecture


How Ansible works


The following steps describe how Ansible works:
1. On the manager, create a configuration file and specify the destination device.
2. The manager (SSH client) initiates an SSH connection to the device (SSH server).
3. The manager deploys the configuration file to the device.
4. After receiving a configuration file from the manager, the device loads the configuration file.

Restrictions and guidelines


Not all service modules are configurable through Ansible. To identify the service modules that you
can configure by using Ansible, access the Comware 7 Python library.

Configuring the device for management with Ansible
Before you use Ansible to configure the device, complete the following tasks:
• Configure a time protocol (NTP or PTP) or manually configure the system time on the Ansible
server and the device to synchronize their system time. For more information about NTP and
PTP configuration, see Network Management and Monitoring Configuration Guide.
• Configure the device as an SSH server. For more information about SSH configuration, see
Security Configuration Guide.

Device setup examples for management with Ansible
Example: Setting up the device for management with Ansible
Network configuration
As shown in Figure 111, enable SSH server on the device and use the Ansible manager to manage
the device over SSH.
Figure 111 Network diagram

Prerequisites
Assign IP addresses to the device and manager so you can access the device from the manager.
(Details not shown.)
Procedure
1. Configure a time protocol (NTP or PTP) or manually configure the system time on both the
device and manager so they use the same system time. (Details not shown.)
2. Configure the device as an SSH server:
# Create local key pairs. (Details not shown.)
# Create a local user named abc and set the password to 123456 in plain text.
<Device> system-view
[Device] local-user abc
[Device-luser-manage-abc] password simple 123456
# Assign the network-admin user role to the user and authorize the user to use SSH, HTTP, and
HTTPS services.
[Device-luser-manage-abc] authorization-attribute user-role network-admin
[Device-luser-manage-abc] service-type ssh http https
[Device-luser-manage-abc] quit
# Enable NETCONF over SSH.
[Device] netconf ssh server enable

# Enable scheme authentication for SSH login and assign the network-admin user role to the
login users.
[Device] line vty 0 63
[Device-line-vty0-63] authentication-mode scheme
[Device-line-vty0-63] user-role network-admin
[Device-line-vty0-63] quit
# Enable the SSH server.
[Device] ssh server enable
# Authorize SSH user abc to use all service types, including SCP, SFTP, Stelnet, and
NETCONF. Set the authentication method to password.
[Device] ssh user abc service-type all authentication-type password
# Enable the SFTP server or SCP server.
- If the device supports SFTP, enable the SFTP server.
[Device] sftp server enable
- If the device does not support SFTP, enable the SCP server.
[Device] scp server enable

Procedure
Install Ansible on the manager. Create a configuration script and deploy the script. For more
information, see the relevant documents.

Configuring Chef
About Chef
Chef is an open-source configuration management tool based on the Ruby language. You can use
Ruby to create cookbooks and save them to a server, and then use the server for centralized
configuration enforcement and management.

Chef network framework


Figure 112 Chef network framework

As shown in Figure 112, Chef operates in a client/server network framework. Basic Chef network
components include the Chef server, Chef clients, and workstations.
Chef server
The Chef server is used to centrally manage Chef clients. It has the following functions:
• Creates and deploys cookbooks to Chef clients on demand.
• Creates .pem key files for Chef clients and workstations. Key files include the following two
types:
 User key file—Stores user authentication information for a Chef client or a workstation. The
Chef server uses this file to verify the validity of a Chef client or workstation. Before the Chef
client or workstation initiates a connection to the Chef server, make sure the user key file is
downloaded to the Chef client or workstation.
 Organization key file—Stores authentication information for an organization. For
management convenience, you can classify Chef clients or workstations that have the same
type of attributes into organizations. The Chef server uses organization key files to verify the
validity of organizations. Before a Chef client or workstation initiates a connection to the
Chef server, make sure the organization key file is downloaded to the Chef client or
workstation.
For information about installing and configuring the Chef server, see the official Chef website at
https://www.chef.io/.
Workstation
Workstations provide the interface for you to interact with the Chef server. You can create or modify
cookbooks on a workstation and then upload the cookbooks to the Chef server.

A workstation can be hosted by the same host as the Chef server. For information about installing
and configuring the workstation, see the official Chef website at
https://www.chef.io/.
Chef client
Chef clients are network devices managed by the Chef server. Chef clients download cookbooks
from the Chef server and use the settings in the cookbooks.
The device supports Chef 12.3.0 client.

Chef resources
Chef uses Ruby to define configuration items. A configuration item is defined as a resource. A
cookbook contains a set of resources for one feature.
Chef manages types of resources. Each resource has a type, a name, one or more properties, and
one action. Every property has a value. The value specifies the state desired for the resource. You
can specify the state of a device by setting values for properties regardless of how the device enters
the state. The following resource example shows how to configure a device to create VLAN 2 and
configure the description for VLAN 2.
netdev_vlan 'vlan2' do
vlan_id 2
description 'chef-vlan2'
action :create
end

The following are the resource type, resource name, properties, and actions:
• netdev_vlan—Type of the resource.
• vlan2—Name of the resource. The name is the unique identifier of the resource.
• do/end—Indicates the beginning and end of a Ruby block that contains properties and actions.
All Chef resources must be written by using the do/end syntax.
• vlan_id—Property for specifying a VLAN. In this example, VLAN 2 is specified.
• description—Property for configuring the description. In this example, the description for
VLAN 2 is chef-vlan2.
• create—Action for creating or modifying a resource. If the resource does not exist, this action
creates the resource. If the resource already exists, this action modifies the resource with the
new settings. This action is the default action for Chef. If you do not specify an action for a
resource, the create action is used.
• delete—Action for deleting a resource.
Chef supports only the create and delete actions.
For more information about resource types supported by Chef, see "Chef resources."

Chef configuration file


You can manually configure a Chef configuration file. A Chef configuration file contains the following
items:
• Attributes for log messages generated by a Chef client.
• Directories for storing the key files on the Chef server and Chef client.
• Directory for storing the resource files on the Chef client.
After Chef starts up, the Chef client sends its key file specified in the Chef configuration file to the
Chef server to request authentication. The Chef server compares its local key file for the client with

the received key file. If the two files are consistent, the Chef client passes the authentication. The
Chef client then downloads the resource file to the directory specified in the Chef configuration file,
loads the settings in the resource file, and outputs log messages as specified.
Table 40 Chef configuration file description

• log_level: (Optional.) Severity level for log messages. Available values include :auto, :debug, :info, :warn, :error, and :fatal. The severity levels in ascending order are :debug, :info, :warn (:auto), :error, and :fatal. The default severity level is :auto, which is the same as :warn.
• log_location: Log output mode.
  - STDOUT: Outputs standard Chef success log messages to a file. With this mode, you can specify the destination file for outputting standard Chef success log messages when you execute the third-part-process start command. The standard Chef error log messages are output to the configuration terminal.
  - STDERR: Outputs standard Chef error log messages to a file. With this mode, you can specify the destination file for outputting standard Chef error log messages when you execute the third-part-process start command. The standard Chef success log messages are output to the configuration terminal.
  - logfilepath: Outputs all log messages to a file, for example, flash:/cheflog/a.log.
  If you specify none of the options, all log messages are output to the configuration terminal.
• node_name: Chef client name. A Chef client name is used to identify a Chef client. It is different from the device name configured by using the sysname command.
• chef_server_url: URL of the Chef server and name of the organization created on the Chef server, in the format of https://localhost:port/organizations/ORG_NAME. The localhost argument represents the name or IP address of the Chef server. The port argument represents the port number of the Chef server. The ORG_NAME argument represents the name of the organization.
• validation_key: Path and name of the local organization key file, in the format of flash:/chef/validator.pem.
• client_key: Path and name of the local user key file, in the format of flash:/chef/client.pem.
• cookbook_path: Path for the resource files, in the format of [ 'flash:/chef-repo/cookbooks' ].

Restrictions and guidelines: Chef configuration


The Chef server cannot run a lower version than Chef clients.

Prerequisites for Chef
Before configuring Chef on the device, complete the following tasks on the device:
• Enable NETCONF over SSH. The Chef server sends configuration information to Chef clients
through NETCONF over SSH. For information about NETCONF over SSH, see "Configuring
NETCONF."
• Configure SSH login. Chef clients communicate with the Chef server through SSH. For
information about SSH login, see Fundamentals Configuration Guide.

Starting Chef
Configuring the Chef server
1. Create key files for the workstation and the Chef client.
2. Create a Chef configuration file for the Chef client.
For more information about configuring the Chef server, see the Chef server installation and
configuration guides.

Configuring a workstation
1. Create the working path for the workstation.
2. Create the directory for storing the Chef configuration file for the workstation.
3. Create a Chef configuration file for the workstation.
4. Download the key file for the workstation from the Chef server to the directory specified in the
workstation configuration file.
5. Create a Chef resource file.
6. Upload the resource file to the Chef server.
For more information about configuring a workstation, see the workstation installation and
configuration guides.

Configuring a Chef client


1. Download the key file from the Chef server to a directory on the Chef client.
The directory must be the same as the directory specified in the Chef client configuration file.
2. Download the Chef configuration file from the Chef server to a directory on the Chef client.
The directory must be the same as the directory that will be specified by using the
--config=filepath option in the third-part-process start command.
3. Start Chef on the device:
a. Enter system view.
system-view
b. Start Chef.
third-part-process start name chef-client arg --config=filepath
--runlist recipe[Directory]
By default, Chef is shut down.

• --config=filepath: Specifies the path and name of the Chef configuration file.
• --runlist recipe[Directory]: Specifies the name of the directory that contains files and subdirectories associated with the resource.

For more information about the third-part-process start command, see "Monitoring and maintaining processes."

Shutting down Chef


Prerequisites
Before you shut down Chef, execute the display process all command to identify the ID of the
Chef process. This command displays information about all processes on the device. Check the
following fields:
• THIRD—This field displays Y for a third-party process.
• COMMAND—This field displays chef-client /opt/ruby/b for the Chef process.
• PID—Process ID.
Procedure
1. Enter system view.
system-view
2. Shut down Chef.
third-part-process stop pid pid-list
For more information about the third-part-process stop command, see "Monitoring
and maintaining processes."
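For example, if the display process all output shows that the Chef process ID is 465 (a hypothetical value), shut down Chef as follows:
<Device> system-view
[Device] third-part-process stop pid 465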

Chef configuration examples


Example: Configuring Chef
Network configuration
As shown in Figure 113, the device is connected to the Chef server. Use Chef to configure the device
to create VLAN 3.
Figure 113 Network diagram

(Chef client: 1.1.1.1/24. Chef server: 1.1.1.2/24. The workstation connects to the Chef server.)

Procedure
1. Configure the Chef server:
# Create user key file admin.pem for the workstation. Specify the workstation username as
Herbert George Wells, the Email address as [email protected], and the password as 123456.

$ chef-server-ctl user-create Herbert George Wells [email protected] 123456
--filename=/etc/chef/admin.pem
# Create organization key file admin_org.pem for the workstation. Specify the abbreviated
organization name as ABC and the organization name as ABC Technologies Co., Limited.
Associate the organization with the user Herbert.
$ chef-server-ctl org-create ABC_org "ABC Technologies Co., Limited"
--association_user Herbert --filename=/etc/chef/admin_org.pem
# Create user key file client.pem for the Chef client. Specify the Chef client username as
Herbert George Wells, the Email address as [email protected], and the password as 123456.
$ chef-server-ctl user-create Herbert George Wells [email protected] 123456
--filename=/etc/chef/client.pem
# Create organization key file validator.pem for the Chef client. Specify the abbreviated
organization name as ABC and the organization name as ABC Technologies Co., Limited.
Associate the organization with the user Herbert.
$ chef-server-ctl org-create ABC "ABC Technologies Co., Limited" –association_user
Herbert –filename =/etc/chef/validator.pem
# Create Chef configuration file chefclient.rb for the Chef client.
log_level :info
log_location STDOUT
node_name 'Herbert'
chef_server_url 'https://1.1.1.2:443/organizations/abc'
validation_key 'flash:/chef/validator.pem'
client_key 'flash:/chef/client.pem'
cookbook_path [ 'flash:/chef-repo/cookbooks' ]
2. Configure the workstation:
# Create the chef-repo directory on the workstation. This directory will be used as the working
path.
$ mkdir /chef-repo
# Create the .chef directory. This directory will be used to store the Chef configuration file for
the workstation.
$ mkdir -p /chef-repo/.chef
# Create Chef configuration file knife.rb in the /chef-repo/.chef directory.
log_level :info
log_location STDOUT
node_name 'admin'
client_key '/root/chef-repo/.chef/admin.pem'
validation_key '/root/chef-repo/.chef/admin_org.pem'
chef_server_url 'https://chef-server:443/organizations/abc'
# Use TFTP or FTP to download the key files for the workstation from the Chef server to the
/chef-repo/.chef directory on the workstation. (Details not shown.)
# Create resource directory netdev.
$ knife cookbook create netdev
After the command is executed, the netdev directory is created in the current directory. The
directory contains files and subdirectories for the resource. The recipes directory stores the
resource file.
# Create resource file default.rb in the recipes directory.
netdev_vlan 'vlan3' do
vlan_id 3
action :create
end

# Upload the resource file to the Chef server.
$ knife cookbook upload --all
3. Configure the Chef client:
# Configure SSH login and enable NETCONF over SSH on the device. (Details not shown.)
# Use TFTP or FTP to download Chef configuration file chefclient.rb from the Chef server to
the root directory of the Flash memory on the Chef client. Make sure this directory is the same
as the directory specified by using the --config=filepath option in the
third-part-process start command.
# Use TFTP or FTP to download key files validator.pem and client.pem from the Chef server
to the flash:/chef/ directory.
# Start Chef. Specify the Chef configuration file name and path as flash:/chefclient.rb and the
resource file name as netdev.
<ChefClient> system-view
[ChefClient] third-part-process start name chef-client arg
--config=flash:/chefclient.rb --runlist recipe[netdev]
After the command is executed, the Chef client downloads the resource file from the Chef
server and loads the settings in the resource file.

Chef resources
netdev_device
Use this resource to specify a device name for a Chef client, and specify the SSH username and
password used by the client to connect to the Chef server.
Properties and action
Table 41 Properties and action for netdev_device

• hostname: Specifies the device name. Value: string, case insensitive, 1 to 64 characters.
• user: Specifies the username for SSH login. Value: string, case sensitive, 1 to 55 characters.
• password: Specifies the password for SSH login. Value: string, case sensitive. Length and form requirements in non-FIPS mode: 1 to 63 characters in plaintext form, 1 to 110 characters in hashed form, or 1 to 117 characters in encrypted form.
• action: Specifies the action for the resource. Value: symbol. create establishes a NETCONF connection to the Chef server; delete closes the NETCONF connection to the Chef server. The default action is create.

Resource example
# Configure the device name as ChefClient, and set the SSH username and password to user and
123456 for the Chef client.
netdev_device 'device' do
hostname "ChefClient"
user "user"
passwd "123456"
end

netdev_interface
Use this resource to configure attributes for an interface.
Properties
Table 42 Properties for netdev_interface

• ifindex (index property): Specifies an interface by its index. Value: unsigned integer.
• description: Configures the description for the interface. Value: string, case sensitive, 1 to 255 characters.
• admin: Specifies the management state for the interface. Value: symbol. up brings up the interface; down shuts down the interface.
• speed: Specifies the interface rate. Value: symbol. Available values: auto (autonegotiation), 10m (10 Mbps), 100m (100 Mbps), 1g (1 Gbps), 2.5g (2.5 Gbps), 10g (10 Gbps), and 40g (40 Gbps).
• duplex: Sets the duplex mode. Value: symbol. Available values: full (full-duplex mode), half (half-duplex mode), and auto (autonegotiation). This attribute applies only to Ethernet interfaces.
• linktype: Sets the link type for the interface. Value: symbol. Available values: access, trunk, and hybrid. This attribute applies only to Layer 2 Ethernet interfaces.
• portlayer: Sets the operation mode for the interface. Value: symbol. Available values: bridge (Layer 2 mode) and route (Layer 3 mode).
• mtu: Sets the MTU permitted by the interface. Value: unsigned integer, in bytes. The value range depends on the interface type. This attribute applies only to Layer 3 Ethernet interfaces.

Resource example
# Configure the following attributes for Ethernet interface 2:
• Interface description—ifindex2.
• Management state—Up.
• Interface rate—Autonegotiation.
• Duplex mode—Autonegotiation.
• Link type—Hybrid.
• Operation mode—Layer 2.
• MTU—1500 bytes.
netdev_interface 'ifindex2' do
ifindex 2
description 'ifindex2'
admin 'up'
speed 'auto'
duplex 'auto'
linktype 'hybrid'
portlayer 'bridge'
mtu 1500
end

netdev_l2_interface
Use this resource to configure VLAN attributes for a Layer 2 Ethernet interface.
Properties
Table 43 Properties for netdev_l2_interface

• ifindex (index property): Specifies a Layer 2 Ethernet interface by its index. Value: unsigned integer.
• pvid: Specifies the PVID for the interface. Value: unsigned integer, in the range of 1 to 4094.
• permit_vlan_list: Specifies the VLANs permitted by the interface. Value: string, a comma-separated list of VLAN IDs or VLAN ID ranges, for example, 1,2,3,5-8,10-20. The value range for each VLAN ID is 1 to 4094. The string cannot end with a comma (,), hyphen (-), or space.
• untagged_vlan_list: Specifies the VLANs from which the interface sends packets after removing VLAN tags. Value: string in the same format and with the same restrictions as permit_vlan_list. A VLAN cannot be on the untagged list and the tagged list at the same time.
• tagged_vlan_list: Specifies the VLANs from which the interface sends packets without removing VLAN tags. Value: string in the same format and with the same restrictions as permit_vlan_list. A VLAN cannot be on the untagged list and the tagged list at the same time.

Resource example
# Specify the PVID as 2 for interface 5, and configure the interface to permit packets from VLANs 2
through 6. Configure the interface to forward packets from VLAN 3 after removing VLAN tags and
forward packets from VLANs 2, 4, 5, and 6 without removing VLAN tags.
netdev_l2_interface 'ifindex5' do
ifindex 5
pvid 2
permit_vlan_list '2-6'
tagged_vlan_list '2,4-6'
untagged_vlan_list '3'
end

netdev_l2vpn
Use this resource to enable or disable L2VPN.
Properties
Table 44 Properties for netdev_l2vpn

• enable: Enables or disables L2VPN. Value: symbol. true enables L2VPN; false disables L2VPN. The default value is false.

Resource example
# Enable L2VPN.
netdev_l2vpn 'l2vpn' do
enable true
end

netdev_lagg
Use this resource to create, modify, or delete an aggregation group.
Properties and action
Table 45 Properties and action for netdev_lagg

• group_id (index property): Specifies an aggregation group ID. Value: unsigned integer. The value range for a Layer 2 aggregation group is 1 to 1024. The value range for a Layer 3 aggregation group is 16385 to 17408.
• linkmode: Specifies the aggregation mode. Value: symbol. static specifies static aggregation; dynamic specifies dynamic aggregation.
• addports: Specifies the indexes of the interfaces that you want to add to the aggregation group. Value: string, a comma-separated list of interface indexes or interface index ranges, for example, 1,2,3,5-8,10-20. The string cannot end with a comma (,), hyphen (-), or space. An interface index cannot be on the list of adding interfaces and the list of removing interfaces at the same time.
• deleteports: Specifies the indexes of the interfaces that you want to remove from the aggregation group. Value: string in the same format and with the same restrictions as addports.
• action: Specifies the action for the resource. Value: symbol. create creates or modifies an aggregation group; delete deletes an aggregation group. The default action is create.

Resource example
# Create aggregation group 16386 and set the aggregation mode to static. Add interfaces 1 through
3 to the group, and remove interface 8 from the group.
netdev_lagg 'lagg16386' do
group_id 16386
linkmode 'static'
addports '1-3'
deleteports '8'
end

netdev_vlan
Use this resource to create, modify, or delete a VLAN, or configure the name and description for the
VLAN.
Properties and action
Table 46 Properties and action for netdev_vlan

• vlan_id (index property): Specifies a VLAN ID. Value: unsigned integer, in the range of 1 to 4094.
• description: Configures the description for the VLAN. Value: string, case sensitive, 1 to 255 characters.
• vlan_name: Configures the VLAN name. Value: string, case sensitive, 1 to 32 characters.
• action: Specifies the action for the resource. Value: symbol. create creates or modifies a VLAN; delete deletes a VLAN. The default action is create.

Resource example
# Create VLAN 2, configure the description as vlan2, and configure the VLAN name as vlan2.
netdev_vlan 'vlan2' do
vlan_id 2
description 'vlan2'
vlan_name 'vlan2'
end

netdev_vsi
Use this resource to create, modify, or delete a Virtual Switch Instance (VSI).
Properties and action
Table 47 Properties and action for netdev_vsi

• vsiname (index property): Specifies a VSI name. Value: string, case sensitive, 1 to 31 characters.
• admin: Enables or disables the VSI. Value: symbol. up enables the VSI; down disables the VSI. The default value is up.
• action: Specifies the action for the resource. Value: symbol. create creates or modifies a VSI; delete deletes a VSI. The default action is create.

Resource example
# Create the VSI vsia and enable the VSI.
netdev_vsi 'vsia' do
vsiname 'vsia'
admin 'up'
end

netdev_vte
Use this resource to create or delete a tunnel.
Properties and action
Table 48 Properties and action for netdev_vte

• vte_id (index property): Specifies a tunnel ID. Value: unsigned integer.
• mode: Sets the tunnel mode. Value: unsigned integer. Available values: 1 (IPv4 GRE tunnel mode), 2 (IPv6 GRE tunnel mode), 3 (IPv4 over IPv4 tunnel mode), 4 (manual IPv6 over IPv4 tunnel mode), 8 (IPv6 over IPv6 or IPv4 tunnel mode), 16 (IPv4 IPsec tunnel mode), 17 (IPv6 IPsec tunnel mode), 24 (UDP-encapsulated IPv4 VXLAN tunnel mode), and 25 (UDP-encapsulated IPv6 VXLAN tunnel mode). You must specify the tunnel mode when creating a tunnel. After the tunnel is created, you cannot change the tunnel mode.
• action: Specifies the action for the resource. Value: symbol. create creates a tunnel; delete deletes a tunnel. The default action is create.

Resource example
# Create UDP-encapsulated IPv4 VXLAN tunnel 2.
netdev_vte 'vte2' do
vte_id 2
mode 24
end

netdev_vxlan
Use this resource to create, modify, or delete a VXLAN.

Properties and action
Table 49 Properties and action for netdev_vxlan

• vxlan_id (index property): Specifies a VXLAN ID. Value: unsigned integer, in the range of 1 to 16777215.
• vsiname: Specifies the VSI name. Value: string, case sensitive, 1 to 31 characters. You must specify the VSI name when creating a VSI. After the VSI is created, you cannot change its name.
• add_tunnels: Specifies the tunnel interfaces to be associated with the VXLAN. Value: string, a comma-separated list of tunnel interface IDs or tunnel interface ID ranges, for example, 1,2,3,5-8,10-20. The string cannot end with a comma (,), hyphen (-), or space. A tunnel interface ID cannot be on the list of adding interfaces and the list of removing interfaces at the same time.
• delete_tunnels: Removes the association between the specified tunnel interfaces and the VXLAN. Value: string in the same format and with the same restrictions as add_tunnels.
• action: Specifies the action for the resource. Value: symbol. create creates or modifies a VXLAN; delete deletes a VXLAN. The default action is create.

Resource example
# Create VXLAN 10, configure the VSI name as vsia, add tunnel interfaces 2 and 4 to the VXLAN,
and remove tunnel interfaces 1 and 3 from the VXLAN.
netdev_vxlan 'vxlan10' do
vxlan_id 10
vsiname 'vsia'
add_tunnels '2,4'
delete_tunnels '1,3'
end

Configuring Puppet
About Puppet
Puppet is an open-source configuration management tool. It provides the Puppet language. You can
use the Puppet language to create configuration manifests and save them to a server. You can then
use the server for centralized configuration enforcement and management.

Puppet network framework


Figure 114 Puppet network framework


As shown in Figure 114, Puppet operates in a client/server network framework. In the framework, the
Puppet master (server) stores configuration manifests for Puppet agents (clients). The Puppet
agents establish SSL connections to the Puppet master to obtain their respective latest
configurations.
Puppet master
The Puppet master runs the Puppet daemon process to listen to requests from Puppet agents,
authenticates Puppet agents, and sends configurations to Puppet agents on demand.
For information about installing and configuring a Puppet master, see the official Puppet website at
https://puppetlabs.com/.
Puppet agent
HPE devices support Puppet 3.7.3 agent. The following is the communication process between a
Puppet agent and the Puppet master:
1. The Puppet agent sends an authentication request to the Puppet master.
2. The Puppet agent checks with the Puppet master for the authentication result periodically
(every two minutes by default). Once the Puppet agent passes the authentication, a connection
is established to the Puppet master.
3. After the connection is established, the Puppet agent sends a request to the Puppet master
periodically (every 30 minutes by default) to obtain the latest configuration.
4. After obtaining the latest configuration, the Puppet agent compares the configuration with its
running configuration. If a difference exists, the Puppet agent overwrites its running
configuration with the newly obtained configuration.
5. After overwriting the running configuration, the Puppet agent sends a feedback to the Puppet
master.

Puppet resources
A Puppet resource is a unit of configuration. Puppet uses manifests to store resources.
Puppet manages types of resources. Each resource has a type, a title, and one or more attributes.
Every attribute has a value. The value specifies the state desired for the resource. You can specify
the state of a device by setting values for attributes regardless of how the device enters the state.
The following resource example shows how to configure a device to create VLAN 2 and configure
the description for VLAN 2.
netdev_vlan{'vlan2':
ensure => undo_shutdown,
id => 2,
description => 'sales-private',
require => Netdev_device['device'],
}

The following are the resource type and title:


• netdev_vlan—Type of the resource. The netdev_vlan type resources are used for VLAN
configuration.
• vlan2—Title of the resource. The title is the unique identifier of the resource.
The example contains the following attributes:
• ensure—Creates, modifies, or deletes a VLAN. To create a VLAN, set the attribute value to
undo_shutdown. To delete a VLAN, set the attribute value to shutdown.
• id—Specifies a VLAN by its ID. In this example, VLAN 2 is specified.
• description—Configures the description for the VLAN. In this example, the description for
VLAN 2 is sales-private.
• require—Indicates that the resource depends on another resource (specified by resource type
and title). In this example, the resource depends on a netdev_device type resource titled
device.
For information about resource types supported by Puppet, see "Puppet resources."

Restrictions and guidelines: Puppet configuration


The Puppet master cannot run a lower Puppet version than Puppet agents.

Prerequisites for Puppet


Before configuring Puppet on the device, complete the following tasks on the device:
• Enable NETCONF over SSH. The Puppet master sends configuration information to Puppet
agents through NETCONF over SSH connections. For information about NETCONF over SSH,
see "Configuring NETCONF."
• Configure SSH login. Puppet agents communicate with the Puppet master through SSH. For
information about SSH login, see Fundamentals Configuration Guide.
• For successful communication, verify that the Puppet master and agents use the same system
time. You can manually set the same system time for the Puppet master and agents or
configure them to use a time synchronization protocol such as NTP. For more information about
the time synchronization protocols, see "Configuring NTP."
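The following minimal sketch shows one way to satisfy these prerequisites from the CLI. The username, password, and VTY settings are illustrative assumptions only; see the NETCONF and SSH documents referenced above for the complete procedures.
# Enable the SSH server and NETCONF over SSH.
<Device> system-view
[Device] ssh server enable
[Device] netconf ssh server enable
# Create a local SSH user for the Puppet master to use (the name and password are examples).
[Device] local-user puppet class manage
[Device-luser-manage-puppet] password simple Puppet@12345
[Device-luser-manage-puppet] service-type ssh
[Device-luser-manage-puppet] authorization-attribute user-role network-admin
[Device-luser-manage-puppet] quit
# Require scheme authentication on the VTY lines.
[Device] line vty 0 63
[Device-line-vty0-63] authentication-mode scheme
[Device-line-vty0-63] quit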

389
Starting Puppet
Configuring resources
1. Install and configure the Puppet master.
2. Create manifests for Puppet agents on the Puppet master.
For more information, see the Puppet master installation and configuration guides.

Configuring a Puppet agent


1. Enter system view.
system-view
2. Start Puppet.
third-part-process start name puppet arg agent --certname=certname
--server=server
By default, Puppet is shut down.

Parameter Description
--certname=certname Specifies the address of the Puppet agent.

--server=server Specifies the address of the Puppet master.

After the Puppet process starts up, the Puppet agent sends an authentication request to the
Puppet master. For more information about the third-part-process start command,
see "Monitoring and maintaining processes".

Signing a certificate for the Puppet agent


To sign a certificate for the Puppet agent, execute the puppet cert sign certname command
on the Puppet master.
After the signing operation succeeds, the Puppet agent establishes a connection to the Puppet
master and requests configuration information from the Puppet master.

Shutting down Puppet on the device


Prerequisites
Execute the display process all command to identify the ID of the Puppet process. This
command displays information about all processes on the device. Check the following fields:
• THIRD—This field displays Y for a third-party process.
• PID—Process ID.
• COMMAND—This field displays puppet /opt/ruby/bin/pu for the Puppet process.
Procedure
1. Enter system view.
system-view
2. Shut down Puppet.
third-part-process stop pid pid-list

390
For more information about the third-part-process stop command, see "Monitoring
and maintaining processes".
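For example, assuming the display process all output shows that the Puppet process has PID 655 (an illustrative value), the following sketch shuts Puppet down:
<Device> display process all
<Device> system-view
[Device] third-part-process stop pid 655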

Puppet configuration examples


Example: Configuring Puppet
Network configuration
As shown in Figure 115, the device is connected to the Puppet master. Use Puppet to configure the
device to perform the following operations:
• Set the SSH login username and password to user and passwd, respectively.
• Create VLAN 3.
Figure 115 Network diagram (Puppet agent at 1.1.1.1/24 connected to the Puppet master at 1.1.1.2/24)

Procedure
1. Configure SSH login and enable NETCONF over SSH on the device. (Details not shown.)
2. On the Puppet master, create the modules/custom/manifests directory in the /etc/puppet/
directory for storing configuration manifests.
$ mkdir -p /etc/puppet/modules/custom/manifests
3. Create configuration manifest init.pp in the /etc/puppet/modules/custom/manifests directory
as follows:
netdev_device{'device':
ensure => undo_shutdown,
username => 'user',
password => 'passwd',
ipaddr => '1.1.1.1',
}
netdev_vlan{'vlan3':
ensure => undo_shutdown,
id => 3,
require => Netdev_device['device'],
}
4. Start Puppet on the device.
<PuppetAgent> system-view
[PuppetAgent] third-part-process start name puppet arg agent --certname=1.1.1.1
--server=1.1.1.2
5. Configure the Puppet master to authenticate the request from the Puppet agent.
$ puppet cert sign 1.1.1.1

After passing the authentication, the Puppet agent requests the latest configuration for it from the
Puppet master.

391
Puppet resources
netdev_device
Use this resource to specify the following items:
• Name for a Puppet agent.
• IP address, SSH username, and SSH password used by the agent to connect to a Puppet
master.
Attributes
Table 50 Attributes for netdev_device

ensure
  Description: Establishes a NETCONF connection to the Puppet master or closes the connection.
  Value type and restrictions: Symbol:
  • undo_shutdown—Establishes a NETCONF connection to the Puppet master.
  • shutdown—Closes the NETCONF connection between the Puppet agent and the Puppet master.
  • present—Establishes a NETCONF connection to the Puppet master.
  • absent—Closes the NETCONF connection between the Puppet agent and the Puppet master.

hostname
  Description: Specifies the device name.
  Value type and restrictions: String, case sensitive. Length: 1 to 64 characters.

ipaddr
  Description: Specifies an IP address.
  Value type and restrictions: String, in dotted decimal notation.

username
  Description: Specifies the username for SSH login.
  Value type and restrictions: String, case sensitive. Length: 1 to 55 characters.

password
  Description: Specifies the password for SSH login.
  Value type and restrictions: String, case sensitive. Length and form requirements in non-FIPS mode:
  • 1 to 63 characters when in plaintext form.
  • 1 to 110 characters when in hashed form.
  • 1 to 117 characters when in encrypted form.

Resource example
# Configure the device name as PuppetAgent. Specify the IP address, SSH username, and SSH
password for the agent to connect to the Puppet master as 1.1.1.1, user, and 123456, respectively.
netdev_device{'device':
ensure => undo_shutdown,
username => 'user',
password => '123456',
ipaddr => '1.1.1.1',
hostname => 'PuppetAgent'
}

392
netdev_interface
Use this resource to configure attributes for an interface.
Attributes
Table 51 Attributes for netdev_interface

ifindex (attribute type: index)
  Description: Specifies an interface by its index.
  Value type and restrictions: Unsigned integer.

ensure
  Description: Configures the attributes of the interface.
  Value type and restrictions: Symbol:
  • undo_shutdown
  • present

description
  Description: Configures the description for the interface.
  Value type and restrictions: String, case sensitive. Length: 1 to 255 characters.

admin
  Description: Specifies the management state for the interface.
  Value type and restrictions: Symbol:
  • up—Brings up the interface.
  • down—Shuts down the interface.

speed
  Description: Specifies the interface rate.
  Value type and restrictions: Symbol:
  • auto—Autonegotiation.
  • 10m—10 Mbps.
  • 100m—100 Mbps.
  • 1g—1 Gbps.
  • 2.5g—2.5 Gbps.
  • 10g—10 Gbps.
  • 40g—40 Gbps.

duplex
  Description: Sets the duplex mode.
  Value type and restrictions: Symbol:
  • full—Full-duplex mode.
  • half—Half-duplex mode.
  • auto—Autonegotiation.
  This attribute applies only to Ethernet interfaces.

linktype
  Description: Sets the link type for the interface.
  Value type and restrictions: Symbol:
  • access—Sets the link type of the interface to Access.
  • trunk—Sets the link type of the interface to Trunk.
  • hybrid—Sets the link type of the interface to Hybrid.
  This attribute applies only to Layer 2 Ethernet interfaces.

portlayer
  Description: Sets the operation mode for the interface.
  Value type and restrictions: Symbol:
  • bridge—Layer 2 mode.
  • route—Layer 3 mode.

mtu
  Description: Sets the MTU permitted by the interface.
  Value type and restrictions: Unsigned integer in bytes. The value range depends on the interface type. This attribute applies only to Layer 3 Ethernet interfaces.

Resource example
# Configure the following attributes for Ethernet interface 2:
• Interface description—puppet interface 2.
• Management state—Up.
• Interface rate—Autonegotiation.
• Duplex mode—Autonegotiation.
• Link type—Hybrid.
• Operation mode—Layer 2.
• MTU—1500 bytes.
netdev_interface{'ifindex2':
ifindex => 2,
ensure => undo_shutdown,
description => 'puppet interface 2',
admin => up,
speed => auto,
duplex => auto,
linktype => hybrid,
portlayer => bridge,
mtu => 1500,
require => Netdev_device['device'],
}

netdev_l2_interface
Use this resource to configure the VLAN attributes for a Layer 2 Ethernet interface.
Attributes
Table 52 Attributes for netdev_l2_interface

ifindex (attribute type: index)
  Description: Specifies a Layer 2 Ethernet interface by its index.
  Value type and restrictions: Unsigned integer.

ensure
  Description: Configures the attributes of the Layer 2 Ethernet interface.
  Value type and restrictions: Symbol:
  • undo_shutdown
  • present

pvid
  Description: Specifies the PVID for the interface.
  Value type and restrictions: Unsigned integer. Value range: 1 to 4094.

permit_vlan_list
  Description: Specifies the VLANs permitted by the interface.
  Value type and restrictions: String, a comma separated list of VLAN IDs or VLAN ID ranges, for example, 1,2,3,5-8,10-20. Value range for each VLAN ID: 1 to 4094. The string cannot end with a comma (,), hyphen (-), or space.

untagged_vlan_list
  Description: Specifies the VLANs from which the interface sends packets after removing VLAN tags.
  Value type and restrictions: String, a comma separated list of VLAN IDs or VLAN ID ranges, for example, 1,2,3,5-8,10-20. Value range for each VLAN ID: 1 to 4094. The string cannot end with a comma (,), hyphen (-), or space. A VLAN cannot be on the untagged list and the tagged list at the same time.

tagged_vlan_list
  Description: Specifies the VLANs from which the interface sends packets without removing VLAN tags.
  Value type and restrictions: String, a comma separated list of VLAN IDs or VLAN ID ranges, for example, 1,2,3,5-8,10-20. Value range for each VLAN ID: 1 to 4094. The string cannot end with a comma (,), hyphen (-), or space. A VLAN cannot be on the untagged list and the tagged list at the same time.

Resource example
# Specify the PVID as 2 for interface 3, and configure the interface to permit packets from VLANs 1
through 6. Configure the interface to forward packets from VLANs 1 through 3 after removing VLAN
tags and forward packets from VLANs 4 through 6 without removing VLAN tags.
netdev_l2_interface{'ifindex3':
ifindex => 3,
ensure => undo_shutdown,
pvid => 2,
permit_vlan_list => '1-6',
untagged_vlan_list => '1-3',
tagged_vlan_list => '4-6',
require => Netdev_device['device'],
}

netdev_l2vpn
Use this resource to enable or disable L2VPN.

395
Attributes
Table 53 Attributes for netdev_l2vpn

ensure
  Description: Enables or disables L2VPN.
  Value type and restrictions: Symbol:
  • enable—Enables L2VPN.
  • disable—Disables L2VPN.

Resource example
# Enable L2VPN.
netdev_l2vpn{'l2vpn':
ensure => enable,
require => Netdev_device['device'],
}

netdev_lagg
Use this resource to create, modify, or delete an aggregation group.
Attributes
Table 54 Attributes for netdev_lagg

group_id (attribute type: index)
  Description: Specifies an aggregation group ID.
  Value type and restrictions: Unsigned integer. The value range for a Layer 2 aggregation group is 1 to 1024. The value range for a Layer 3 aggregation group is 16385 to 17408.

ensure
  Description: Creates, modifies, or deletes the aggregation group.
  Value type and restrictions: Symbol:
  • present—Creates or modifies the aggregation group.
  • absent—Deletes the aggregation group.

linkmode
  Description: Specifies the aggregation mode.
  Value type and restrictions: Symbol:
  • static—Static.
  • dynamic—Dynamic.

addports
  Description: Specifies the indexes of the interfaces that you want to add to the aggregation group.
  Value type and restrictions: String, a comma separated list of interface indexes or interface index ranges, for example, 1,2,3,5-8,10-20. The string cannot end with a comma (,), hyphen (-), or space. An interface index cannot be on the list of adding interfaces and the list of removing interfaces at the same time.

deleteports
  Description: Specifies the indexes of the interfaces that you want to remove from the aggregation group.
  Value type and restrictions: String, a comma separated list of interface indexes or interface index ranges, for example, 1,2,3,5-8,10-20. The string cannot end with a comma (,), hyphen (-), or space. An interface index cannot be on the list of adding interfaces and the list of removing interfaces at the same time.

Resource example
# Add interfaces 1 and 2 to aggregation group 2, and remove interfaces 3 and 4 from the group.
netdev_lagg{ 'lagg2':
group_id => 2,
ensure => present,
addports => '1,2',
deleteports => '3,4',
require => Netdev_device['device'],
}

netdev_vlan
Use this resource to create, modify, or delete a VLAN or configure the description for the VLAN.
Attributes
Table 55 Attributes for netdev_vlan

ensure
  Description: Creates, modifies, or deletes a VLAN.
  Value type and restrictions: Symbol:
  • undo_shutdown—Creates or modifies a VLAN.
  • shutdown—Deletes a VLAN.
  • present—Creates or modifies a VLAN.
  • absent—Deletes a VLAN.

id (attribute type: index)
  Description: Specifies the VLAN ID.
  Value type and restrictions: Unsigned integer. Value range: 1 to 4094.

description
  Description: Configures the description for the VLAN.
  Value type and restrictions: String, case sensitive. Length: 1 to 255 characters.

Resource example
# Create VLAN 2, and configure the description as sales-private for VLAN 2.
netdev_vlan{'vlan2':
ensure => undo_shutdown,
id => 2,

397
description => 'sales-private',
require => Netdev_device['device'],
}

netdev_vsi
Use this resource to create, modify, or delete a Virtual Switch Instance (VSI).
Attributes
Table 56 Attributes for netdev_vsi

vsiname (attribute type: index)
  Description: Specifies a VSI name.
  Value type and restrictions: String, case sensitive. Length: 1 to 31 characters.

ensure
  Description: Creates, modifies, or deletes the VSI.
  Value type and restrictions: Symbol:
  • present—Creates or modifies the VSI.
  • absent—Deletes the VSI.

description
  Description: Configures the description for the VSI.
  Value type and restrictions: String, case sensitive. Length: 1 to 80 characters.

Resource example
# Create the VSI vsia.
netdev_vsi{'vsia':
ensure => present,
vsiname => 'vsia',
require => Netdev_device['device'],
}

netdev_vte
Use this resource to create or delete a tunnel.
Attributes
Table 57 Attributes for netdev_vte

id (attribute type: index)
  Description: Specifies a tunnel ID.
  Value type and restrictions: Unsigned integer.

ensure
  Description: Creates or deletes the tunnel.
  Value type and restrictions: Symbol:
  • present—Creates the tunnel.
  • absent—Deletes the tunnel.

mode
  Description: Sets the tunnel mode.
  Value type and restrictions: Unsigned integer:
  • 1—IPv4 GRE tunnel mode.
  • 2—IPv6 GRE tunnel mode.
  • 3—IPv4 over IPv4 tunnel mode.
  • 4—Manual IPv6 over IPv4 tunnel mode.
  • 8—IPv6 or IPv4 over IPv6 tunnel mode.
  • 16—IPv4 IPsec tunnel mode.
  • 17—IPv6 IPsec tunnel mode.
  • 24—UDP-encapsulated IPv4 VXLAN tunnel mode.
  • 25—UDP-encapsulated IPv6 VXLAN tunnel mode.
  You must specify the tunnel mode when creating a tunnel. After the tunnel is created, you cannot change the tunnel mode.

Resource example
# Create UDP-encapsulated IPv4 VXLAN tunnel 2.
netdev_vte{'vte2':
ensure => present,
id => 2,
mode => 24,
require => Netdev_device['device'],
}

netdev_vxlan
Use this resource to create, modify, or delete a VXLAN.

399
Attributes
Table 58 Attributes for netdev_vxlan

vxlan_id (attribute type: index)
  Description: Specifies a VXLAN ID.
  Value type and restrictions: Unsigned integer. Value range: 1 to 16777215.

ensure
  Description: Creates or deletes the VXLAN.
  Value type and restrictions: Symbol:
  • present—Creates or modifies the VXLAN.
  • absent—Deletes the VXLAN.

vsiname
  Description: Specifies the VSI name.
  Value type and restrictions: String, case sensitive. Length: 1 to 31 characters. You must specify the VSI name when creating a VSI. After the VSI is created, you cannot change the name.

add_tunnels
  Description: Specifies the tunnel interfaces to be associated with the VXLAN.
  Value type and restrictions: String, a comma separated list of tunnel interface IDs or tunnel interface ID ranges, for example, 1,2,3,5-8,10-20. The string cannot end with a comma (,), hyphen (-), or space. A tunnel interface ID cannot be on the list of adding interfaces and the list of removing interfaces at the same time.

delete_tunnels
  Description: Removes the association between the specified tunnel interfaces and the VXLAN.
  Value type and restrictions: String, a comma separated list of tunnel interface IDs or tunnel interface ID ranges, for example, 1,2,3,5-8,10-20. The string cannot end with a comma (,), hyphen (-), or space. A tunnel interface ID cannot be on the list of adding interfaces and the list of removing interfaces at the same time.

Resource example
# Create VXLAN 10, configure the VSI name as vsia, and associate tunnel interfaces 7 and 8 with
VXLAN 10.
netdev_vxlan{'vxlan10':
ensure => present,
vxlan_id => 10,
vsiname => 'vsia',
add_tunnels => '7-8',
require=>Netdev_device['device'],
}

400
Configuring EPS agent
About EPS and EPS agent
HPE iMC Endpoints Profiling System (EPS) detects, identifies, and monitors network-wide endpoints,
including cameras, PCs, servers, switches, routers, firewalls, APs, printers, and ATMs. New
endpoints and abnormal endpoints can be timely discovered and identified.
As shown in Figure 116, an EPS system contains the EPS server, scanners, and endpoints.
Figure 116 EPS network architecture

EPS server
The EPS server is a server installed with the iMC EPS component. It acts as the console to assign
instructions to scanners, receive scan results, compare scan results with baselines, and support
change auditing.
Scanner
A scanner is a device installed with iMC EScan and enabled with EPS agent. The scanner scans and
manages endpoints by following the instruction of the EPS server and reports scan results to the
EPS server.
EPS agent is based on the Comware platform and integrates the EPS scanner function. This chapter
describes how to configure EPS agent.
Endpoint
An endpoint refers to a device scanned by a scanner. EPS supports user-defined endpoint types and
the following system-defined endpoint types: camera, PC, server, switch, router, firewall, AP, printer,
ATM, unknown, and other.

Restrictions: Hardware compatibility with EPS agent
The device does not support scanning endpoints connected to the gateway. If the console assigns a
gateway-based scan task, no endpoints will be scanned.

401
Configuring EPS agent
1. Enter system view.
system-view
2. Configure EPS server parameters.
eps server ip-address port port-number [ key { cipher | simple }
key-value ] [ log-level level-value ] [ use-server-setting ]
By default, EPS server parameters are not configured.
3. Enable EPS agent.
eps agent enable
By default, EPS agent is disabled.
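The following minimal sketch brings up the EPS agent against a server at 192.168.2.10 on port 9099 with a plaintext key of eps12345. All of these values are illustrative assumptions; replace them with the parameters of your EPS server.
<Device> system-view
[Device] eps server 192.168.2.10 port 9099 key simple eps12345
[Device] eps agent enable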

Display and maintenance commands for EPS agent
Execute display commands in any view.

Task Command
Display scanned endpoint information. display eps agent discovery

EPS agent configuration examples


Example: Configuring EPS agent
Network configuration
As shown in Figure 117, configure
the scanner to enable it to scan endpoints based on the instruction of the EPS server.

402
Figure 117 Network diagram (the scanner reaches the EPS server at 20.1.1.2/24 across the Internet through GE1/0/2 at 20.1.1.1/24, and connects to the endpoints through GE1/0/1 at 10.1.1.1/24)

Procedure
# Specify 20.1.1.2 and 4500 as the IP address and port number of the EPS server, respectively.
<Sysname> system-view
[Sysname] eps server 20.1.1.2 port 4500

# Enable EPS agent.


[Sysname] eps agent enable

Verifying the configuration


# Verify that the scanned information is displayed after the EPS server issues a scan instruction.
[Sysname] display eps agent discovery
IP-Address MAC-Address OS-Type OS-ver Vendor Product
100.100.100.100 cc:ef:48:9b:5f:c1 IOS 12.X Cisco C3560E

403
Configuring eMDI
About eMDI
Enhanced Media Delivery Index (eMDI) is a solution to audio and video service quality monitoring
and fault locating. It is intended to solve problems caused by packet loss, packet sequence errors,
and jitters.
eMDI monitors and analyzes specific TCP or RTP packets on each node of an IP network in real time,
providing data for you to quickly locate network faults.

eMDI implementation
The eMDI feature is implemented on the basis of eMDI instances. Each eMDI instance monitors one
data flow as defined by its parameters. It inspects fields of packets, periodically collects statistics,
and reports to the NMS.
Figure 118 shows the eMDI deployment and operating process.
1. Upon service quality deterioration, you can obtain the data flow characteristics from the
headers of interested packets, such as the source and destination IP addresses.
2. You deploy eMDI on each device that the data flow traverses by using the NMS or CLI.
3. The devices monitor the data flows, collect statistics, and report monitoring data to the NMS.
4. The NMS collects and analyzes the monitoring data and displays the analysis results
graphically.
5. Based on the analysis results, you can quickly locate the network faults.

404
Figure 118 eMDI network diagram (the NMS issues eMDI settings to, and receives eMDI data reports from, the core, aggregation, and access devices that the video data flow traverses on the IP network toward User A and User B)

Monitored items
Table 59 and Table 60 list the monitored items for TCP packets and UDP packets. RTP packets are
carried in UDP packets.
Table 59 Monitored items for TCP packets

Rate
  Description: Packet input rate, in bps or pps.
  Calculation formula:
  • Rate (bps) = Total length of packets received within a monitoring period / Monitoring period
  • Rate (pps) = Number of packets received within a monitoring period / Monitoring period
  Remarks: This item is intended for unidirectional flows.

UPLR
  Description: Upstream packet loss rate, in units of one hundred thousandth.
  Calculation formula: UPLR = (Number of lost upstream packets / (Number of received packets + Number of lost upstream packets)) / 100000
  Remarks: If no packets are lost, the sequence number of the current packet plus the length of the current packet is the expected sequence number of the next packet. If the sequence number of the next packet is greater than the expected sequence number, packet loss has occurred on an upstream device. The number of lost packets can be calculated based on the average packet size. This item is intended for unidirectional flows.

DPLR
  Description: Downstream packet loss rate, in units of one hundred thousandth.
  Calculation formula:
  • Number of lost downstream packets = Total number of lost packets – Number of lost upstream packets
  • DPLR = (Number of lost downstream packets / Number of received packets) / 100000
  Remarks: If no packets are lost, the sequence number of the current packet plus the length of the current packet is the expected sequence number of the next packet. If the sequence number of the next packet is smaller than the expected sequence number, the packet is considered to be a retransmitted packet. The number of retransmitted packets is considered to be the total number of lost packets. This item is intended for unidirectional flows.

DRTT
  Description: Average downstream round trip time, in microseconds.
  Calculation formula: DRTT = T2 – T1
  Remarks: The timestamp for a received non-retransmitted packet is recorded as T1. The sequence number of the packet plus the length of the packet is the expected sequence number of the next packet. If the sequence number of an ACK packet received from a downstream device is greater than or equal to the expected number, the timestamp of the ACK packet is recorded as T2. DRTT is equal to the average value of the differences between the T2 values and the corresponding T1 values. This item is intended for bidirectional flows.

URTT
  Description: Average upstream round trip time, in microseconds.
  Calculation formula: URTT = T2 – T1
  Remarks: T1 indicates the timestamp when the device finds a SYN packet sent by the client for a connection. T2 indicates the timestamp when the device finds a SYN ACK packet returned by the server. Because URTT calculation depends on TCP control packets, it might not be calculated within a monitoring period. This item is intended for bidirectional flows.

Table 60 Monitored items for UDP packets

RTP-LR
  Description: RTP packet loss rate, in units of one hundred thousandth.
  Calculation formula: RTP-LR = (Number of lost packets / (Number of received packets + Number of lost packets – Number of out-of-order packets)) / 100000
  Remarks: If the difference between the sequence number of the current RTP packet and the greatest sequence number of earlier RTP packets is greater than 1, packet loss has occurred. If the sequence number of the current RTP packet is smaller than the sequence numbers of all earlier RTP packets, the packet is an out-of-order packet. This item is intended for unidirectional flows.

RTP-SER
  Description: RTP packet sequence error rate, in units of one hundred thousandth.
  Calculation formula: RTP-SER = (Number of out-of-order packets / (Number of received packets + Number of lost packets – Number of out-of-order packets)) / 100000
  Remarks: This item is intended for unidirectional flows.

Jitter
  Description: RTP packet jitter, in microseconds.
  Calculation formula: Jitter = T2 – T1
  Remarks: T1 represents the interval between the time when the sender sends the first packet and the time when the sender sends the last packet. T2 represents the interval between the time when the receiver receives the first packet and the time when the receiver receives the last packet. This item is intended for unidirectional flows.

RTP-LP
  Description: Maximum number of continuously lost RTP packets.
  Calculation formula: The RTP-LP value uses the maximum number of continuously lost RTP packets within a monitoring period.
  Remarks: On a service provider network with the retransmission threshold set, you can compare the RTP-LP value with the retransmission threshold to identify the existence of faults. The retransmission mechanism can address loss of a small number of packets. The greater the RTP-LP value is, the less the retransmission mechanism can help. This item is intended for unidirectional flows.

RTP-ELF
  Description: RTP effective loss factor for Forward Error Correction (FEC), in units of one hundred thousandth.
  Calculation formula: The RTP-ELF value is calculated based on the FEC sliding window and packet loss threshold. After each window slide, the system identifies whether the number of lost data packets in the window within the monitoring period is greater than the threshold. If yes, the system increases RTP-ELF by 1. The window sliding stops when the monitoring period expires.
  Remarks: If the total number of packet loss windows is greater than 0 within a monitoring period, a video fault exists.

Restrictions and guidelines


eMDI is required on each node that monitored data flows traverse.
An eMDI instance can monitor only one data flow. The data flows for different eMDI instances cannot
be the same or overlapped.
The following data flows are overlapped:
• flow ipv4 tcp source 1.1.1.1 destination 2.2.2.2
• flow ipv4 tcp source 1.1.1.1 destination 2.2.2.2 destination-port 20
The following data flows are not overlapped:
• flow ipv4 tcp source 1.1.1.1 destination 2.2.2.2 destination-port 10
• flow ipv4 tcp source 1.1.1.1 destination 2.2.2.2 destination-port 20
The clock-rate keyword for the flow ipv4 udp command does not result in UDP flow
overlapping.
A started eMDI instance automatically stops upon a master/subordinate or active/standby switchover.
You must execute the start command again to start the instance.
To modify the configuration for an eMDI instance that is already started, stop the instance first by
executing the stop command.
Incomplete eMDI data might cause deviations in the monitoring results. The eMDI data is incomplete
in the following situations:
• Not all packets of the monitored data flow pass through the device.

408
• All packets of the monitored data flow pass through the device, but not all of the packets are
forwarded by the same member device.

Prerequisites
Identify the characteristics of the data flows to be monitored so you can specify data flows for eMDI
instances.

Configure eMDI
Configuring an eMDI instance
1. Enter system view.
system-view
2. Enable eMDI and enter its view.
emdi
By default, eMDI is disabled.
3. Create an eMDI instance and enter its view.
instance instance-name
By default, no eMDI instance exists.
4. (Optional.) Configure a description for the eMDI instance.
description text
By default, an eMDI instance does not have a description.
5. Specify the data flow to be monitored by using the eMDI instance.
Choose one option as needed:
 Specify a TCP data flow.
flow ipv4 tcp source source-ip-address destination
destination-ip-address [ destination-port
destination-port-number | source-port source-port-number | vlan
vlan-id | vni vxlan-id ] *
 Specify a UDP data flow.
flow ipv4 udp source source-ip-address destination
destination-ip-address [ destination-port
destination-port-number | source-port source-port-number | vlan
vlan-id | vni vxlan-id | pt pt-value | clock-rate clock-rate-value ]
*
By default, no data flow is specified for an eMDI instance.
Specifying more options defines a more granular data flow and provides more helpful
monitoring data and analysis results. For example, to monitor the data from 10.0.0.1 to
10.0.1.1/100, specify the destination port number. If you do not specify the destination port
number, the instance monitors the flow that the first received matching packet represents,
which might cause deviations on the monitoring results.
6. (Optional.) Set the alarm thresholds for the eMDI instance.
alarm { dplr | rtp-lr | rtp-ser | uplr } threshold threshold-value
By default, the alarm thresholds for an eMDI instance are all 100.
A TCP data flow supports only the dplr and uplr keywords and a UDP data flow supports
only the rtp-lr and rtp-ser keywords.

409
7. (Optional.) Set the eMDI alarm logging threshold, that is, the number of consecutive alarms to
be suppressed before logging the event.
alarm suppression times times-value
By default, the eMDI alarm logging threshold is 3.
8. (Optional.) Specify the sliding window size and packet loss threshold for the UDP data flow to
be monitored.
fec { window window-size | threshold threshold-value } *
By default, the sliding window size and packet loss threshold for the UDP data flow are 100 and 5,
respectively.
9. (Optional.) Specify the eMDI instance lifetime.
lifetime { seconds seconds | minutes minutes | hours hours | days days }
By default, the eMDI instance lifetime is 1 hour.
10. (Optional.) Specify the monitoring period for the eMDI instance.
monitor-period period-value
By default, the monitoring period for an eMDI instance is 60 seconds.
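The following minimal sketch pulls these steps together for a TCP data flow. The instance name, addresses, port number, thresholds, and timers are illustrative assumptions rather than recommended values.
<Device> system-view
[Device] emdi
[Device-emdi] instance web-flow
[Device-emdi-instance-web-flow] flow ipv4 tcp source 10.0.0.1 destination 10.0.1.1 destination-port 80
[Device-emdi-instance-web-flow] alarm uplr threshold 200
[Device-emdi-instance-web-flow] alarm dplr threshold 200
[Device-emdi-instance-web-flow] monitor-period 30
[Device-emdi-instance-web-flow] lifetime hours 2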

Starting an eMDI instance


1. Enter system view.
system-view
2. Enter eMDI view.
emdi
3. Enter eMDI instance view.
instance instance-name
4. Start the eMDI instance.
start

Stopping an eMDI instance


To stop a single eMDI instance:
1. Enter system view.
system-view
2. Enter eMDI view.
emdi
3. Enter eMDI instance view.
instance instance-name
4. Stop the eMDI instance.
stop
To stop all eMDI instances:
1. Enter system view.
system-view
2. Enter eMDI view.
emdi
3. Stop all eMDI instances.
stop all

410
Display and maintenance commands for eMDI
Execute display commands in any view and reset commands in user view.

Task: Display eMDI instance information.
Command: display emdi instance [ name instance-name | id instance-id ] [ verbose ]

Task: Display eMDI instance resource information.
Command: display emdi resource

Task: Display monitoring statistics for an eMDI instance.
Command: display emdi statistics { instance-name instance-name | instance-id instance-id } [ number number ] [ abnormal | verbose ]

Task: Clear eMDI statistics.
Command: reset emdi statistics [ instance-name instance-name | instance-id instance-id-value ]

eMDI configuration examples


Example: Configuring an eMDI instance for a UDP data flow
Network configuration
As shown in Figure 119, the camera sends an RTP video data flow to the server through Device A,
Device B, and Device C. The server is experiencing an erratic video display problem. Configure
eMDI on Device A, Device B, and Device C to locate the faulty link or device.
Figure 119 Network diagram (the camera at 192.168.1.2/24 sends video to the server at 10.1.1.2/24 through Device A, Device B, and Device C in sequence)

Prerequisites
Assign IP addresses to the devices. Make sure all devices can reach each other.
Procedure
1. Configure Device A:
# Enable eMDI.
<DeviceA> system-view
[DeviceA] emdi
# Create an eMDI instance named as test.
[DeviceA-emdi] instance test
# Specify the UDP data flow for the instance to monitor. Set the source and destination IP
addresses to 192.168.1.2 and 10.1.1.2, respectively.
[DeviceA-emdi-instance-test] flow ipv4 udp source 192.168.1.2 destination 10.1.1.2
# Set the monitoring period to 10 seconds.
[DeviceA-emdi-instance-test] monitor-period 10

411
# Set the eMDI instance lifetime to 30 minutes.
[DeviceA-emdi-instance-test] lifetime minutes 30
# Start the eMDI instance.
[DeviceA-emdi-instance-test] start
2. Configure Device B and Device C as Device A is configured. (Details not shown.)
Verifying the configuration
After the eMDI instance runs for a while, display brief monitoring statistics for the eMDI instance on
each device to locate the fault.
# Display brief monitoring statistics for eMDI instance test on Device A.
<DeviceA> display emdi statistics instance-name test
Instance name : test
Instance ID : 1
Monitoring period: 10 sec
Protocol : UDP

Unit for RTP-LR, RTP-SER and RTP-ELF is 1/100000


Timestamp Status RTP-LR RTP-SER Jitter(us) RTP-LP RTP-ELF
2019/09/17 16:17:20 Normal 0 0 2560 0 0
2019/09/17 16:17:10 Abnormal 50000 0 2459 1 100000
2019/09/17 16:17:00 Abnormal 12634 33333 5236 3 23356

412
Configuring SQA
About SQA
Service quality analysis (SQA) enables the device to identify SIP-based multimedia traffic and
preferentially forward this traffic to ensure multimedia service quality. Use this feature to decrease
multimedia stuttering and improve user experience.

Restrictions and guidelines: SQA configuration


SQA does not support analysis of encrypted SIP protocol packets.

Configuring SQA
1. Enter system view.
system-view
2. Enter SQA view.
sqa
3. Enable SQA.
sqa-sip enable
By default, SQA is disabled.
4. Specify the SIP listening port number.
sqa-sip port port-number
By default, the SIP listening port number is 5060.
Make sure the SIP listening port number on the device is the same as that on the SIP server.
5. (Optional.) Specify an IP address range for SQA.
sqa-sip filter address start-address end-address
By default, no IP address range is specified for SQA. The device performs SQA on all SIP
packets.
After this command is executed, the device performs SQA only on SIP calls in the specified IP
address range.
If you execute this command multiple times, the most recent configuration takes effect.

Display and maintenance commands for SQA


Execute display commands in any view.

Task: Display SIP call information.
Command: display sqa sip call [ [ call-id call-id ] verbose ]

Task: Display SIP call statistics.
Command: display sqa sip call-statistics

413
SQA configuration examples
Example: Configuring SQA
Network configuration
As shown in Figure 120, the SIP server installed with third-party VoIP server software acts as both
the SIP proxy server and SIP registrar to manage SIP UA registration and SIP calls. Host A and Host
B installed with third-party client software can place calls to each other.
Configure SQA on Device A, Device B, and Device C to optimize multimedia traffic of SIP calls to
provide high-quality multimedia services.
Figure 120 Network diagram (Device A and the SIP server attach to the IP network; PC A at 6.159.107.200/16 connects through Device B, and PC B at 6.159.108.44/16 connects through Device C)

Prerequisites
Assign IP addresses to interfaces, and make sure the devices can reach each other.
Procedure
1. Configure the SIP server:
Install third-party VoIP server software, register UAs, and specify the usernames (phone
numbers) and passwords for the clients. (Details not shown.)
2. Configure PC A and PC B:
Install third-party VoIP client software, specify the IP address of the SIP server, and configure
the username (phone number), password, and other parameters on each PC. (Details not
shown.)
3. Configure Device A:
# Enable SQA.
<DeviceA> system-view
[DeviceA] sqa
[DeviceA-sqa] sqa-sip enable
# Specify 5066 as the SIP listening port number. (Make sure this SIP listening port number is
the same as that on the SIP server.)
[DeviceA-sqa] sqa-sip port 5066
# Specify an IP address range for SQA. (Make sure Host A's IP address is included in the
range.)
[DeviceA-sqa] sqa-sip filter address 6.159.107.200 6.159.107.200
[DeviceA-sqa] quit

414
4. Configure Device B and Device C:
# Configure Device B and Device C in the same way as Device A is configured. (Details not
shown.)
Verifying the configuration
# Place a call from Host A to Host B. (Details not shown.)
# Display brief information about all SIP calls on Device A.
<DeviceA> display sqa sip call
Caller Callee CallId
6.159.107.207:49172 6.159.108.51:52410 3101326658
6.159.107.203:49172 6.159.108.47:52410 4530332933
6.159.107.208:49172 6.159.108.52:52410 4445702693
6.159.107.206:49172 6.159.108.50:52410 8263542841
6.159.107.201:49172 6.159.108.45:52410 4752123310
6.159.107.200:49172 6.159.108.44:52410 99462146

The output shows that Device A has performed SQA on the call that Host A placed to Host B.
# Display detailed information about the SIP call that Host A placed to Host B.
<DeviceA> display sqa sip call call-id 99462146 verbose
Call ID: 99462146
Caller information:
IP: 6.159.107.200 Port: 49172 MAC: a036-9fd4-b5bd
Tag: [email protected]
Callee information:
IP: 6.159.108.44 Port: 52410 MAC: a036-9fd4-b5bc
Tag: [email protected]
Type: Audio VLAN: 0 StartTime: 2019-7-14 15:40:39
MOS(F): 88 MOS(R): 86
Forward flow (octet): 4793344 Forward flow (packet): 4681
Reverse flow (octet): 3810304 Reverse flow (packet): 3721

The output shows that the MOS value for both forward and reverse traffic is high, which indicates the
quality of the audio service is high.

415
Document conventions and icons
Conventions
This section describes the conventions used in the documentation.
Command conventions

Convention Description
Boldface Bold text represents commands and keywords that you enter literally as shown.
Italic Italic text represents arguments that you replace with actual values.

[] Square brackets enclose syntax choices (keywords or arguments) that are optional.
{ x | y | ... }    Braces enclose a set of required syntax choices separated by vertical bars, from which you select one.

[ x | y | ... ]    Square brackets enclose a set of optional syntax choices separated by vertical bars, from which you select one or none.

{ x | y | ... } *  Asterisk marked braces enclose a set of required syntax choices separated by vertical bars, from which you select at least one.

[ x | y | ... ] *  Asterisk marked square brackets enclose optional syntax choices separated by vertical bars, from which you select one choice, multiple choices, or none.

&<1-n>             The argument or keyword and argument combination before the ampersand (&) sign can be entered 1 to n times.

#                  A line that starts with a pound (#) sign is a comment.

GUI conventions

Convention Description
Boldface    Window names, button names, field names, and menu items are in Boldface. For example, the New User window opens; click OK.

>           Multi-level menus are separated by angle brackets. For example, File > Create > Folder.

Symbols

Convention Description
WARNING!    An alert that calls attention to important information that if not understood or followed can result in personal injury.

CAUTION:    An alert that calls attention to important information that if not understood or followed can result in data loss, data corruption, or damage to hardware or software.

IMPORTANT: An alert that calls attention to essential information.

NOTE: An alert that contains additional or supplementary information.

TIP: An alert that provides helpful information.

416
Network topology icons
Convention Description

• Represents a generic network device, such as a router, switch, or firewall.
• Represents a routing-capable device, such as a router or Layer 3 switch.
• Represents a generic switch, such as a Layer 2 or Layer 3 switch, or a router that supports Layer 2 forwarding and other Layer 2 features.
• Represents an access controller, a unified wired-WLAN module, or the access controller engine on a unified wired-WLAN switch.
• Represents an access point.
• Represents a wireless terminator unit.
• Represents a wireless terminator.
• Represents a mesh access point.
• Represents omnidirectional signals.
• Represents directional signals.
• Represents a security product, such as a firewall, UTM, multiservice security gateway, or load balancing device.
• Represents a security module, such as a firewall, load balancing, NetStream, SSL VPN, IPS, or ACG module.

Examples provided in this document


Examples in this document might use devices that differ from your device in hardware model,
configuration, or software version. It is normal that the port numbers, sample output, screenshots,
and other information in the examples differ from what you have on your device.

417
Support and other resources
Accessing Hewlett Packard Enterprise Support
• For live assistance, go to the Contact Hewlett Packard Enterprise Worldwide website:
www.hpe.com/assistance
• To access documentation and support services, go to the Hewlett Packard Enterprise Support
Center website:
www.hpe.com/support/hpesc
Information to collect
• Technical support registration number (if applicable)
• Product name, model or version, and serial number
• Operating system name and version
• Firmware version
• Error messages
• Product-specific reports and logs
• Add-on products or components
• Third-party products or components

Accessing updates
• Some software products provide a mechanism for accessing software updates through the
product interface. Review your product documentation to identify the recommended software
update method.
• To download product updates, go to either of the following:
 Hewlett Packard Enterprise Support Center Get connected with updates page:
www.hpe.com/support/e-updates
 Software Depot website:
www.hpe.com/support/softwaredepot
• To view and update your entitlements, and to link your contracts, Care Packs, and warranties
with your profile, go to the Hewlett Packard Enterprise Support Center More Information on
Access to Support Materials page:
www.hpe.com/support/AccessToSupportMaterials

IMPORTANT:
Access to some updates might require product entitlement when accessed through the Hewlett
Packard Enterprise Support Center. You must have an HP Passport set up with relevant
entitlements.

418
Websites
Website Link
Networking websites

Hewlett Packard Enterprise Information Library for Networking    www.hpe.com/networking/resourcefinder
Hewlett Packard Enterprise Networking website www.hpe.com/info/networking
Hewlett Packard Enterprise My Networking website www.hpe.com/networking/support
Hewlett Packard Enterprise My Networking Portal www.hpe.com/networking/mynetworking
Hewlett Packard Enterprise Networking Warranty www.hpe.com/networking/warranty
General websites

Hewlett Packard Enterprise Information Library www.hpe.com/info/enterprise/docs


Hewlett Packard Enterprise Support Center www.hpe.com/support/hpesc
Hewlett Packard Enterprise Support Services Central ssc.hpe.com/portal/site/ssc/
Contact Hewlett Packard Enterprise Worldwide www.hpe.com/assistance
Subscription Service/Support Alerts www.hpe.com/support/e-updates
Software Depot www.hpe.com/support/softwaredepot
Customer Self Repair (not applicable to all devices) www.hpe.com/support/selfrepair
Insight Remote Support (not applicable to all devices) www.hpe.com/info/insightremotesupport/docs

Customer self repair


Hewlett Packard Enterprise customer self repair (CSR) programs allow you to repair your product. If
a CSR part needs to be replaced, it will be shipped directly to you so that you can install it at your
convenience. Some parts do not qualify for CSR. Your Hewlett Packard Enterprise authorized
service provider will determine whether a repair can be accomplished by CSR.
For more information about CSR, contact your local service provider or go to the CSR website:
www.hpe.com/support/selfrepair

Remote support
Remote support is available with supported devices as part of your warranty, Care Pack Service, or
contractual support agreement. It provides intelligent event diagnosis, and automatic, secure
submission of hardware event notifications to Hewlett Packard Enterprise, which will initiate a fast
and accurate resolution based on your product’s service level. Hewlett Packard Enterprise strongly
recommends that you register your device for remote support.
For more information and device support details, go to the following website:
www.hpe.com/info/insightremotesupport/docs

Documentation feedback
Hewlett Packard Enterprise is committed to providing documentation that meets your needs. To help
us improve the documentation, send any errors, suggestions, or comments to Documentation
Feedback ([email protected]). When submitting your feedback, include the document title,

419
part number, edition, and publication date located on the front cover of the document. For online help
content, include the product name, product version, help edition, and publication date located on the
legal notices page.

420
Index
A port mirroring monitor port to remote probe VLAN,
298
access control
associating
SNMP MIB, 155
IPv6 NTP client/server association mode, 121
SNMP view-based MIB, 155
IPv6 NTP multicast association mode, 130
accessing
IPv6 NTP symmetric active/passive association
NTP access control, 105
mode, 124
SNMP access control mode, 156
NTP association mode, 108
ACS
NTP broadcast association mode, 104, 109, 125
CWMP ACS-CPE autoconnect, 253
NTP broadcast association mode+authentication,
action 135
Event MIB notification, 181 NTP client/server association mode, 104, 108,
Event MIB set, 181 120
address NTP client/server association
ping address reachability determination, 2 mode+authentication, 133
agent NTP multicast association mode, 104, 110, 127
sFlow agent+collector information NTP symmetric active/passive association mode,
configuration, 316 104, 108, 123
aggregation group attribute
Chef resources (netdev_lagg), 383 NETCONF session attribute, 200
Puppet resources (netdev_lagg), 396 authenticating
alarm CWMP CPE ACS authentication, 258
RMON alarm configuration, 174, 177 NTP, 106
RMON alarm group sample types, 173 NTP broadcast authentication, 114
RMON configuration, 171, 176 NTP broadcast mode+authentication, 135
RMON group, 172 NTP client/server mode authentication, 111
RMON private group, 172 NTP client/server mode+authentication, 133
application scenarios NTP configuration, 111
iNQA, 81 NTP multicast authentication, 116
applying NTP security, 105
flow mirroring QoS policy, 313 NTP symmetric active/passive mode
flow mirroring QoS policy (global), 313 authentication, 113
flow mirroring QoS policy (interface), 313 SNTP authentication, 139
flow mirroring QoS policy (VLAN), 313 auto
PoE profile, 150 CWMP ACS-CPE autoconnect, 253
architecture VCF fabric automated deployment, 358
NTP, 103 VCF fabric automated deployment process, 359
arithmetic VCF fabric automated underlay network
packet capture filter configuration (expr relop deployment configuration, 362, 364
expr expression), 345 autoconfiguration server (ACS)
packet capture filter configuration (proto CWMP, 251
[ exprsize ] expression), 345 CWMP ACS authentication parameters, 258
packet capture filter operator, 343 CWMP attribute configuration, 256
assigning CWMP attribute type (default)(CLI), 257
CWMP ACS attribute (preferred)(CLI), 257 CWMP attributes (preferred), 256
CWMP ACS attribute (preferred)(DHCP CWMP autoconnect parameters, 260
server), 256 CWMP CPE ACS provision code, 259
CWMP CPE connection interface, 259

421
HTTPS SSL client policy, 258 resources (netdev_lagg), 383
automated overlay network deployment resources (netdev_vlan), 384
border node configuration, 367 resources (netdev_vsi), 385
L2 agent, 366 resources (netdev_vte), 386
L3 agent, 367 resources (netdev_vxlan), 386
local proxy ARP, 368 server configuration, 376
MAC address of VSI interfaces, 368 shutdown, 377
network type specifying, 366 start, 376
RabbiMQ server communication parameters, workstation configuration, 376
365 classifying
automated underlay network deployment port mirroring classification, 290
pausing deployment, 363 CLI
automtaed underlay network deploying EAA configuration, 270, 277
template file, 359 EAA event monitor policy configuration, 278
B EAA monitor policy configuration
(CLI-defined+environment variables), 281
basic concepts
NETCONF CLI operations, 231, 232
iNQA, 79
NETCONF return to CLI, 239
benefits
client
iNQA, 79
Chef client configuration, 376
bidirectional
NQA client history record save, 30
port mirroring, 289
NQA client operation (DHCP), 12
Boolean
NQA client operation (DLSw), 24
Event MIB trigger test, 180
NQA client operation (DNS), 13
Event MIB trigger test configuration, 191
NQA client operation (FTP), 14
broadcast
NQA client operation (HTTP), 15
NTP association mode, 125
NQA client operation (ICMP echo), 10
NTP broadcast association mode, 104, 109,
NQA client operation (ICMP jitter), 11
114
NQA client operation (path jitter), 24
NTP broadcast association
mode+authentication, 135 NQA client operation (SNMP), 18
NTP broadcast mode dynamic associations NQA client operation (TCP), 18
max, 118 NQA client operation (UDP echo), 19
building NQA client operation (UDP jitter), 16
packet capture display filter, 346, 348 NQA client operation (UDP tracert), 20
packet capture filter, 342, 345 NQA client operation (voice), 22
NQA client operation scheduling, 31
C
NQA client statistics collection, 29
capturing NQA client template, 31
packet capture configuration, 342, 350 NQA client template (DNS), 33
packet capture configuration (feature NQA client template (FTP), 41
image-based), 350
NQA client template (HTTP), 38
Chef
NQA client template (HTTPS), 39
client configuration, 376
NQA client template (ICMP), 32
configuration, 373, 377, 377
NQA client template (RADIUS), 42
configuration file, 374
NQA client template (SSL), 43
network framework, 373
NQA client template (TCP half open), 35
resources, 374, 380
NQA client template (TCP), 34
resources (netdev_device), 380
NQA client template (UDP), 36
resources (netdev_interface), 380
NQA client template optional parameters, 44
resources (netdev_l2_interface), 382
NQA client threshold monitoring, 8, 27
resources (netdev_l2vpn), 383

422
NQA client+Track collaboration, 27 packet capture display filter operator, 347
NQA collaboration configuration, 68 packet capture filter operator, 343
NQA enable, 9 conditional match
NQA operation, 9 NETCONF data filtering, 219
NQA operation configuration (DHCP), 50 NETCONF data filtering (column-based), 216
NQA operation configuration (DLSw), 65 configuration
NQA operation configuration (DNS), 51 NETCONF configuration modification, 223
NQA operation configuration (FTP), 52 configuration file
NQA operation configuration (HTTP), 53 Chef configuration file, 374
NQA operation configuration (ICMP echo), 46 configuration management
NQA operation configuration (ICMP jitter), 47 Chef configuration, 373, 377, 377
NQA operation configuration (path jitter), 66 Puppet configuration, 388, 391, 391
NQA operation configuration (SNMP), 57 configure
NQA operation configuration (TCP), 58 RabbiMQ server communication parameters, 365
NQA operation configuration (UDP echo), 59 VCF fabric overlay network border node, 367
NQA operation configuration (UDP jitter), 54 configuring
NQA operation configuration (UDP tracert), 61 AMS, 90
NQA operation configuration (voice), 62 analyzer, 89
SNTP configuration, 107, 138, 141, 141 analyzer instance, 89
client/server analyzer parameters, 89
IPv6 NTP client/server association mode, 121 Chef, 373, 377, 377
NTP association mode, 104, 108 Chef client, 376
NTP client/server association mode, 111, 120 Chef server, 376
NTP client/server association Chef workstation, 376
mode+authentication, 133 collector, 86
NTP client/server mode dynamic associations collector instance, 87
max, 118 collector parameters, 86
clock CWMP, 251, 255, 261
NTP local clock as reference source, 110 CWMP ACS attribute, 256
close-wait timer (CWMP ACS), 261 CWMP ACS attribute (default)(CLI), 257
collaborating CWMP ACS attribute (preferred), 256
NQA client+Track function, 27 CWMP ACS autoconnect parameters, 260
NQA+Track collaboration, 7 CWMP ACS close-wait timer, 261
collecting CWMP ACS connection retry max number, 260
sFlow agent+collector information CWMP ACS periodic Inform feature, 260
configuration, 316
CWMP CPE ACS authentication parameters, 258
troubleshooting sFlow remote collector cannot
CWMP CPE ACS connection interface, 259
receive packets, 320
CWMP CPE ACS provision code, 259
common
CWMP CPE attribute, 258
information center standard system logs, 321
EAA, 270, 277
community
EAA environment variable (user-defined), 273
SNMPv1 community direct configuration, 159
EAA event monitor policy (CLI), 278
SNMPv1 community indirect configuration,
160 EAA event monitor policy (Track), 279
SNMPv1 configuration, 159, 159 EAA monitor policy, 274
SNMPv2c community direct configuration by EAA monitor policy (CLI-defined+environment
community name, 159 variables), 281
SNMPv2c community indirect configuration by EAA monitor policy (Tcl-defined), 277
creating SNMPv2c user, 160 eMDI, 404, 409, 409, 411
SNMPv2c configuration, 159, 159 end-to-end iNQA packet loss measurement, 92
comparing EPS agent, 401, 402, 402

423
Event MIB, 180, 182, 189 NQA client operation (path jitter), 24
Event MIB event, 183 NQA client operation (SNMP), 18
Event MIB trigger test, 185 NQA client operation (TCP), 18
Event MIB trigger test (Boolean), 191 NQA client operation (UDP echo), 19
Event MIB trigger test (existence), 189 NQA client operation (UDP jitter), 16
Event MIB trigger test (threshold), 187, 194 NQA client operation (UDP tracert), 20
feature image-based packet capture, 349 NQA client operation (voice), 22
flow mirroring, 309, 314 NQA client operation optional parameters, 26
flow mirroring traffic behavior, 312 NQA client statistics collection, 29
flow mirroring traffic class, 311 NQA client template, 31
information center, 321, 326, 338 NQA client template (DNS), 33
information center log output (console), 338 NQA client template (FTP), 41
information center log output (Linux log host), NQA client template (HTTP), 38
340 NQA client template (HTTPS), 39
information center log output (UNIX log host), NQA client template (ICMP), 32
339 NQA client template (RADIUS), 42
information center log suppression, 333 NQA client template (SSL), 43
information center log suppression for module, NQA client template (TCP half open), 35
334
NQA client template (TCP), 34
information center trace log file max size, 337
NQA client template (UDP), 36
iNQA, 79, 86, 92
NQA client template optional parameters, 44
IPv6 NTP client/server association mode, 121
NQA client threshold monitoring, 27
IPv6 NTP multicast association mode, 130
NQA client+Track collaboration, 27
IPv6 NTP symmetric active/passive
NQA collaboration, 68
association mode, 124
NQA operation (DHCP), 50
Layer 2 remote port mirroring, 295
NQA operation (DLSw), 65
Layer 2 remote port mirroring (egress port),
306 NQA operation (DNS), 51
Layer 2 remote port mirroring (reflector port NQA operation (FTP), 52
configurable), 303 NQA operation (HTTP), 53
local port mirroring, 292 NQA operation (ICMP echo), 46
local port mirroring (multi-monitoring-devices), NQA operation (ICMP jitter), 47
294 NQA operation (path jitter), 66
local port mirroring (multiple monitoring NQA operation (SNMP), 57
devices), 302 NQA operation (TCP), 58
local port mirroring (source port mode), 301 NQA operation (UDP echo), 59
local port mirroring group monitor port, 293 NQA operation (UDP jitter), 54
local port mirroring group source ports, 293 NQA operation (UDP tracert), 61
mirroring sources, 293 NQA operation (voice), 62
MP, 88 NQA server, 9
NETCONF, 197, 199 NQA template (DNS), 71
NQA, 7, 8, 46 NQA template (FTP), 75
NQA client history record save, 30 NQA template (HTTP), 74
NQA client operation, 9 NQA template (HTTPS), 75
NQA client operation (DHCP), 12 NQA template (ICMP), 70
NQA client operation (DLSw), 24 NQA template (RADIUS), 76
NQA client operation (DNS), 13 NQA template (SSL), 77
NQA client operation (FTP), 14 NQA template (TCP half open), 72
NQA client operation (HTTP), 15 NQA template (TCP), 72
NQA client operation (ICMP echo), 10 NQA template (UDP), 73
NQA client operation (ICMP jitter), 11 NTP, 102, 107, 120
NTP association mode, 108 SNMP, 155, 167
NTP broadcast association mode, 109, 125 SNMP common parameters, 158
NTP broadcast mode authentication, 114 SNMP logging, 165
NTP broadcast mode+authentication, 135 SNMP notification, 162
NTP client/server association mode, 108, 120 SNMPv1, 167
NTP client/server mode authentication, 111 SNMPv1 community, 159, 159
NTP client/server mode+authentication, 133 SNMPv1 community by community name, 159
NTP dynamic associations max, 118 SNMPv1 community by creating SNMPv1 user,
NTP local clock as reference source, 110 160
NTP multicast association mode, 110, 127 SNMPv1 host notification send, 163
NTP multicast mode authentication, 116 SNMPv2c, 167
NTP optional parameters, 117 SNMPv2c community, 159, 159
NTP symmetric active/passive association SNMPv2c community by community name, 159
mode, 108, 123 SNMPv2c community by creating SNMPv2c user,
NTP symmetric active/passive mode 160
authentication, 113 SNMPv2c host notification send, 163
packet capture, 342, 350 SNMPv3, 168
packet capture (feature image-based), 350 SNMPv3 group and user, 160
packet loss notification, 91 SNMPv3 group and user in FIPS mode, 162
PMM kernel thread deadloop detection, 286 SNMPv3 group and user in non-FIPS mode, 161
PMM kernel thread starvation detection, 287 SNMPv3 host notification send, 163
PoE, 143, 145 SNTP, 107, 138, 141, 141
PoE (uni-PSE device), 152, 152 SNTP authentication, 139
PoE PI power management, 149 SQA, 413, 413
PoE PI power max, 148 UDP data flow, 411
PoE power, 148 VCF fabric, 354, 360
PoE profile PI, 150 VCF fabric automated underlay network
PoE PSE power max, 148 deployment, 362, 364
PoE PSE power monitoring, 149 VCF fabric MAC address of VSI interfaces, 368
point-to-point iNQA packet loss measurement, connecting
96 CWMP ACS connection initiation, 260
port mirroring, 301 CWMP ACS connection retry max number, 260
port mirroring remote destination group CWMP CPE ACS connection interface, 259
monitor port, 297 console
port mirroring remote probe VLAN, 298 information center log output, 328
Puppet, 388, 391, 391 information center log output configuration, 338
remote port mirroring source group egress NETCONF over console session establishment,
port, 300 203
remote port mirroring source group reflector content
port, 299 packet file content display, 350
remote port mirroring source group source controlling
ports, 299
RMON history control entry, 173
RMON, 171, 176
converging
RMON alarm, 174, 177
VCF fabric configuration, 354, 360
RMON Ethernet statistics group, 176
cookbook
RMON history group, 176
Chef resources, 374
RMON statistics, 173
CPE
sFlow, 316, 318, 318
CWMP ACS-CPE autoconnect, 253
sFlow agent+collector information, 316
CPU
sFlow counter sampling, 318
flow mirroring configuration, 309, 314
sFlow flow sampling, 317
creating
local port mirroring group, 293 deadloop detection (Linux kernel PMM), 286
remote port mirroring destination group, 297 debugging
remote port mirroring source group, 298 feature module, 6
RMON Ethernet statistics entry, 173 system, 5
RMON history control entry, 173 system maintenance, 1
customer premise equipment (CPE) default
CPE WAN Management Protocol. Use CWMP information center log default output rules, 322
CWMP NETCONF non-default settings retrieval, 207
ACS attribute (default)(CLI), 257 system information default output rules
ACS attribute (preferred), 256 (diagnostic log), 322
ACS attribute configuration, 256 system information default output rules (hidden
ACS autoconnect parameters, 260 log), 323
ACS HTTPS SSL client policy, 258 system information default output rules (security
log), 322
ACS-CPE autoconnect, 253
system information default output rules (trace log),
autoconfiguration server (ACS), 251
323
basic functions, 251
deploying
configuration, 251, 255, 261
VCF fabric automated deployment, 358
connection establishment, 254
VCF fabric automated underlay network
CPE ACS authentication parameters, 258 deployment configuration, 364
CPE ACS connection interface, 259 deployment
CPE ACS provision code, 259 VCF fabric automated underlay network
CPE attribute configuration, 258 deployment configuration, 362
customer premise equipment (CPE), 251
DHCP server, 251 information center system logs, 322
DNS server, 251 port mirroring, 289
enable, 256 port mirroring destination device, 289
how it works, 253 detecting
main/backup ACS switchover, 254 PMM kernel thread deadloop detection, 286
network framework, 251 PMM kernel thread starvation detection, 287
RPC methods, 253 PoE PD (nonstandard), 147
settings display, 261 determining
D ping address reachability, 2
device
data
Chef configuration, 373, 377, 377
feature image-based packet capture data
display filter, 349 Chef resources (netdev_device), 380
NETCONF configuration data retrieval (all configuration information retrieval, 204
modules), 211 CWMP configuration, 251, 255, 261
NETCONF configuration data retrieval feature image-based packet capture configuration,
(Syslog module), 212 349
NETCONF data entry retrieval (interface feature image-based packet capture file save,
table), 209 349
NETCONF filtering (column-based), 215 information center configuration, 321, 326, 338
NETCONF filtering (column-based) information center log output configuration
(conditional match), 216 (console), 338, 338
NETCONF filtering (column-based) (full information center log output configuration (Linux
match), 215 log host), 340
NETCONF filtering (column-based) (regex information center log output configuration (UNIX
match), 216 log host), 339
NETCONF filtering (conditional match), 219 information center system log types, 321
NETCONF filtering (regex match), 217 IPv6 NTP multicast association mode, 130
NETCONF filtering (table-based), 214 Layer 2 remote port mirroring (egress port), 306
Layer 2 remote port mirroring (reflector port SNMP common parameter configuration, 158
configurable), 303 SNMP configuration, 155, 167
Layer 2 remote port mirroring configuration, SNMP MIB, 155
295 SNMP notification, 162
local port mirroring (multi-monitoring-devices), SNMP view-based MIB access control, 155
294
SNMPv1 community configuration, 159, 159
local port mirroring (source port mode), 301
SNMPv1 community configuration by community
local port mirroring configuration, 292 name, 159
local port mirroring configuration (multiple SNMPv1 community configuration by creating
monitoring devices), 302 SNMPv1 user, 160
local port mirroring group monitor port, 293 SNMPv1 configuration, 167
NETCONF capability exchange, 204 SNMPv2c community configuration, 159, 159
NETCONF CLI operations, 231, 232 SNMPv2c community configuration by community
NETCONF configuration, 197, 199, 199 name, 159
NETCONF configuration modification, 222 SNMPv2c community configuration by creating
NETCONF device configuration+state SNMPv2c user, 160
information retrieval, 205 SNMPv2c configuration, 167
NETCONF information retrieval, 208 SNMPv3 configuration, 168
NETCONF management, 199 SNMPv3 group and user configuration, 160
NETCONF non-default settings retrieval, 207 SNMPv3 group and user configuration in FIPS
NETCONF running configuration lock/unlock, mode, 162
220, 221 SNMPv3 group and user configuration in
NETCONF session information retrieval, 209, non-FIPS mode, 161
213 device role
NETCONF session termination, 238 master spine node configuration, 363
NETCONF YANG file content retrieval, 208 VCF fabric automated underlay network device
NQA client operation, 9 role configuration, 362
NQA collaboration configuration, 68 DHCP
NQA operation configuration (DHCP), 50 CWMP DHCP server, 251
NQA operation configuration (DNS), 51 NQA client operation, 12
NQA server, 9 NQA operation configuration, 50
NTP architecture, 103 diagnosing
NTP broadcast association mode, 125 information center diagnostic log, 321
NTP broadcast mode+authentication, 135 information center diagnostic log save (log file),
NTP multicast association mode, 127 336
packet capture configuration (feature direction
image-based), 350 port mirroring (bidirectional), 289
PoE configuration, 143, 145 port mirroring (inbound), 289
PoE profile PI configuration, 150 port mirroring (outbound), 289
port mirroring configuration, 289, 301 disabling
port mirroring remote destination group, 297 information center interface link up/link down log
port mirroring remote source group, 298 generation, 334
port mirroring remote source group egress NTP message receiving, 118
port, 300 display
port mirroring remote source group reflector EPS agent, 402
port, 299 displaying
port mirroring remote source group source CWMP settings, 261
ports, 299 EAA settings, 277
port mirroring source device, 289 eMDI, 411
Puppet configuration, 388, 391, 391 Event MIB, 189
Puppet resources (netdev_device), 392 feature image-based packet capture data display
Puppet shutdown, 390 filter, 349
information center, 337 event monitor policy environment variable, 272
iNQA analyzer, 91 event monitor policy runtime, 272
iNQA collector, 91 event monitor policy user role, 272
NQA, 45 event source, 270
NTP, 120 how it works, 270
packet capture display filter configuration, 346, monitor policy, 271
348 monitor policy configuration, 274
packet file content, 350 monitor policy configuration
PMM, 284 (CLI-defined+environment variables), 281
PMM kernel threads, 287 monitor policy configuration (Tcl-defined), 277
PMM user processes, 286 monitor policy configuration restrictions, 274
PoE, 152 monitor policy configuration restrictions (Tcl), 276
port mirroring, 301 monitor policy suspension, 276
RMON settings, 175 RTM, 270
sFlow, 318 settings display, 277
SNMP settings, 166 echo
SNTP, 140 NQA client operation (ICMP echo), 10
SQA, 413 NQA operation configuration (ICMP echo), 46
user PMM, 285 NQA operation configuration (UDP echo), 59
VCF fabric, 369 egress port
DLSw Layer 2 remote port mirroring, 289
NQA client operation, 24 Layer 2 remote port mirroring (egress port), 306
NQA operation configuration, 65 port mirroring remote source group egress port,
DNS 300
CWMP DNS server, 251 electricity
NQA client operation, 13 PoE configuration, 143, 145
NQA client template, 33 Embedded Automation Architecture. Use EAA
NQA operation configuration, 51 eMDI
NQA template configuration, 71 configuration, 404, 411
domain display, 411
name system. Use DNS implementation, 404
DSCP monitoring indicators, 405
NTP packet value setting, 119 prerequisites, 409
DSL network restrictions and guidelines, 408
CWMP configuration, 251, 255 starting the instance, 410
duplicate log suppression, 333 stopping the instance, 410
dynamic enable
Dynamic Host Configuration Protocol. Use VCF fabric local proxy ARP, 368
DHCP VCF fabric overlay network L2 agent, 366
NTP dynamic associations max, 118 VCF fabric overlay network L3 agent, 367
E enabling
CWMP, 256
EAA
Event MIB SNMP notification, 188
configuration, 270, 277
information center, 328
environment variable configuration
information center duplicate log suppression, 333
(user-defined), 273
information center synchronous output, 333
event monitor, 270
information center system log SNMP notification,
event monitor policy action, 272
334
event monitor policy configuration (CLI), 278
measurement functionality, 90
event monitor policy configuration (Track), 279
NQA client, 9
event monitor policy element, 271
packet loss measurement, 88
PoE over-temperature protection, 151 NETCONF module report event subscription, 235
PoE PD detection (nonstandard), 147 NETCONF module report event subscription
PoE PI, 145 cancel, 236
SNMP agent, 157 NETCONF monitoring event subscription, 234
SNMP notification, 162 NETCONF syslog event subscription, 233
SNMP version, 157 RMON event group, 171
SNTP, 138 Event Management Information Base. See Event MIB
VCF fabric topology discovery, 361 Event MIB
environment configuration, 180, 182, 189
EAA environment variable configuration display, 189
(user-defined), 273 event actions, 181
EAA event monitor policy environment event configuration, 183
variable, 272 monitored object, 180
PoE over-temperature protection enable, 151 object owner, 182
EPS agent SNMP notification enable, 188
configuration, 401, 402, 402 trigger test configuration, 185
display, 402 trigger test configuration (Boolean), 191
establishing trigger test configuration (existence), 189
NETCONF over console sessions, 203 trigger test configuration (threshold), 187, 194
NETCONF over SOAP sessions, 202 exchanging
NETCONF over SSH sessions, 203 NETCONF capabilities, 204
NETCONF over Telnet sessions, 203 existence
NETCONF session, 200 Event MIB trigger test, 180
Ethernet Event MIB trigger test configuration, 189
CWMP configuration, 251, 255, 261
F
Layer 2 remote port mirroring configuration,
295 field
PoE configuration, 143, 145 packet capture display filter keyword, 346
port mirroring configuration, 289, 301 file
power over Ethernet. Use PoE Chef configuration file, 374
RMON Ethernet statistics group configuration, information center diagnostic log output
176 destination, 336
RMON statistics configuration, 173 information center log save (log file), 331
RMON statistics entry, 173 information center security log file management,
RMON statistics group, 171 336
sFlow configuration, 316, 318, 318 information center security log save (log file), 335
Ethernet interface NETCONF YANG file content retrieval, 208
Chef resources (netdev_l2_interface), 382 packet file content display, 350
Puppet resources (netdev_l2_interface), 394 filtering
event feature image-based packet capture data display,
349
EAA configuration, 270, 277
NETCONF column-based filtering, 214
EAA environment variable configuration
(user-defined), 273 NETCONF data (conditional match), 219
EAA event monitor, 270 NETCONF data (regex match), 217
EAA event monitor policy element, 271 NETCONF data filtering (column-based), 215
EAA event monitor policy environment NETCONF data filtering (table-based), 214
variable, 272 NETCONF table-based filtering, 214
EAA event source, 270 packet capture display filter configuration, 346,
EAA monitor policy, 271 348
NETCONF event subscription, 233, 237 packet capture filter configuration, 342, 345
FIPS compliance
information center, 326 H
NETCONF, 199 hidden log (information center), 321
SNMP, 156 history
FIPS mode NQA client history record save, 30
SNMPv3 group and user configuration, 162 RMON group, 171
firmware RMON history control entry, 173
PoE PSE firmware upgrade, 151 RMON history group configuration, 176
flow host
mirroring. See flow mirroring information center log output (log host), 330
Sampled Flow. Use sFlow HTTP
flow mirroring NQA client operation, 15
configuration, 309, 314 NQA client template, 38
QoS policy application, 313 NQA operation configuration, 53
QoS policy application (global), 313 NQA template configuration, 74
QoS policy application (interface), 313 HTTPS
QoS policy application (VLAN), 313 CWMP ACS HTTPS SSL client policy, 258
traffic behavior configuration, 312 NQA client template, 39
traffic class configuration, 311 NQA template configuration, 75
format
I
information center system logs, 323
NETCONF message, 197 ICMP
FTP NQA client operation (ICMP echo), 10
NQA client operation, 14 NQA client operation (ICMP jitter), 11
NQA client template, 41 NQA client template, 32
NQA operation configuration, 52 NQA collaboration configuration, 68
NQA template configuration, 75 NQA operation configuration (ICMP echo), 46
full match NQA operation configuration (ICMP jitter), 47
NETCONF data filtering (column-based), 215 NQA template configuration, 70
ping command, 1
G
identifying
generating tracert node failure, 4, 4
information center interface link up/link down image
log generation, 334
packet capture configuration (feature
get operation image-based), 350
SNMP, 156 packet capture feature image-based configuration,
SNMP logging, 165 349
group implementation
Chef resources (netdev_lagg), 383 eMDI, 404
local port mirroring group monitor port, 293 inbound
local port mirroring group source port, 293 port mirroring, 289
port mirroring group, 289 information
Puppet resources (netdev_lagg), 396 device configuration information retrieval, 204
RMON, 171 information center
RMON alarm, 172 configuration, 321, 326, 338
RMON Ethernet statistics, 171 default output rules (diagnostic log), 322
RMON event, 171 default output rules (hidden log), 323
RMON history, 171 default output rules (security log), 322
RMON private alarm, 172 default output rules (trace log), 323
SNMPv3 configuration in non-FIPS mode, 161 diagnostic log save (log file), 336
group and user display, 337
SNMPv3 configuration, 160 duplicate log suppression, 333
enable, 328 SNMP common parameter configuration, 158
FIPS compliance, 326 SNMP configuration, 155, 167
interface link up/link down log generation, 334 SNMP MIB, 155
log default output rules, 322 SNMP2c community configuration by community
log output (console), 328 name, 159
log output (log host), 330 SNMP2c community configuration by creating
log output (monitor terminal), 329 SNMPv2c user, 160
log output configuration (console), 338 SNMPv1 community configuration, 159, 159
log output configuration (Linux log host), 340 SNMPv1 community configuration by community
name, 159
log output configuration (UNIX log host), 339
SNMPv1 community configuration by creating
log output destinations, 328
SNMPv1 user, 160
log save (log file), 331
SNMPv2c community configuration, 159, 159
log storage period, 332
SNMPv3 group and user configuration, 160
log suppression configuration, 333
SNMPv3 group and user configuration in FIPS
log suppression for module, 334 mode, 162
maintain, 337 SNMPv3 group and user configuration in
security log file management, 336 non-FIPS mode, 161
security log management, 335 interval
security log save (log file), 335 CWMP ACS periodic Inform feature, 260
synchronous log output, 333 IP addressing
system information log types, 321 tracert, 2
system log destinations, 322 tracert node failure identification, 4, 4
system log formats and field descriptions, 323 IP services
system log levels, 321 NQA client history record save, 30
system log SNMP notification, 334 NQA client operation (DHCP), 12
trace log file max size, 337 NQA client operation (DLSw), 24
initiating NQA client operation (DNS), 13
CWMP ACS connection initiation, 260 NQA client operation (FTP), 14
iNQA NQA client operation (HTTP), 15
application scenarios, 81 NQA client operation (ICMP echo), 10
basic concepts, 79 NQA client operation (ICMP jitter), 11
benefits, 79 NQA client operation (path jitter), 24
compatibility, 85 NQA client operation (SNMP), 18
configuration, 79, 86, 92 NQA client operation (TCP), 18
guidelines, 85 NQA client operation (UDP echo), 19
instance configuration requirements, 85 NQA client operation (UDP jitter), 16
network configuration requirements, 85 NQA client operation (UDP tracert), 20
operating mechanism, 83 NQA client operation (voice), 22
prerequisites, 86 NQA client operation optional parameters, 26
restrictions, 85 NQA client operation scheduling, 31
iNQA analyzer NQA client statistics collection, 29
display, 91 NQA client template (DNS), 33
iNQA collector NQA client template (FTP), 41
display, 91 NQA client template (HTTP), 38
interface NQA client template (HTTPS), 39
Chef resources (netdev_interface), 380 NQA client template (ICMP), 32
Puppet resources (netdev_interface), 393 NQA client template (RADIUS), 42
Puppet resources (netdev_l2_interface), 394 NQA client template (SSL), 43
Internet NQA client template (TCP half open), 35
NQA configuration, 7, 8, 46 NQA client template (TCP), 34
NQA client template (UDP), 36 Puppet resources (netdev_l2vpn), 395
NQA client template optional parameters, 44 language
NQA client threshold monitoring, 27 Puppet configuration, 388, 391, 391
NQA client+Track collaboration, 27 Layer 2
NQA collaboration configuration, 68 port mirroring configuration, 289, 301
NQA configuration, 7, 8, 46 remote port mirroring, 290
NQA operation configuration (DHCP), 50 remote port mirroring (egress port), 306
NQA operation configuration (DLSw), 65 remote port mirroring (reflector port configurable),
NQA operation configuration (DNS), 51 303
NQA operation configuration (FTP), 52 remote port mirroring configuration, 295
NQA operation configuration (HTTP), 53 Layer 3
NQA operation configuration (ICMP echo), 46 port mirroring configuration, 289, 301
NQA operation configuration (ICMP jitter), 47 tracert, 2
NQA operation configuration (path jitter), 66 tracert node failure identification, 4, 4
NQA operation configuration (SNMP), 57 level
NQA operation configuration (TCP), 58 information center system logs, 321
NQA operation configuration (UDP echo), 59 link
NQA operation configuration (UDP jitter), 54 information center interface link up/link down log
NQA operation configuration (UDP tracert), 61 generation, 334
NQA operation configuration (voice), 62 Linux
NQA template configuration (DNS), 71 information center log host output configuration,
340
NQA template configuration (FTP), 75
kernel thread, 283
NQA template configuration (HTTP), 74
PMM, 283
NQA template configuration (HTTPS), 75
PMM kernel thread, 286
NQA template configuration (ICMP), 70
PMM kernel thread deadloop detection, 286
NQA template configuration (RADIUS), 76
PMM kernel thread display, 287
NQA template configuration (SSL), 77
PMM kernel thread maintain, 287
NQA template configuration (TCP half open),
72 PMM kernel thread starvation detection, 287
NQA template configuration (TCP), 72 PMM user process display, 286
NQA template configuration (UDP), 73 PMM user process maintain, 286
IPv6 Puppet configuration, 388, 391, 391
NTP client/server association mode, 121 loading
NTP multicast association mode, 130 NETCONF configuration, 226
NTP symmetric active/passive association local
mode, 124 NTP local clock as reference source, 110
port mirroring, 290
K
port mirroring configuration, 292
kernel thread port mirroring configuration
display, 287 (multi-monitoring-devices), 294
Linux process, 283 port mirroring group creation, 293
maintain, 287 port mirroring group monitor port, 293
PMM, 286 port mirroring group source port, 293
PMM deadloop detection, 286 locking
PMM starvation detection, 287 NETCONF running configuration, 220, 221
keyword log field description
packet capture filter, 342 information center system logs, 323
L logging
information center configuration, 321, 326, 338
L2VPN
information center diagnostic log save (log file),
Chef resources (netdev_l2vpn), 383 336
information center diagnostic logs, 321 packet capture filter operator, 343
information center duplicate log suppression, M
333
information center hidden logs, 321 maintaining
information center interface link up/link down information center, 337
log generation, 334 PMM kernel thread, 286
information center log default output rules, PMM kernel threads, 287
322 PMM Linux, 283
information center log output (console), 328 PMM user processes, 286
information center log output (log host), 330 process monitoring and maintenance. See PMM
information center log output (monitor user PMM, 285
terminal), 329 maintenance
information center log output configuration displaying EPS agent, 402
(console), 338 Management Information Base. Use MIB
information center log output configuration managing
(Linux log host), 340
information center security log file, 336
information center log output configuration
information center security logs, 335
(UNIX log host), 339
PoE PI power management configuration, 149
information center log save (log file), 331
manifest
information center log storage period, 332
Puppet resources, 389, 392
information center security log file
management, 336 matching
information center security log management, NETCONF data filtering (column-based), 215
335 NETCONF data filtering (column-based)
information center security log save (log file), (conditional match), 216
335 NETCONF data filtering (column-based) (full
information center security logs, 321 match), 215
information center standard system logs, 321 NETCONF data filtering (column-based) (regex
match), 216
information center synchronous log output,
333 NETCONF data filtering (conditional match), 219
information center system log destinations, NETCONF data filtering (regex match), 217
322 NETCONF data filtering (table-based), 214
information center system log formats and packet capture display filter configuration
field descriptions, 323 (proto[…] expression), 348
information center system log levels, 321 message
information center system log SNMP NETCONF format, 197
notification, 334 NTP message receiving disable, 118
information center trace log file max size, 337 NTP message source address, 117
SNMP configuration, 165 MIB
system information default output rules Event MIB configuration, 180, 182, 189
(diagnostic log), 322 Event MIB event actions, 181
system information default output rules Event MIB event configuration, 183
(hidden log), 323 Event MIB monitored object, 180
system information default output rules Event MIB object owner, 182
(security log), 322
Event MIB trigger test configuration, 185
system information default output rules (trace
log), 323 Event MIB trigger test configuration (Boolean),
191
logical
Event MIB trigger test configuration (existence),
packet capture display filter configuration 189
(logical expression), 348
Event MIB trigger test configuration (threshold),
packet capture display filter operator, 347 187, 194
packet capture filter configuration (logical SNMP, 155, 155
expression), 345
SNMP Get operation, 156
SNMP Set operation, 156 PMM kernel thread, 286
SNMP view-based access control, 155 PMM Linux, 283
mirroring PoE PSE power monitoring configuration, 149
flow. See flow mirroring process monitoring and maintenance. See PMM
port. See port mirroring user PMM, 285
mode multicast
NTP association, 108 IPv6 NTP multicast association mode, 130
NTP broadcast association, 104, 109 NTP multicast association mode, 104, 110, 127
NTP client/server association, 104, 108 NTP multicast mode authentication, 116
NTP multicast association, 104, 110 NTP multicast mode dynamic associations max,
NTP symmetric active/passive association, 118
104, 108 multimedia
PoE PSE firmware upgrade (full mode), 151 SQA configuration, 413
PoE PSE firmware upgrade (refresh mode), N
151
SNMP access control (rule-based), 156 NETCONF
SNMP access control (view-based), 156 capability exchange, 204
modifying Chef configuration, 373, 377, 377
NETCONF configuration, 222, 223 CLI operations, 231, 232
module CLI return, 239
feature module debug, 6 configuration, 197, 199
information center configuration, 321, 326, configuration data retrieval (all modules), 211
338 configuration data retrieval (Syslog module), 212
information center log suppression for module, configuration load, 226
334 configuration modification, 222, 223
NETCONF configuration data retrieval (all configuration rollback, 226
modules), 211 configuration rollback (configuration file-based),
NETCONF configuration data retrieval 227
(Syslog module), 212 configuration rollback (rollback point-based), 227
NETCONF module report event subscription, configuration save, 224
235 data entry retrieval (interface table), 209
NETCONF module report event subscription data filtering, 214
cancel, 236
data filtering (conditional match), 219
monitor terminal
data filtering (regex match), 217
information center log output, 329
device configuration, 199
monitoring
device configuration information retrieval, 204
EAA configuration, 270
device configuration+state information retrieval,
EAA environment variable configuration 205
(user-defined), 273
device management, 199
Event MIB configuration, 180, 182, 189
event subscription, 233, 237
Event MIB trigger test configuration (Boolean),
FIPS compliance, 199
191
information retrieval, 208
Event MIB trigger test configuration
(existence), 189 message format, 197
Event MIB trigger test configuration module report event subscription, 235
(threshold), 194 module report event subscription cancel, 236
local port mirroring configuration (multiple monitoring event subscription, 234
monitoring devices), 302 NETCONF over console session establishment,
NETCONF monitoring event subscription, 234 203
NQA client threshold monitoring, 27 NETCONF over SOAP session establishment,
NQA threshold monitoring, 8 202
PMM, 284 NETCONF over SSH session establishment, 203
NETCONF over Telnet session establishment, information center security log save (log file), 335
203 information center synchronous log output, 333
non-default settings retrieval, 207 information center system log SNMP notification,
over SOAP, 197 334
protocols and standards, 199 information center system log types, 321
Puppet configuration, 388, 391, 391 information center trace log file max size, 337
running configuration lock/unlock, 220, 221 Layer 2 remote port mirroring (egress port), 306
running configuration save, 225 Layer 2 remote port mirroring (reflector port
session attribute set, 200 configurable), 303
session establishment, 200 Layer 2 remote port mirroring configuration, 295
session establishment restrictions, 200 local port mirroring (multi-monitoring-devices),
session information retrieval, 209, 213 294
session termination, 238 local port mirroring (source port mode), 301
structure, 197 local port mirroring configuration, 292
supported operations, 240 local port mirroring configuration (multiple
monitoring devices), 302
syslog event subscription, 233
local port mirroring group monitor port, 293
VCF fabric automated underlay NETCONF
username and password setting, 363 local port mirroring group source port, 293
YANG file content retrieval, 208 Network Configuration Protocol. Use NETCONF
network Network Time Protocol. Use NTP
analyzer configuration, 89 NQA client history record save, 30
Chef network framework, 373 NQA client operation, 9
Chef resources, 374, 380 NQA client operation (DHCP), 12
collector configuration, 86 NQA client operation (DLSw), 24
eMDI configuration, 409 NQA client operation (DNS), 13
eMDI implementation, 404 NQA client operation (FTP), 14
Event MIB SNMP notification enable, 188 NQA client operation (HTTP), 15
Event MIB trigger test configuration (Boolean), NQA client operation (ICMP echo), 10
191 NQA client operation (ICMP jitter), 11
Event MIB trigger test configuration NQA client operation (path jitter), 24
(existence), 189 NQA client operation (SNMP), 18
Event MIB trigger test configuration NQA client operation (TCP), 18
(threshold), 194 NQA client operation (UDP echo), 19
feature module debug, 6 NQA client operation (UDP jitter), 16
flow mirroring configuration, 309, 314 NQA client operation (UDP tracert), 20
flow mirroring traffic behavior, 312 NQA client operation (voice), 22
information center diagnostic log save (log NQA client operation optional parameters, 26
file), 336 NQA client operation scheduling, 31
information center duplicate log suppression, NQA client statistics collection, 29
333
NQA client template, 31
information center interface link up/link down
NQA client threshold monitoring, 27
log generation, 334
NQA client+Track collaboration, 27
information center log output configuration
(console), 338 NQA collaboration configuration, 68
information center log output configuration NQA operation configuration (DHCP), 50
(Linux log host), 340 NQA operation configuration (DLSw), 65
information center log output configuration NQA operation configuration (DNS), 51
(UNIX log host), 339 NQA operation configuration (FTP), 52
information center log storage period, 332 NQA operation configuration (HTTP), 53
information center security log file NQA operation configuration (ICMP echo), 46
management, 336 NQA operation configuration (ICMP jitter), 47
NQA operation configuration (path jitter), 66
NQA operation configuration (SNMP), 57 SNMPv2c community configuration by community
NQA operation configuration (TCP), 58 name, 159
NQA operation configuration (UDP echo), 59 SNMPv2c community configuration by creating
NQA operation configuration (UDP jitter), 54 SNMPv2c user, 160
NQA operation configuration (UDP tracert), 61 SNMPv3 group and user configuration, 160
NQA operation configuration (voice), 62 SNMPv3 group and user configuration in FIPS
mode, 162
NQA server, 9
SNMPv3 group and user configuration in
NQA template configuration (DNS), 71
non-FIPS mode, 161
NQA template configuration (FTP), 75
tracert node failure identification, 4, 4
NQA template configuration (HTTP), 74
VCF fabric automated deployment, 358
NQA template configuration (HTTPS), 75
VCF fabric automated underlay network
NQA template configuration (ICMP), 70 deployment configuration, 362, 364
NQA template configuration (RADIUS), 76 VCF fabric Neutron deployment, 357
NQA template configuration (SSL), 77 VCF fabric topology, 354
NQA template configuration (TCP half open), network management
72
Chef configuration, 373, 377, 377
NQA template configuration (TCP), 72
CWMP basic functions, 251
NQA template configuration (UDP), 73
CWMP configuration, 251, 255, 261
NTP association mode, 108
EAA configuration, 270, 277
NTP message receiving disable, 118
EPS agent configuration, 401, 402, 402
ping network connectivity test, 1
Event MIB configuration, 180, 182, 189
PMM 3rd party process start, 284
information center configuration, 321, 326, 338
PMM 3rd party process stop, 284
iNQA configuration, 86
port mirroring remote destination group, 297
NETCONF configuration, 197
port mirroring remote source group, 298
NQA configuration, 7, 8, 46
port mirroring remote source group egress
NTP configuration, 102, 107, 120
port, 300
packet capture configuration, 342, 350
port mirroring remote source group reflector
port, 299 PMM Linux network, 283
port mirroring remote source group source PoE configuration (uni-PSE device), 152, 152
ports, 299 port mirroring configuration, 289, 301
Puppet network framework, 388 Puppet configuration, 388, 391, 391
Puppet resources, 389, 392 RMON configuration, 171, 176
quality analyzer. See NQA sFlow configuration, 316, 318, 318
RMON alarm configuration, 174, 177 SNMP configuration, 155, 167
RMON alarm group sample types, 173 SNMPv1 configuration, 167
RMON Ethernet statistics group configuration, SNMPv2c configuration, 167
176 SNMPv3 configuration, 168
RMON history group configuration, 176 SQA configuration, 413, 413
RMON statistics configuration, 173 stopping packet capture configuration, 350
RMON statistics function, 173 VCF fabric configuration, 354, 360
sFlow counter sampling configuration, 318 network monitoring
sFlow flow sampling configuration, 317 SQA configuration, 413
SNMP common parameter configuration, 158 Neutron
SNMPv1 community configuration, 159, 159 VCF fabric, 356
SNMPv1 community configuration by VCF fabric Neutron deployment, 357
community name, 159 NMM
SNMPv1 community configuration by creating CWMP ACS attributes, 256
SNMPv1 user, 160 CWMP ACS attributes (default)(CLI), 257
SNMPv2c community configuration, 159, 159 CWMP ACS attributes (preferred), 256
CWMP ACS autoconnect parameters, 260
CWMP ACS HTTPS SSL client policy, 258 information center log formats and field
CWMP basic functions, 251 descriptions, 323
CWMP configuration, 251, 255, 261 information center log levels, 321
CWMP CPE ACS authentication parameters, information center log output (console), 328
258 information center log output (log host), 330
CWMP CPE ACS connection interface, 259 information center log output (monitor terminal),
CWMP CPE ACS provision code, 259 329
CWMP CPE attributes, 258 information center log output configuration
CWMP framework, 251 (console), 338
CWMP settings display, 261 information center log output configuration (Linux
log host), 340
device configuration information retrieval, 204
information center log output configuration (UNIX
EAA configuration, 270, 277
log host), 339
EAA environment variable configuration
information center log output destinations, 328
(user-defined), 273
information center log save (log file), 331
EAA event monitor, 270
information center log storage period, 332
EAA event monitor policy configuration (CLI),
278 information center log suppression for module,
334
EAA event monitor policy configuration (Track),
279 information center maintain, 337
EAA event monitor policy element, 271 information center security log file management,
336
EAA event monitor policy environment
variable, 272 information center security log management, 335
EAA event source, 270 information center security log save (log file), 335
EAA monitor policy, 271 information center synchronous log output, 333
EAA monitor policy configuration, 274 information center system log SNMP notification,
334
EAA monitor policy configuration
(CLI-defined+environment variables), 281 information center system log types, 321
EAA monitor policy configuration (Tcl-defined), information center trace log file max size, 337
277 iNQA configuration, 86
EAA monitor policy suspension, 276 IPv6 NTP client/server association mode
EAA RTM, 270 configuration, 121
EAA settings display, 277 IPv6 NTP multicast association mode
configuration, 130
EPS agent configuration, 401, 402, 402
IPv6 NTP symmetric active/passive association
EPS agent display, 402
mode configuration, 124
feature image-based packet capture
Layer 2 remote port mirroring (egress port), 306
configuration, 349
Layer 2 remote port mirroring (reflector port
feature module debug, 6
configurable), 303
flow mirroring configuration, 309, 314
Layer 2 remote port mirroring configuration, 295
flow mirroring QoS policy application, 313
local port mirroring (multi-monitoring-devices),
flow mirroring traffic behavior, 312 294
information center configuration, 321, 326, local port mirroring (source port mode), 301
338
local port mirroring configuration, 292
information center diagnostic log save (log
local port mirroring configuration (multiple
file), 336
monitoring devices), 302
information center display, 337
local port mirroring configuration restrictions
information center duplicate log suppression, (multi-monitoring-devices), 294
333
local port mirroring group, 293
information center interface link up/link down
local port mirroring group monitor port, 293
log generation, 334
local port mirroring group source port, 293
information center log default output rules,
322 NETCONF capability exchange, 204
information center log destinations, 322 NETCONF CLI operations, 231, 232
NETCONF CLI return, 239 NQA client operation (TCP), 18
NETCONF configuration, 197, 199 NQA client operation (UDP echo), 19
NETCONF configuration data retrieval (all NQA client operation (UDP jitter), 16
modules), 211 NQA client operation (UDP tracert), 20
NETCONF configuration data retrieval NQA client operation (voice), 22
(Syslog module), 212 NQA client operation optional parameter
NETCONF configuration modification, 222, configuration restrictions, 26
223 NQA client operation optional parameters, 26
NETCONF data entry retrieval (interface NQA client operation restrictions (FTP), 14
table), 209
NQA client operation restrictions (ICMP jitter), 12
NETCONF data filtering, 214
NQA client operation restrictions (UDP jitter), 16
NETCONF device configuration+state
NQA client operation restrictions (UDP tracert), 20
information retrieval, 205
NQA client operation restrictions (voice), 22
NETCONF event subscription, 233, 237
NQA client operation scheduling, 31
NETCONF information retrieval, 208
NQA client statistics collection, 29
NETCONF module report event subscription,
235 NQA client statistics collection restrictions, 29
NETCONF module report event subscription NQA client template, 31
cancel, 236 NQA client template (DNS), 33
NETCONF monitoring event subscription, 234 NQA client template (FTP), 41
NETCONF non-default settings retrieval, 207 NQA client template (HTTP), 38
NETCONF over console session NQA client template (HTTPS), 39
establishment, 203 NQA client template (ICMP), 32
NETCONF over SOAP session establishment, NQA client template (RADIUS), 42
202 NQA client template (SSL), 43
NETCONF over SSH session establishment, NQA client template (TCP half open), 35
203 NQA client template (TCP), 34
NETCONF over Telnet session establishment, NQA client template (UDP), 36
203
NQA client template configuration restrictions, 31
NETCONF protocols and standards, 199
NQA client template optional parameter
NETCONF running configuration lock/unlock, configuration restrictions, 44
220, 221
NQA client template optional parameters, 44
NETCONF session establishment, 200
NQA client threshold monitoring, 27
NETCONF session information retrieval, 209,
NQA client threshold monitoring configuration
213
restrictions, 28
NETCONF session termination, 238
NQA client+Track collaboration, 27
NETCONF structure, 197
NQA client+Track collaboration restrictions, 27
NETCONF supported operations, 240
NQA collaboration configuration, 68
NETCONF syslog event subscription, 233
NQA configuration, 7, 8, 46
NETCONF YANG file content retrieval, 208
NQA display, 45
NQA client history record save, 30
NQA operation configuration (DHCP), 50
NQA client history record save restrictions, 30
NQA operation configuration (DLSw), 65
NQA client operation, 9
NQA operation configuration (DNS), 51
NQA client operation (DHCP), 12
NQA operation configuration (FTP), 52
NQA client operation (DLSw), 24
NQA operation configuration (HTTP), 53
NQA client operation (DNS), 13
NQA operation configuration (ICMP echo), 46
NQA client operation (FTP), 14
NQA operation configuration (ICMP jitter), 47
NQA client operation (HTTP), 15
NQA operation configuration (path jitter), 66
NQA client operation (ICMP echo), 10
NQA operation configuration (SNMP), 57
NQA client operation (ICMP jitter), 11
NQA operation configuration (TCP), 58
NQA client operation (path jitter), 24
NQA operation configuration (UDP echo), 59
NQA client operation (SNMP), 18
NQA operation configuration (UDP jitter), 54
NQA operation configuration (UDP tracert), 61 packet capture configuration (feature
NQA operation configuration (voice), 62 image-based), 350
NQA server, 9 packet capture display filter configuration, 346,
NQA server configuration restrictions, 9 348
NQA template configuration (DNS), 71 packet capture filter configuration, 342, 345
NQA template configuration (FTP), 75 packet file content display, 350
NQA template configuration (HTTP), 74 ping address reachability determination, 2
NQA template configuration (HTTPS), 75 ping command, 1
NQA template configuration (ICMP), 70 ping network connectivity test, 1
NQA template configuration (RADIUS), 76 PoE configuration, 143, 145
NQA template configuration (SSL), 77 PoE configuration (uni-PSE device), 152, 152
NQA template configuration (TCP half open), PoE display, 152
72 PoE over-temperature protection enable, 151
NQA template configuration (TCP), 72 PoE PD detection (nonstandard), 147
NQA template configuration (UDP), 73 PoE PI power management configuration, 149
NQA threshold monitoring, 8 PoE PI power max configuration, 148
NQA+Track collaboration, 7 PoE power configuration, 148
NTP architecture, 103 PoE profile application, 150
NTP association mode, 108 PoE profile PI configuration, 150
NTP authentication configuration, 111 PoE PSE firmware upgrade, 151
NTP broadcast association mode PoE PSE power max configuration, 148
configuration, 109, 125 PoE PSE power monitoring configuration, 149
NTP broadcast mode authentication PoE troubleshooting, 153
configuration, 114 port mirroring classification, 290
NTP broadcast mode+authentication, 135 port mirroring configuration, 289, 301
NTP client/server association mode port mirroring display, 301
configuration, 120 port mirroring remote destination group, 297
NTP client/server mode authentication port mirroring remote source group, 298
configuration, 111
RMON alarm configuration, 177
NTP client/server mode+authentication, 133
RMON configuration, 171, 176
NTP configuration, 102, 107, 120
RMON Ethernet statistics group configuration,
NTP display, 120 176
NTP dynamic associations max, 118 RMON group, 171
NTP local clock as reference source, 110 RMON history group configuration, 176
NTP message receiving disable, 118 RMON protocols and standards, 173
NTP message source address specification, RMON settings display, 175
117
sFlow agent+collector information configuration,
NTP multicast association mode, 110 316
NTP multicast association mode configuration, sFlow configuration, 316, 318, 318
127
sFlow counter sampling configuration, 318
NTP multicast mode authentication
sFlow display, 318
configuration, 116
sFlow flow sampling configuration, 317
NTP optional parameter configuration, 117
sFlow protocols and standards, 316
NTP packet DSCP value setting, 119
SNMP access control mode, 156
NTP protocols and standards, 106, 138
SNMP configuration, 155, 167
NTP security, 105
SNMP framework, 155
NTP symmetric active/passive association
mode configuration, 123 SNMP Get operation, 156
NTP symmetric active/passive mode SNMP host notification send, 163
authentication configuration, 113 SNMP logging configuration, 165
packet capture configuration, 342, 350 SNMP MIB, 155
SNMP notification, 162
SNMP protocol versions, 156 client history record save restrictions, 30
SNMP settings display, 166 client operation, 9
SNMP view-based MIB access control, 155 client operation (DHCP), 12
SNMPv1 configuration, 167 client operation (DLSw), 24
SNMPv2c configuration, 167 client operation (DNS), 13
SNMPv3 configuration, 168 client operation (FTP), 14
SNTP authentication, 139 client operation (HTTP), 15
SNTP configuration, 107, 138, 141, 141 client operation (ICMP echo), 10
SNTP display, 140 client operation (ICMP jitter), 11
SNTP enable, 138 client operation (path jitter), 24
system debugging, 1, 5 client operation (SNMP), 18
system information default output rules client operation (TCP), 18
(diagnostic log), 322 client operation (UDP echo), 19
system information default output rules client operation (UDP jitter), 16
(hidden log), 323 client operation (UDP tracert), 20
system information default output rules client operation (voice), 22
(security log), 322
client operation optional parameter configuration
system information default output rules (trace restrictions, 26
log), 323
client operation optional parameters, 26
system maintenance, 1
client operation restrictions (FTP), 14
tracert, 2
client operation restrictions (ICMP jitter), 12
tracert node failure identification, 4, 4
client operation restrictions (UDP jitter), 16
troubleshooting sFlow, 320
client operation restrictions (UDP tracert), 20
troubleshooting sFlow remote collector cannot
client operation restrictions (voice), 22
receive packets, 320
client operation scheduling, 31
VCF fabric configuration, 360
client operation scheduling restrictions, 31
VCF fabric topology discovery, 361
client statistics collection, 29
NMS
client statistics collection restrictions, 29
Event MIB SNMP notification enable, 188
client template (DNS), 33
RMON configuration, 171, 176
client template (FTP), 41
SNMP Notification operation, 156
client template (HTTP), 38
SNMP protocol versions, 156
client template (HTTPS), 39
SNMP Set operation, 156, 156
client template (ICMP), 32
node
client template (RADIUS), 42
Event MIB monitored object, 180
client template (SSL), 43
non-default
client template (TCP half open), 35
NETCONF non-default settings retrieval, 207
client template (TCP), 34
non-FIPS mode
client template (UDP), 36
SNMPv3 group and user configuration, 161
client template configuration, 31
notifying
client template configuration restrictions, 31
Event MIB SNMP notification enable, 188
client template optional parameter configuration
information center system log SNMP
restrictions, 44
notification, 334
client template optional parameters, 44
NETCONF syslog event subscription, 233
client threshold monitoring, 27
SNMP configuration, 155, 167
client threshold monitoring configuration
SNMP host notification send, 163
restrictions, 28
SNMP notification, 162
client+Track collaboration, 27
SNMP Notification operation, 156
client+Track collaboration restrictions, 27
NQA
collaboration configuration, 68
client enable, 9
configuration, 7, 8, 46
client history record save, 30
display, 45
how it works, 7 configuration, 102, 107, 120
operation configuration (DHCP), 50 configuration restrictions, 106
operation configuration (DLSw), 65 display, 120
operation configuration (DNS), 51 IPv6 client/server association mode configuration,
operation configuration (FTP), 52 121
operation configuration (HTTP), 53 IPv6 multicast association mode configuration,
operation configuration (ICMP echo), 46 130
operation configuration (ICMP jitter), 47 IPv6 symmetric active/passive association mode
configuration, 124
operation configuration (path jitter), 66
local clock as reference source, 110
operation configuration (SNMP), 57
message receiving disable, 118
operation configuration (TCP), 58
message source address specification, 117
operation configuration (UDP echo), 59
multicast association mode, 104
operation configuration (UDP jitter), 54
multicast association mode configuration, 110,
operation configuration (UDP tracert), 61
127
operation configuration (voice), 62
multicast mode authentication configuration, 116
server configuration, 9
multicast mode dynamic associations max, 118
server configuration restrictions, 9
optional parameter configuration, 117
template configuration (DNS), 71
packet DSCP value setting, 119
template configuration (FTP), 75
protocols and standards, 106, 138
template configuration (HTTP), 74
security, 105
template configuration (HTTPS), 75
SNTP authentication, 139
template configuration (ICMP), 70
SNTP configuration, 107, 138, 141, 141
template configuration (RADIUS), 76
SNTP configuration restrictions, 138
template configuration (SSL), 77
symmetric active/passive association mode, 104
template configuration (TCP half open), 72
symmetric active/passive association mode
template configuration (TCP), 72 configuration, 108, 123
template configuration (UDP), 73 symmetric active/passive mode authentication
threshold monitoring, 8 configuration, 113
Track collaboration function, 7 symmetric active/passive mode dynamic
NTP associations max, 118
access control, 105 O
architecture, 103
object
association mode configuration, 108
Event MIB monitored, 180
authentication, 106
Event MIB object owner, 182
authentication configuration, 111
operating mechanism
broadcast association mode, 104
iNQA, 83
broadcast association mode configuration,
109, 125 outbound
broadcast mode authentication configuration, port mirroring, 289
114 outputting
broadcast mode dynamic associations max, information center log configuration (console),
118 338
broadcast mode+authentication, 135 information center log configuration (Linux log
client/server association mode, 104 host), 340
client/server association mode configuration, information center log default output rules, 322
108, 120 information center logs configuration (UNIX log
client/server mode authentication host), 339
configuration, 111 information center synchronous log output, 333
client/server mode dynamic associations max, information logs (console), 328
118 information logs (log host), 330
client/server mode+authentication, 133 information logs (monitor terminal), 329
information logs to various destinations, 328 SNMP common parameter configuration, 158
over-temperature protection (PoE), 151 SNMPv3 group and user configuration in FIPS
mode, 162
P
path
packet NQA client operation (path jitter), 24
flow mirroring configuration, 309, 314 NQA operation configuration, 66
flow mirroring QoS policy application, 313 pause
flow mirroring traffic behavior, 312 automated underlay network deployment, 363
NTP DSCP value setting, 119 PD
packet capture display filter configuration PoE PD detection (nonstandard), 147
(packet field expression), 348
performing
port mirroring configuration, 289, 301
NETCONF CLI operations, 231, 232
SNTP configuration, 107, 138, 141, 141
PI
packet capture
PoE enable, 145
capture filter keywords, 342
PoE profile PI configuration, 150
capture filter operator, 343
power management configuration, 149
configuration, 342, 350
power max configuration, 148
display filter configuration, 346, 348
troubleshooting PoE profile, 154
display filter configuration (logical expression),
troubleshooting priority failure, 153
348
ping
display filter configuration (packet field
expression), 348 address reachability determination, 1, 2
display filter configuration (proto[…] network connectivity test, 1
expression), 348 system maintenance, 1
display filter configuration (relational PMM
expression), 348 3rd party process start, 284
display filter keyword, 346 3rd party process stop, 284
display filter operator, 347 display, 284
feature image-based configuration, 349, 350 kernel thread deadloop detection, 286
feature image-based file save, 349 kernel thread maintain, 286
feature image-based packet data display filter, kernel thread monitoring, 286
349 kernel thread starvation detection, 287
file content display, 350 Linux kernel thread, 283
filter configuration, 342, 345 Linux network, 283
filter configuration (expr relop expr Linux user, 283
expression), 345 monitor, 284
filter configuration (logical expression), 345 user PMM display, 285
filter configuration (proto [ expr:size ]
expression), 345
user PMM monitor, 285
filter configuration (vlan vlan_id expression),
PoE
345
configuration, 143, 145
stopping, 350
configuration (uni-PSE device), 152, 152
parameter
display, 152
CWMP CPE ACS authentication, 258
enable for PI, 145
NQA client history record save, 30
over-temperature protection enable, 151
NQA client operation optional parameters, 26
PD detection (nonstandard), 147
NQA client template optional parameters, 44
PI power management configuration, 149
NTP dynamic associations max, 118
PI power max configuration, 148
NTP local clock as reference source, 110
power configuration, 148
NTP message receiving disable, 118
profile application, 150
NTP message source address, 117
profile PI configuration, 150
NTP optional parameter configuration, 117
PSE firmware upgrade, 151 Layer 2 remote port mirroring configuration
PSE power max configuration, 148 (reflector port configurable), 303
PSE power monitoring configuration, 149 Layer 2 remote port mirroring configuration
troubleshoot, 153 restrictions, 295
troubleshoot PI critical priority failure, 153 Layer 2 remote port mirroring egress port
configuration restrictions, 300
troubleshoot PoE profile failure, 154
Layer 2 remote port mirroring reflector port
policy
configuration restrictions, 299
CWMP ACS HTTPS SSL client policy, 258
Layer 2 remote port mirroring remote destination
EAA configuration, 270, 277 group configuration restrictions, 297
EAA environment variable configuration Layer 2 remote port mirroring remote probe VLAN
(user-defined), 273 configuration restrictions, 298, 298, 298
EAA event monitor policy configuration (CLI), Layer 2 remote port mirroring source port
278 configuration restrictions, 299
EAA event monitor policy configuration (Track), local configuration, 292
279
local group creation, 293
EAA event monitor policy element, 271
local group monitor port, 293
EAA event monitor policy environment
local group monitor port configuration restrictions,
variable, 272
293
EAA monitor policy, 271
local group source port, 293
EAA monitor policy configuration, 274
local mirroring configuration (source port mode),
EAA monitor policy configuration 301
(CLI-defined+environment variables), 281
local port mirroring, 290
EAA monitor policy configuration (Tcl-defined),
local port mirroring configuration
277
(multi-monitoring-devices), 294
EAA monitor policy suspension, 276
local port mirroring configuration (multiple
flow mirroring QoS policy application, 313 monitoring devices), 302
port local port mirroring configuration restrictions
IPv6 NTP client/server association mode, 121 (multi-monitoring-devices), 294
IPv6 NTP multicast association mode, 130 mirroring source configuration, 293
IPv6 NTP symmetric active/passive monitor port to remote probe VLAN assignment,
association mode, 124 298
mirroring. See port mirroring remote probe VLAN, 298
NTP association mode, 108 remote destination group creation, 297
NTP broadcast association mode, 125 remote destination group monitor port, 297
NTP broadcast mode+authentication, 135 remote source group creation, 298
NTP client/server association mode, 120 terminology, 289
NTP client/server mode+authentication, 133 portal authentication
NTP configuration, 102, 107, 120 configuration, 413
NTP multicast association mode, 127 power
NTP symmetric active/passive association over Ethernet. Use PoE
mode, 123 PoE configuration, 148
SNTP configuration, 107, 138, 141, 141 PoE configuration (uni-PSE device), 152, 152
port mirroring PoE PI max configuration, 148
classification, 290 PoE PI power management configuration, 149
configuration, 289, 301 PoE PSE max configuration, 148
configuration restrictions, 292 sourcing equipment. Use PSE
display, 301 private
Layer 2 remote configuration, 295 RMON private alarm group, 172
Layer 2 remote port mirroring, 290 procedure
Layer 2 remote port mirroring configuration applying flow mirroring QoS policy, 313
(egress port), 306
applying flow mirroring QoS policy (global), 313
applying flow mirroring QoS policy (interface), configuring Event MIB trigger test (Boolean), 191
313 configuring Event MIB trigger test (existence),
applying flow mirroring QoS policy (VLAN), 189
313 configuring Event MIB trigger test (threshold), 187,
applying PoE profile, 150 194
assigning CWMP ACS attribute configuring feature image-based packet capture,
(preferred)(CLI), 257 349
assigning CWMP ACS attribute configuring flow mirroring, 314
(preferred)(DHCP server), 256 configuring flow mirroring traffic behavior, 312
canceling subscription to NETCONF module configuring flow mirroring traffic class, 311
report event, 236 configuring information center, 326
configuring a Puppet agent, 390 configuring information center log output
configuring border node, 367 (console), 338
configuring Chef, 377, 377 configuring information center log output (Linux
configuring Chef client, 376 log host), 340
configuring Chef server, 376 configuring information center log output (UNIX
configuring Chef workstation, 376 log host), 339
configuring CWMP, 255, 261 configuring information center log suppression,
configuring CWMP ACS attribute, 256 333
configuring CWMP ACS attribute configuring information center log suppression for
(default)(CLI), 257 module, 334
configuring CWMP ACS attribute (preferred), configuring information center trace log file max
256 size, 337
configuring CWMP ACS autoconnect configuring iNQA, 86
parameters, 260 configuring IPv6 NTP client/server association
configuring CWMP ACS close-wait timer, 261 mode, 121
configuring CWMP ACS connection retry max configuring IPv6 NTP multicast association mode,
number, 260 130
configuring CWMP ACS periodic Inform configuring IPv6 NTP symmetric active/passive
feature, 260 association mode, 124
configuring CWMP CPE ACS authentication configuring Layer 2 remote port mirroring, 295
parameters, 258 configuring Layer 2 remote port mirroring (egress
configuring CWMP CPE ACS connection port), 306
interface, 259 configuring Layer 2 remote port mirroring
configuring CWMP CPE ACS provision code, (reflector port configurable), 303
259 configuring local port mirroring, 292
configuring CWMP CPE attribute, 258 configuring local port mirroring (multiple monitor
configuring EAA environment variable ports), 294
(user-defined), 273 configuring local port mirroring (multiple
configuring EAA event monitor policy (CLI), monitoring devices), 302
278 configuring local port mirroring (source port
configuring EAA event monitor policy (Track), mode), 301
279 configuring local port mirroring group monitor port,
configuring EAA monitor policy, 274 293
configuring EAA monitor policy configuring local port mirroring group source ports,
(CLI-defined+environment variables), 281 293
configuring EAA monitor policy (Tcl-defined), configuring MAC address of VSI interfaces, 368
configuring master spine node, 363
configuring EPS agent, 402, 402 configuring mirroring sources, 293
configuring Event MIB, 182 configuring NETCONF, 199
configuring Event MIB event, 183 configuring NQA, 8
configuring Event MIB trigger test, 185 configuring NQA client history record save, 30
configuring NQA client operation, 9
configuring NQA client operation (DHCP), 12 configuring NQA operation (UDP echo), 59
configuring NQA client operation (DLSw), 24 configuring NQA operation (UDP jitter), 54
configuring NQA client operation (DNS), 13 configuring NQA operation (UDP tracert), 61
configuring NQA client operation (FTP), 14 configuring NQA operation (voice), 62
configuring NQA client operation (HTTP), 15 configuring NQA server, 9
configuring NQA client operation (ICMP echo), configuring NQA template (DNS), 71
10 configuring NQA template (FTP), 75
configuring NQA client operation (ICMP jitter), configuring NQA template (HTTP), 74
11 configuring NQA template (HTTPS), 75
configuring NQA client operation (path jitter), configuring NQA template (ICMP), 70
24
configuring NQA template (RADIUS), 76
configuring NQA client operation (SNMP), 18
configuring NQA template (SSL), 77
configuring NQA client operation (TCP), 18
configuring NQA template (TCP half open), 72
configuring NQA client operation (UDP echo),
configuring NQA template (TCP), 72
19
configuring NQA template (UDP), 73
configuring NQA client operation (UDP jitter),
16 configuring NTP, 107
configuring NQA client operation (UDP tracert), configuring NTP association mode, 108
20 configuring NTP broadcast association mode,
configuring NQA client operation (voice), 22 109, 125
configuring NQA client operation optional configuring NTP broadcast mode authentication,
parameters, 26 114
configuring NQA client statistics collection, 29 configuring NTP broadcast mode+authentication,
135
configuring NQA client template, 31
configuring NTP client/server association mode,
configuring NQA client template (DNS), 33
108, 120
configuring NQA client template (FTP), 41
configuring NTP client/server mode
configuring NQA client template (HTTP), 38 authentication, 111
configuring NQA client template (HTTPS), 39 configuring NTP client/server
configuring NQA client template (ICMP), 32 mode+authentication, 133
configuring NQA client template (RADIUS), 42 configuring NTP dynamic associations max, 118
configuring NQA client template (SSL), 43 configuring NTP local clock as reference source,
configuring NQA client template (TCP half 110
open), 35 configuring NTP multicast association mode, 110,
configuring NQA client template (TCP), 34 127
configuring NQA client template (UDP), 36 configuring NTP multicast mode authentication,
configuring NQA client template optional 116
parameters, 44 configuring NTP optional parameters, 117
configuring NQA client threshold monitoring, configuring NTP symmetric active/passive
27 association mode, 108, 123
configuring NQA client+Track collaboration, configuring NTP symmetric active/passive mode
27 authentication, 113
configuring NQA collaboration, 68 configuring packet capture (feature image-based),
configuring NQA operation (DHCP), 50 350
configuring NQA operation (DLSw), 65 configuring PMM kernel thread deadloop
configuring NQA operation (DNS), 51 detection, 286
configuring NQA operation (FTP), 52 configuring PMM kernel thread starvation
detection, 287
configuring NQA operation (HTTP), 53
configuring PoE, 145
configuring NQA operation (ICMP echo), 46
configuring PoE (uni-PSE device), 152, 152
configuring NQA operation (ICMP jitter), 47
configuring PoE PI power management, 149
configuring NQA operation (path jitter), 66
configuring PoE PI power max, 148
configuring NQA operation (SNMP), 57
configuring PoE power, 148
configuring NQA operation (TCP), 58
configuring PoE profile PI, 150 configuring SNTP authentication, 139
configuring PoE PSE power max, 148 configuring SQA, 413
configuring PoE PSE power monitoring, 149 configuring VCF fabric, 360
configuring port mirroring monitor port to configuring VCF fabric automated underlay
remote probe VLAN assignment, 298 network deployment, 362, 364
configuring port mirroring remote destination creating local port mirroring group, 293
group monitor port, 297 creating port mirroring remote destination group
configuring port mirroring remote probe VLAN, on the destination device, 297
298 creating port mirroring remote source group on
configuring port mirroring remote source the source device, 298
group egress port, 300 creating RMON Ethernet statistics entry, 173
configuring port mirroring remote source creating RMON history control entry, 173
group reflector port, 299 debugging feature module, 6
configuring port mirroring remote source determining ping address reachability, 2
group source ports, 299
disabling information center interface link up/link
configuring Puppet, 391, 391 down log generation, 334
configuring RabbiMQ server communication disabling NTP message interface receiving, 118
parameters, 365
displaying CWMP settings, 261
configuring resources, 390
displaying EAA settings, 277
configuring RMON alarm, 174, 177
displaying eMDI, 411
configuring RMON Ethernet statistics group,
displaying Event MIB, 189
176
displaying information center, 337
configuring RMON history group, 176
displaying iNQA analyzer, 91
configuring RMON statistics, 173
displaying iNQA collector, 91
configuring sFlow, 318, 318
displaying NMM sFlow, 318
configuring sFlow agent+collector information,
316 displaying NQA, 45
configuring sFlow counter sampling, 318 displaying NTP, 120
configuring sFlow flow sampling, 317 displaying packet file content, 350
configuring SNMP common parameters, 158 displaying PMM, 284
configuring SNMP logging, 165 displaying PMM kernel threads, 287
configuring SNMP notification, 162 displaying PMM user processes, 286
configuring SNMPv1, 167 displaying PoE, 152
configuring SNMPv1 community, 159, 159 displaying port mirroring, 301
configuring SNMPv1 community by displaying RMON settings, 175
community name, 159 displaying SNMP settings, 166
configuring SNMPv1 host notification send, displaying SNTP, 140
163 displaying SQA, 413
configuring SNMPv2c, 167 displaying user PMM, 285
configuring SNMPv2c community, 159, 159 displaying VCF fabric, 369
configuring SNMPv2c community by enabling CWMP, 256
community name, 159 enabling Event MIB SNMP notification, 188
configuring SNMPv2c host notification send, enabling information center, 328
163 enabling information center duplicate log
configuring SNMPv3, 168 suppression, 333
configuring SNMPv3 group and user, 160 enabling information center synchronous log
configuring SNMPv3 group and user in FIPS output, 333
mode, 162 enabling information center system log SNMP
configuring SNMPv3 group and user in notification, 334
non-FIPS mode, 161 enabling L2 agent, 366
configuring SNMPv3 host notification send, enabling L3 agent, 367
163 enabling local proxy ARP, 368
configuring SNTP, 107, 141, 141
enabling NQA client, 9 performing NETCONF CLI operations, 231, 232
enabling PoE over-temperature protection, retrieving device configuration information, 204
151 retrieving NETCONF configuration data (all
enabling PoE PD detection (nonstandard), modules), 211
147 retrieving NETCONF configuration data (Syslog
enabling PoE PI, 145 module), 212
enabling SNMP agent, 157 retrieving NETCONF data entry (interface table),
enabling SNMP notification, 162 209
enabling SNMP version, 157 retrieving NETCONF information, 208
enabling SNTP, 138 retrieving NETCONF non-default settings, 207
enabling VCF fabric topology discovery, 361 retrieving NETCONF session information, 209,
establishing NETCONF over console 213
sessions, 203 retrieving NETCONF YANG file content
establishing NETCONF over SOAP sessions, information, 208
202 returning to NETCONF CLI, 239
establishing NETCONF over SSH sessions, rolling back NETCONF configuration, 226
203 rolling back NETCONF configuration
establishing NETCONF over Telnet sessions, (configuration file-based), 227
203 rolling back NETCONF configuration (rollback
establishing NETCONF session, 200 point-based), 227
exchanging NETCONF capabilities, 204 saving feature image-based packet capture to file,
filtering feature image-based packet capture 349
data display, 349 saving information center diagnostic logs (log file),
filtering NETCONF data, 214 336
filtering NETCONF data (conditional match), saving information center log (log file), 331
219 saving information center security logs (log file),
filtering NETCONF data (regex match), 217 335
identifying tracert node failure, 4, 4 saving NETCONF configuration, 224
loading NETCONF configuration, 226 saving NETCONF running configuration, 225
locking NETCONF running configuration, 220, scheduling CWMP ACS connection initiation, 260
221 scheduling NQA client operation, 31
maintaining information center, 337 setting information center log storage period, 332
maintaining PMM kernel thread, 286 setting NETCONF session attribute, 200
maintaining PMM kernel threads, 287 setting NTP packet DSCP value, 119
maintaining PMM user processes, 286 setting VCF fabric automated underlay network
maintaining user PMM, 285 NETCONF username and password, 363
managing information center security log, 335 shutting down Chef, 377
managing information center security log file, shutting down Puppet (on device), 390
336 signing a certificate for Puppet agent, 390
modifying NETCONF configuration, 222, 223 specifying automated underlay network
monitoring PMM, 284 deployment template file, 362
monitoring PMM kernel thread, 286 specifying CWMP ACS HTTPS SSL client policy,
258
monitoring user PMM, 285
specifying NTP message source address, 117
outputting information center logs (console),
328 specifying overlay network type, 366
outputting information center logs (log host), specifying VCF fabric automated underlay
330 network device role, 362
outputting information center logs (monitor starting Chef, 376
terminal), 329 starting PMM 3rd party process, 284
outputting information center logs to various starting Puppet, 390
destinations, 328 stopping PMM 3rd party process, 284
pausing underlay network deployment, 363 subscribing to NETCONF events, 233, 237
subscribing to NETCONF module report event, QoS
235 flow mirroring configuration, 309, 314
subscribing to NETCONF monitoring event, flow mirroring QoS policy application, 313
234 quality
subscribing to NETCONF syslog event, 233 SQA configuration, 413
suspending EAA monitor policy, 276
R
terminating NETCONF session, 238
testing network connectivity with ping, 1 RADIUS
troubleshooting sFlow remote collector cannot NQA client template, 42
receive packets, 320 NQA template configuration, 76
unlocking NETCONF running configuration, real-time
220, 221 event manager. See RTM
upgrading PSE firmware, 151 reflector port
process Layer 2 remote port mirroring, 289
monitoring and maintenance. See PMM port mirroring remote source group reflector port,
profile 299
PoE profile application, 150 regex match
PoE profile PI configuration, 150 NETCONF data filtering, 217
protocols and standards NETCONF data filtering (column-based), 216
NETCONF, 197, 199 regular expression. Use regex
NTP, 106, 138 relational
packet capture display filter keyword, 346 packet capture display filter configuration
RMON, 173 (relational expression), 348
sFlow, 316 remote
SNMP configuration, 155, 167 Layer 2 remote port mirroring, 295
SNMP versions, 156 port mirroring destination group, 297
provision code (ACS), 259 port mirroring destination group monitor port, 297
PSE port mirroring destination group remote probe
PoE configuration, 143, 145 VLAN, 298
PoE configuration (uni-PSE device), 152, 152 port mirroring monitor port to remote probe VLAN
assignment, 298
power max configuration, 148
port mirroring source group, 298
power monitoring configuration, 149
port mirroring source group egress port, 300
Puppet
port mirroring source group reflector port, 299
configuration, 388, 391, 391
port mirroring source group remote probe VLAN,
configuring a Puppet agent, 390
298
configuring resources, 390
port mirroring source group source ports, 299
network framework, 388
Remote Network Monitoring. Use RMON
Puppet agent certificate signing, 390
remote probe VLAN
resources, 389, 392
Layer 2 remote port mirroring, 289
resources (netdev_device), 392
local port mirroring (multi-monitoring-devices),
resources (netdev_interface), 393 294
resources (netdev_l2_interface), 394 port mirroring monitor port to remote probe VLAN
resources (netdev_l2vpn), 395 assignment, 298
resources (netdev_lagg), 396 port mirroring remote destination group, 298
resources (netdev_vlan), 397 port mirroring remote source group, 298
resources (netdev_vsi), 398 reporting
resources (netdev_vte), 398 NETCONF module report event subscription, 235
resources (netdev_vxlan), 399 NETCONF module report event subscription
shutting down (on device), 390 cancel, 236
start, 390 resource
Q Chef, 374, 380
Chef netdev_device, 380 NQA server configuration, 9
Chef netdev_interface, 380 NTP configuration, 106
Chef netdev_l2_interface, 382 port mirroring configuration, 292
Chef netdev_l2vpn, 383 port mirroringlocal port mirroring configuration
Chef netdev_lagg, 383 (multi-monitoring-devices), 294
Chef netdev_vlan, 384 RMON alarm configuration, 174
Chef netdev_vsi, 385 RMON history control entry creation, 173
Chef netdev_vte, 386 SNMPv1 community configuration, 159
Chef netdev_vxlan, 386 SNMPv2 community configuration, 159
Puppet, 389, 392 SNMPv3 group and user configuration, 160
Puppet netdev_device, 392 SNTP configuration, 106
Puppet netdev_interface, 393 SNTP configuration restrictions, 138
Puppet netdev_l2_interface, 394 retrieving
Puppet netdev_l2vpn, 395 device configuration information, 204
Puppet netdev_lagg, 396 retrieving
Puppet netdev_vlan, 397 NETCONF configuration data (all modules), 211
Puppet netdev_vsi, 398 NETCONF configuration data (Syslog module),
Puppet netdev_vte, 398 212
Puppet netdev_vxlan, 399 NETCONF data entry (interface table), 209
restrictions NETCONF device configuration+state
information, 205
EAA monitor policy configuration, 274
NETCONF information, 208
EAA monitor policy configuration (Tcl), 276
NETCONF non-default settings, 207
Layer 2 remote port configuration, 295
NETCONF session information, 209, 213
Layer 2 remote port mirroring egress port
configuration, 300 NETCONF YANG file content, 208
Layer 2 remote port mirroring reflector port returning
configuration, 299 NETCONF CLI return, 239
Layer 2 remote port mirroring remote RMON
destination group configuration, 297 alarm configuration, 174, 177
Layer 2 remote port mirroring remote probe alarm configuration restrictions, 174
VLAN configuration, 298, 298, 298 alarm group, 172
Layer 2 remote port mirroring source port alarm group sample types, 173
configuration, 299 configuration, 171, 176
local port mirroring group monitor port Ethernet statistics entry creation, 173
configuration, 293
Ethernet statistics group, 171
NETCONF session establishment, 200
Ethernet statistics group configuration, 176
NQA client history record save, 30
event group, 171
NQA client operation (FTP), 14
Event MIB configuration, 180, 182, 189
NQA client operation (ICMP jitter), 12
Event MIB event configuration, 183
NQA client operation (UDP jitter), 16
Event MIB trigger test configuration (Boolean),
NQA client operation (UDP tracert), 20 191
NQA client operation (voice), 22 Event MIB trigger test configuration (existence),
NQA client operation optional parameter 189
configuration, 26 Event MIB trigger test configuration (threshold),
NQA client operation scheduling, 31 194
NQA client statistics collection, 29 group, 171
NQA client template configuration, 31 history control entry creation, 173
NQA client template optional parameter history control entry creation restrictions, 173
configuration, 44 history group, 171
NQA client threshold monitoring configuration, history group configuration, 176
28
how it works, 171
NQA client+Track collaboration, 27
private alarm group, 172 sFlow counter sampling, 318
protocols and standards, 173 sFlow flow sampling configuration, 317
settings display, 175 saving
statistics configuration, 173 feature image-based packet capture to file, 349
statistics function, 173 information center diagnostic logs (log file), 336
rolling back information center log (log file), 331
NETCONF configuration, 226 information center security logs (log file), 335
NETCONF configuration (configuration NETCONF configuration, 224
file-based), 227 NETCONF running configuration, 225
NETCONF configuration (rollback NQA client history records, 30
point-based), 227 scheduling
routing CWMP ACS connection initiation, 260
IPv6 NTP client/server association mode, 121 NQA client operation, 31
IPv6 NTP multicast association mode, 130 security
IPv6 NTP symmetric active/passive eMDI implementation, 404
association mode, 124
information center security log file management,
NTP association mode, 108 336
NTP broadcast association mode, 125 information center security log management, 335
NTP broadcast mode+authentication, 135 information center security log save (log file), 335
NTP client/server association mode, 120 information center security logs, 321
NTP client/server mode+authentication, 133 NTP, 105
NTP configuration, 102, 107, 120 NTP authentication, 106, 111
NTP multicast association mode, 127 NTP broadcast mode authentication, 114
NTP symmetric active/passive association NTP client/server mode authentication, 111
mode, 123
NTP multicast mode authentication, 116
SNTP configuration, 107, 138, 141, 141
NTP symmetric active/passive mode
RPC authentication, 113
CWMP RPC methods, 253 SNTP authentication, 139
RTM server
EAA, 270 Chef server configuration, 376
EAA configuration, 270, 277 NQA configuration, 9
Ruby SNTP configuration, 107, 138, 141, 141
Chef configuration, 373, 377, 377 service
Chef resources, 374 NETCONF configuration data retrieval (all
rule modules), 211
information center log default output rules, NETCONF configuration data retrieval (Syslog
322 module), 212
SNMP access control (rule-based), 156 NETCONF configuration modification, 223
system information default output rules SQA configuration, 413
(diagnostic log), 322 service quality
system information default output rules SQA configuration, 413
(hidden log), 323
session
system information default output rules
NETCONF session attribute, 200
(security log), 322
NETCONF session establishment, 200
system information default output rules (trace
log), 323 NETCONF session information retrieval, 209, 213
runtime NETCONF session termination, 238
EAA event monitor policy runtime, 272 sessions
NETCONF over console session establishment,
S 203
sampling NETCONF over SOAP session establishment,
Sampled Flow. Use sFlow 202
NETCONF over SSH session establishment, Event MIB trigger test configuration (existence),
203 189
NETCONF over Telnet session establishment, Event MIB trigger test configuration (threshold),
203 187, 194
set FIPS compliance, 156
VCF fabric automated underlay network framework, 155
deployment NETCONF username and get operation, 165
password, 363 Get operation, 156
set operation host notification send, 163
SNMP, 156 information center system log SNMP notification,
SNMP logging, 165 334
setting logging configuration, 165
information center log storage period, 332 manager, 155
NETCONF session attribute, 200 MIB, 155, 155
NTP packet DSCP value, 119 MIB view-based access control, 155
severity level (system information), 321 notification configuration, 162
sFlow notification enable, 162
agent+collector information configuration, 316 Notification operation, 156
configuration, 316, 318, 318 NQA client operation, 18
counter sampling configuration, 318 NQA operation configuration, 57
display, 318 protocol versions, 156
flow sampling configuration, 317 RMON configuration, 171, 176
protocols and standards, 316 Set operation, 156
troubleshoot, 320 set operation, 165
troubleshoot remote collector cannot receive settings display, 166
packets, 320 SNMPv1 community configuration, 159, 159
shutting down SNMPv1 community configuration by community
Chef, 377 name, 159
Puppet (on device), 390 SNMPv1 community configuration by creating
Simple Network Management Protocol. Use SNMPv1 user, 160
SNMP SNMPv1 configuration, 167
Simplified NTP. See SNTP SNMPv2c community configuration, 159, 159
SIP SNMPv2c community configuration by community
SQA configuration, 413, 413 name, 159
SIP call SNMPv2c community configuration by creating
display SQA, 413 SNMPv2c user, 160
SNMP SNMPv2c configuration, 167
access control mode, 156 SNMPv3 configuration, 168
agent, 155 SNMPv3 group and user configuration, 160
agent enable, 157 SNMPv3 group and user configuration in FIPS
agent notification, 162 mode, 162
common parameter configuration, 158 SNMPv3 group and user configuration in
non-FIPS mode, 161
configuration, 155, 167
version enable, 157
Event MIB configuration, 180, 182, 189
SNMPv1
Event MIB display, 189
community configuration, 159, 159
Event MIB event configuration, 183
community configuration restrictions, 159
Event MIB SNMP notification enable, 188
configuration, 167
Event MIB trigger test configuration, 185
host notification send, 163
Event MIB trigger test configuration (Boolean),
191 Notification operation, 156
protocol version, 156
SNMPv2
community configuration restrictions, 159 NQA client template (SSL), 43
SNMPv2c NQA template configuration, 77
community configuration, 159, 159 starting
configuration, 167 Chef, 376
host notification send, 163 PMM 3rd party process, 284
Notification operation, 156 Puppet, 390
protocol version, 156 starvation detection (Linux kernel thread PMM), 287
SNMPv3 statistics
configuration, 168 NQA client statistics collection, 29
Event MIB object owner, 182 RMON configuration, 171, 176
group and user configuration, 160 RMON Ethernet statistics entry, 173
group and user configuration in FIPS mode, RMON Ethernet statistics group, 171
162 RMON Ethernet statistics group configuration,
group and user configuration in non-FIPS 176
mode, 161 RMON history control entry, 173
group and user configuration restrictions, 160 RMON statistics configuration, 173
Notification operation, 156 RMON statistics function, 173
notification send, 163 sFlow agent+collector information configuration,
protocol version, 156 316
SNTP sFlow configuration, 316, 318, 318
authentication, 139 sFlow counter sampling configuration, 318
configuration, 107, 138, 141, 141 sFlow flow sampling configuration, 317
configuration restrictions, 106, 138 stopping
display, 140 packet capture, 350
enable, 138 PMM 3rd party process, 284
SOAP storage
NETCONF message format, 197 information center log storage period, 332
NETCONF over SOAP session establishment, subscribing
202 NETCONF event subscription, 233, 237
source NETCONF module report event subscription, 235
port mirroring source, 289 NETCONF monitoring event subscription, 234
port mirroring source device, 289 NETCONF syslog event subscription, 233
specify suppressing
master spine node, 363 information center duplicate log suppression, 333
VCF fabric automated underlay network information center log suppression for module,
deployment device role, 362 334
VCF fabric automated underlay network suspending
deployment template file, 362 EAA monitor policy, 276
VCF fabric overlay network type, 366 switch
specifying module debug, 5
CWMP ACS HTTPS SSL client policy, 258 screen output, 5
NTP message source address, 117 symmetric
SQA IPv6 NTP symmetric active/passive association
display, 413 mode, 124
SSH NTP symmetric active/passive association mode,
Chef configuration, 373, 377, 377 104, 108, 113, 123
NETCONF over SSH session establishment, NTP symmetric active/passive mode dynamic
203 associations max, 118
Puppet configuration, 388, 391, 391 synchronizing
SSL information center synchronous log output, 333
CWMP ACS HTTPS SSL client policy, 258 NTP configuration, 102, 107, 120
SNTP configuration, 107, 138, 141, 141 tracert node failure identification, 4, 4
syslog system debugging
NETCONF configuration data retrieval module debugging switch, 5
(Syslog module), 212 screen output switch, 5
NETCONF syslog event subscription, 233 system information
system information center configuration, 321, 326, 338
default output rules (diagnostic log), 322
T
default output rules (hidden log), 323
default output rules (security log), 322 table
default output rules (trace log), 323 NETCONF data entry retrieval (interface table),
209
information center duplicate log suppression,
333 Tcl
information center interface link up/link down EAA configuration, 270, 277
log generation, 334 EAA monitor policy configuration, 277
information center log destinations, 322 TCP
information center log levels, 321 NQA client operation, 18
information center log output (console), 328 NQA client template, 34
information center log output (log host), 330 NQA client template (TCP half open), 35
information center log output (monitor NQA operation configuration, 58
terminal), 329 NQA template configuration, 72
information center log output configuration NQA template configuration (half open), 72
(console), 338 Telnet
information center log output configuration NETCONF over Telnet session establishment,
(Linux log host), 340 203
information center log output configuration temperature
(UNIX log host), 339 PoE over-temperature protection enable, 151
information center log save (log file), 331 template
information center log types, 321 NQA client template (DNS), 33
information center security log file NQA client template (FTP), 41
management, 336
NQA client template (HTTP), 38
information center security log management,
NQA client template (HTTPS), 39
335
NQA client template (ICMP), 32
information center security log save (log file),
335 NQA client template (RADIUS), 42
information center synchronous log output, NQA client template (SSL), 43
333 NQA client template (TCP half open), 35
information center system log SNMP NQA client template (TCP), 34
notification, 334 NQA client template (UDP), 36
information log formats and field descriptions, NQA client template configuration, 31
323 NQA client template optional parameters, 44
log default output rules, 322 NQA template configuration (DNS), 71
system administration NQA template configuration (FTP), 75
Chef configuration, 373, 377, 377 NQA template configuration (HTTP), 74
debugging, 1 NQA template configuration (HTTPS), 75
feature module debug, 6 NQA template configuration (ICMP), 70
ping, 1 NQA template configuration (RADIUS), 76
ping address reachability, 2 NQA template configuration (SSL), 77
ping command, 1 NQA template configuration (TCP half open), 72
ping network connectivity test, 1 NQA template configuration (TCP), 72
Puppet configuration, 388, 391, 391 NQA template configuration (UDP), 73
system debugging, 5 template file
tracert, 1, 2 automated underlay network deployment, 359
VCF fabric automated underlay network information center system log SNMP notification,
deployment configuration, 362 334
terminating SNMP notification, 162
NETCONF session, 238 triggering
testing Event MIB trigger test configuration, 185
Event MIB trigger test configuration, 185 Event MIB trigger test configuration (Boolean),
Event MIB trigger test configuration (Boolean), 191
191 Event MIB trigger test configuration (existence),
Event MIB trigger test configuration 189
(existence), 189 Event MIB trigger test configuration (threshold),
Event MIB trigger test configuration 187, 194
(threshold), 187, 194 troubleshooting
ping network connectivity test, 1 PI critical priority failure, 153
threshold PoE, 153
Event MIB trigger test, 181 PoE profile failure, 154
Event MIB trigger test configuration, 187, 194 sFlow, 320
NQA client threshold monitoring, 8, 27 sFlow remote collector cannot receive packets,
time 320
NTP configuration, 102, 107, 120 tunneling
NTP local clock as reference source, 110 Chef resources (netdev_vte), 386
SNTP configuration, 107, 138, 141, 141 Puppet resources (netdev_vte), 398
timer U
CWMP ACS close-wait timer, 261
UDP
topology
IPv6 NTP client/server association mode, 121
VCF fabric, 354
IPv6 NTP multicast association mode, 130
VCF fabric topology discovery, 361
IPv6 NTP symmetric active/passive association
traceroute. See tracert mode, 124
tracert NQA client operation (UDP echo), 19
IP address retrieval, 2 NQA client operation (UDP jitter), 16
node failure detection, 2, 4, 4 NQA client operation (UDP tracert), 20
NQA client operation (UDP tracert), 20 NQA client template, 36
NQA operation configuration (UDP tracert), 61 NQA operation configuration (UDP echo), 59
system maintenance, 1 NQA operation configuration (UDP jitter), 54
tracing NQA operation configuration (UDP tracert), 61
information center trace log file max size, 337 NQA template configuration, 73
Track NTP association mode, 108
EAA event monitor policy configuration, 279 NTP broadcast association mode, 125
NQA client+Track collaboration, 27 NTP broadcast mode+authentication, 135
NQA collaboration, 7 NTP client/server association mode, 120
NQA collaboration configuration, 68 NTP client/server mode+authentication, 133
traffic NTP configuration, 102, 107, 120
NQA client operation (voice), 22 NTP multicast association mode, 127
RMON configuration, 171, 176 NTP symmetric active/passive association mode,
sFlow agent+collector information 123
configuration, 316 sFlow configuration, 316, 318, 318
sFlow configuration, 316, 318, 318 UNIX
sFlow counter sampling configuration, 318 information center log host output configuration,
sFlow flow sampling configuration, 317 339
trapping unlocking
Event MIB SNMP notification enable, 188 NETCONF running configuration, 220
upgrading
PoE PSE firmware, 151 automated underlay network deployment
user NETCONF username and password setting, 363
PMM Linux user, 283 view
user process SNMP access control (view-based), 156
display, 286 virtual
maintain, 286 Virtual Converged Framework. Use VCF
VLAN
V
Chef resources, 380
variable Chef resources (netdev_l2_interface), 382
EAA environment variable configuration Chef resources (netdev_vlan), 384
(user-defined), 273
flow mirroring configuration, 309, 314
EAA event monitor policy environment
flow mirroring QoS policy application, 313
(user-defined), 273
Layer 2 remote port mirroring configuration, 295
EAA event monitor policy environment
system-defined (event-specific), 272 local port mirroring configuration, 292
EAA event monitor policy environment local port mirroring group monitor port, 293
system-defined (public), 272 local port mirroring group source port, 293
EAA event monitor policy environment packet capture filter configuration (vlan vlan_id
variable, 272 expression), 345
EAA monitor policy configuration port mirroring configuration, 289, 301
(CLI-defined+environment variables), 281 port mirroring remote probe VLAN, 289
VCF fabric Puppet resources, 392
automated deployment, 358 Puppet resources (netdev_l2_interface), 394
automated deployment process, 359 Puppet resources (netdev_vlan), 397
automated underlay network delpoyment VCF fabric configuration, 354, 360
template file configuration, 362 voice
automated underlay network deployment NQA client operation, 22
configuration, 362, 364 NQA operation configuration, 62
automated underlay network deployment VPN
device role configuration, 362
Chef resources (netdev_l2vpn), 383
automated underlay network deployment
Puppet resources (netdev_l2vpn), 395
NETCONF username and password setting,
363 VSI
configuration, 354, 360 Chef resources (netdev_vsi), 385
display, 369 Puppet resources (netdev_vsi), 398
local proxy ARP, 368 VTE
MAC address of VSI interfaces, 368 Chef resources (netdev_vte), 386
master spine node configuration, 363 Puppet resources (netdev_vte), 398
Neutron components, 356 VXLAN
Neutron deployment, 357 Chef resources (netdev_vxlan), 386
overlay network border node configuration, Puppet resources (netdev_vxlan), 399
367 VCF fabric configuration, 354, 360
overlay network L2 agent, 366 W
overlay network L3 agent, 367
workstation
overlay network tyep specifying, 366
Chef workstation configuration, 376
pausing automated underlay network
deployment, 363 X
RabbitMQ server communication parameters XML
configuration, 365 NETCONF capability exchange, 204
topology, 354 NETCONF configuration, 197, 199
topology discovery enable, 361 NETCONF data filtering, 214
VCF fabric topology NETCONF data filtering (conditional match), 219
NETCONF data filtering (regex match), 217
NETCONF message format, 197
NETCONF structure, 197
XSD
NETCONF message format, 197
Y
YANG
NETCONF YANG file content retrieval, 208