5200-8310-Network Management and Monitoring Configuration Guide
Configuring optional parameters for the NQA template ··········································································· 44
Display and maintenance commands for NQA ································································································ 45
NQA configuration examples ··························································································································· 46
Example: Configuring the ICMP echo operation ······················································································ 46
Example: Configuring the ICMP jitter operation ······················································································· 47
Example: Configuring the DHCP operation······························································································ 50
Example: Configuring the DNS operation ································································································ 51
Example: Configuring the FTP operation ································································································· 52
Example: Configuring the HTTP operation ······························································································ 53
Example: Configuring the UDP jitter operation ························································································ 54
Example: Configuring the SNMP operation ····························································································· 57
Example: Configuring the TCP operation································································································· 58
Example: Configuring the UDP echo operation ······················································································· 59
Example: Configuring the UDP tracert operation ····················································································· 61
Example: Configuring the voice operation ······························································································· 62
Example: Configuring the DLSw operation ······························································································ 65
Example: Configuring the path jitter operation ························································································· 66
Example: Configuring NQA collaboration································································································· 68
Example: Configuring the ICMP template ································································································ 70
Example: Configuring the DNS template ································································································· 71
Example: Configuring the TCP template ·································································································· 72
Example: Configuring the TCP half open template ·················································································· 72
Example: Configuring the UDP template ································································································· 73
Example: Configuring the HTTP template································································································ 74
Example: Configuring the HTTPS template ····························································································· 75
Example: Configuring the FTP template ·································································································· 75
Example: Configuring the RADIUS template ··························································································· 76
Example: Configuring the SSL template ·································································································· 77
Configuring iNQA ························································································· 79
About iNQA ······················································································································································ 79
iNQA benefits ··········································································································································· 79
Basic concepts ········································································································································· 79
Application scenarios ······························································································································· 81
Operating mechanism ······························································································································ 83
Restrictions and guidelines: iNQA configuration ······························································································ 85
Instance restrictions and guidelines ················································································································· 85
Network and target flow requirements ····································································································· 85
Collaborating with other features ············································································································· 85
iNQA tasks at a glance····································································································································· 86
Prerequisites ···················································································································································· 86
Configure a collector ········································································································································ 86
Configuring parameters for a collector ····································································································· 86
Configuring a collector instance ··············································································································· 87
Configuring an MP ··································································································································· 88
Enabling packet loss measurement ········································································································· 88
Configure the analyzer ····································································································································· 89
Configuring parameters for the analyzer ·································································································· 89
Configuring an analyzer instance ············································································································· 89
Configuring an AMS ································································································································· 90
Enabling the measurement functionality ·································································································· 90
Configuring iNQA logging································································································································· 91
Display and maintenance commands for iNQA collector ················································································· 91
Display and maintenance commands for iNQA analyzer ················································································· 91
iNQA configuration examples ·························································································································· 92
Example: Configuring an end-to-end iNQA packet loss measurement ···················································· 92
Example: Configuring a point-to-point iNQA packet loss measurement ················ 96
Configuring NTP ························································································ 102
About NTP······················································································································································ 102
NTP application scenarios ····················································································································· 102
NTP working mechanism ······················································································································· 102
NTP architecture ···································································································································· 103
NTP association modes ························································································································· 104
NTP security··········································································································································· 105
Protocols and standards ························································································································ 106
Restrictions and guidelines: NTP configuration ····························································································· 106
NTP tasks at a glance ···································································································································· 107
Enabling the NTP service······························································································································· 107
Configuring NTP association mode················································································································ 108
Configuring NTP in client/server mode ·································································································· 108
Configuring NTP in symmetric active/passive mode ·············································································· 108
Configuring NTP in broadcast mode ······································································································ 109
Configuring NTP in multicast mode········································································································ 110
Configuring the local clock as the reference source ······················································································ 110
Configuring access control rights ··················································································································· 111
Configuring NTP authentication ····················································································································· 111
Configuring NTP authentication in client/server mode ··········································································· 111
Configuring NTP authentication in symmetric active/passive mode ······················································ 113
Configuring NTP authentication in broadcast mode··············································································· 114
Configuring NTP authentication in multicast mode ················································································ 116
Controlling NTP message sending and receiving ·························································································· 117
Specifying a source address for NTP messages ··················································································· 117
Disabling an interface from receiving NTP messages ··········································································· 118
Configuring the maximum number of dynamic associations ·································································· 118
Setting a DSCP value for NTP packets·································································································· 119
Specifying the NTP time-offset thresholds for log and trap outputs ······························································· 119
Display and maintenance commands for NTP ······························································································· 120
NTP configuration examples ·························································································································· 120
Example: Configuring NTP client/server association mode ··································································· 120
Example: Configuring IPv6 NTP client/server association mode ··························································· 121
Example: Configuring NTP symmetric active/passive association mode··············································· 123
Example: Configuring IPv6 NTP symmetric active/passive association mode ····································· 124
Example: Configuring NTP broadcast association mode ······································································· 125
Example: Configuring NTP multicast association mode ········································································ 127
Example: Configuring IPv6 NTP multicast association mode ································································ 130
Example: Configuring NTP authentication in client/server association mode ········································ 133
Example: Configuring NTP authentication in broadcast association mode············································ 135
Configuring SNTP ······················································································ 138
About SNTP ··················································································································································· 138
SNTP working mode ······························································································································ 138
Protocols and standards ························································································································ 138
Restrictions and guidelines: SNTP configuration ··························································································· 138
SNTP tasks at a glance·································································································································· 138
Enabling the SNTP service ···························································································································· 138
Specifying an NTP server for the device ········································································································ 139
Configuring SNTP authentication ··················································································································· 139
Specifying the SNTP time-offset thresholds for log and trap outputs····························································· 140
Display and maintenance commands for SNTP ···························································································· 140
SNTP configuration examples ······················································································································· 141
Example: Configuring SNTP ·················································································································· 141
Configuring PoE ························································································ 143
About PoE ······················································································································································ 143
PoE system ············································································································································ 143
AI-driven PoE ········································································································································· 143
Protocols and standards ························································································································ 144
Restrictions: Hardware compatibility with PoE ······························································································· 144
Restrictions and guidelines: PoE configuration ······························································································ 144
PoE configuration tasks at a glance ··············································································································· 145
Prerequisites for configuring PoE ·················································································································· 145
Enabling PoE for a PI ····································································································································· 145
Enabling AI-driven PoE ·································································································································· 146
Configuring PoE delay ··································································································································· 146
Disabling PoE on shutdown interfaces ·········································································································· 147
Enabling nonstandard PD detection··············································································································· 147
Configuring the PoE power ···························································································································· 148
Configuring the maximum PSE power ··································································································· 148
Configuring the maximum PI power ······································································································· 148
Configuring the PI priority policy ···················································································································· 149
Configuring PSE power monitoring ················································································································ 149
Configuring a PI by using a PoE profile ········································································································· 150
Restrictions and guidelines ···················································································································· 150
Configuring a PoE profile ······················································································································· 150
Applying a PoE profile ···························································································································· 150
Upgrading PSE firmware in service ··············································································································· 151
Enabling forced power supply ························································································································ 151
Display and maintenance commands for PoE ······························································································· 152
PoE configuration examples ·························································································································· 152
Example: Configuring PoE ····················································································································· 152
Troubleshooting PoE······································································································································ 153
Failure to set the priority of a PI to critical ······························································································ 153
Failure to apply a PoE profile to a PI······································································································ 154
Configuring SNMP ····················································································· 155
About SNMP ·················································································································································· 155
SNMP framework ··································································································································· 155
MIB and view-based MIB access control ······························································································· 155
SNMP operations ··································································································································· 156
Protocol versions···································································································································· 156
Access control modes ···························································································································· 156
FIPS compliance ············································································································································ 156
SNMP tasks at a glance ································································································································· 157
Enabling the SNMP agent ······························································································································ 157
Enabling SNMP versions ······························································································································· 157
Configuring SNMP common parameters ······································································································· 158
Configuring an SNMPv1 or SNMPv2c community ························································································· 159
About configuring an SNMPv1 or SNMPv2c community ······································································· 159
Restrictions and guidelines for configuring an SNMPv1 or SNMPv2c community································· 159
Configuring an SNMPv1/v2c community by a community name ··························································· 159
Configuring an SNMPv1/v2c community by creating an SNMPv1/v2c user ·········································· 160
Configuring an SNMPv3 group and user ······································································································· 160
Restrictions and guidelines for configuring an SNMPv3 group and user ··············································· 160
Configuring an SNMPv3 group and user in non-FIPS mode ································································· 161
Configuring an SNMPv3 group and user in FIPS mode········································································· 162
Configuring SNMP notifications ····················································································································· 162
About SNMP notifications ······················································································································ 162
Enabling SNMP notifications ·················································································································· 162
Configuring parameters for sending SNMP notifications ······································································· 163
Configuring SNMP logging ····························································································································· 165
Setting the interval at which the SNMP module examines the system configuration for changes ················· 165
Display and maintenance commands for SNMP ··························································································· 166
SNMP configuration examples ······················································································································· 167
Example: Configuring SNMPv1/SNMPv2c····························································································· 167
Example: Configuring SNMPv3·············································································································· 168
Configuring RMON ···················································································· 171
About RMON ·················································································································································· 171
RMON working mechanism ··················································································································· 171
RMON groups ········································································································································ 171
Sample types for the alarm group and the private alarm group ····························································· 173
Protocols and standards ························································································································ 173
Configuring the RMON statistics function ······································································································ 173
About the RMON statistics function ······································································································· 173
Creating an RMON Ethernet statistics entry ·························································································· 173
Creating an RMON history control entry ································································································ 173
Configuring the RMON alarm function ··········································································································· 174
Display and maintenance commands for RMON ··························································································· 175
RMON configuration examples ······················································································································ 176
Example: Configuring the Ethernet statistics function ············································································ 176
Example: Configuring the history statistics function ··············································································· 176
Example: Configuring the alarm function ······························································································· 177
Configuring the Event MIB ········································································· 180
About the Event MIB ······································································································································ 180
Trigger ···················································································································································· 180
Monitored objects ··································································································································· 180
Trigger test ············································································································································· 180
Event actions·········································································································································· 181
Object list ··············································································································································· 181
Object owner ·········································································································································· 182
Restrictions and guidelines: Event MIB configuration ···················································································· 182
Event MIB tasks at a glance··························································································································· 182
Prerequisites for configuring the Event MIB ··································································································· 182
Configuring the Event MIB global sampling parameters ················································································ 183
Configuring Event MIB object lists ················································································································· 183
Configuring an event ······································································································································ 183
Creating an event ··································································································································· 183
Configuring a set action for an event ····································································································· 184
Configuring a notification action for an event ························································································· 184
Enabling the event ································································································································· 185
Configuring a trigger······································································································································· 185
Creating a trigger and configuring its basic parameters········································································· 185
Configuring a Boolean trigger test·········································································································· 186
Configuring an existence trigger test······································································································ 186
Configuring a threshold trigger test ········································································································ 187
Enabling trigger sampling······················································································································· 188
Enabling SNMP notifications for the Event MIB module ················································································ 188
Display and maintenance commands for Event MIB ····················································································· 189
Event MIB configuration examples················································································································· 189
Example: Configuring an existence trigger test······················································································ 189
Example: Configuring a Boolean trigger test·························································································· 191
Example: Configuring a threshold trigger test ························································································ 194
Configuring NETCONF ·············································································· 197
About NETCONF ··········································································································································· 197
NETCONF structure ······························································································································· 197
NETCONF message format ··················································································································· 197
How to use NETCONF ··························································································································· 199
Protocols and standards ························································································································ 199
FIPS compliance ············································································································································ 199
NETCONF tasks at a glance ·························································································································· 199
Establishing a NETCONF session ················································································································· 200
Restrictions and guidelines for NETCONF session establishment ························································ 200
Setting NETCONF session attributes····································································································· 200
Establishing NETCONF over SOAP sessions ······················································································· 202
Establishing NETCONF over SSH sessions ·························································································· 203
Establishing NETCONF over Telnet or NETCONF over console sessions ··········································· 203
Exchanging capabilities·························································································································· 204
Retrieving device configuration information ··································································································· 204
Restrictions and guidelines for device configuration retrieval ································································ 204
Retrieving device configuration and state information ··········································································· 205
Retrieving non-default settings··············································································································· 207
Retrieving NETCONF information ·········································································································· 208
Retrieving YANG file content ················································································································· 208
Retrieving NETCONF session information····························································································· 209
Example: Retrieving a data entry for the interface table ········································································ 209
Example: Retrieving non-default configuration data ·············································································· 211
Example: Retrieving syslog configuration data ······················································································ 212
Example: Retrieving NETCONF session information············································································· 213
Filtering data ·················································································································································· 214
About data filtering ································································································································· 214
Restrictions and guidelines for data filtering ·························································································· 214
Table-based filtering······························································································································· 214
Column-based filtering ··························································································································· 215
Example: Filtering data with regular expression match·········································································· 217
Example: Filtering data by conditional match························································································· 219
Locking or unlocking the running configuration ······························································································ 220
About configuration locking and unlocking ····························································································· 220
Restrictions and guidelines for configuration locking and unlocking ······················································ 220
Locking the running configuration ·········································································································· 220
Unlocking the running configuration ······································································································· 220
Example: Locking the running configuration ·························································································· 221
Modifying the configuration ···························································································································· 222
About the <edit-config> operation ·········································································································· 222
Procedure··············································································································································· 222
Example: Modifying the configuration ···································································································· 223
Saving the running configuration ··················································································································· 224
About the <save> operation ··················································································································· 224
Restrictions and guidelines ···················································································································· 224
Procedure··············································································································································· 224
Example: Saving the running configuration···························································································· 225
Loading the configuration ······························································································································· 226
About the <load> operation ···················································································································· 226
Restrictions and guidelines ···················································································································· 226
Procedure··············································································································································· 226
Rolling back the configuration ························································································································ 226
Restrictions and guidelines ···················································································································· 226
Rolling back the configuration based on a configuration file ·································································· 227
Rolling back the configuration based on a rollback point ······································································· 227
Performing CLI operations through NETCONF······························································································ 231
About CLI operations through NETCONF ······························································································ 231
Restrictions and guidelines ···················································································································· 231
Procedure··············································································································································· 231
Example: Performing CLI operations ····································································································· 232
Subscribing to events ····································································································································· 233
About event subscription ························································································································ 233
Restrictions and guidelines ···················································································································· 233
Subscribing to syslog events·················································································································· 233
Subscribing to events monitored by NETCONF····················································································· 234
Subscribing to events reported by modules ··························································································· 235
Canceling an event subscription ············································································································ 236
Example: Subscribing to syslog events·································································································· 237
Terminating NETCONF sessions ··················································································································· 238
About NETCONF session termination ··································································································· 238
Procedure··············································································································································· 238
Example: Terminating another NETCONF session ··············································································· 238
Returning to the CLI ······································································································································· 239
Supported NETCONF operations ······························································ 240
action······················································································································································ 240
CLI·························································································································································· 240
close-session ········································································································································· 241
edit-config: create··································································································································· 241
edit-config: delete ··································································································································· 242
edit-config: merge ·································································································································· 242
edit-config: remove································································································································· 242
edit-config: replace ································································································································· 243
edit-config: test-option ···················································································································· 243
edit-config: default-operation·················································································································· 244
edit-config: error-option ·························································································································· 245
edit-config: incremental ·························································································································· 246
get ·························································································································································· 246
get-bulk ·················································································································································· 247
get-bulk-config········································································································································ 247
get-config ··············································································································································· 248
get-sessions ··········································································································································· 248
kill-session·············································································································································· 248
load ························································································································································ 249
lock ························································································································································· 249
rollback ··················································································································································· 249
save························································································································································ 250
unlock ····················································································································································· 250
Configuring CWMP ···················································································· 251
About CWMP ················································································································································· 251
CWMP network framework ···················································································································· 251
Basic CWMP functions··························································································································· 251
How CWMP works ································································································································· 253
Restrictions and guidelines: CWMP configuration ························································································· 255
CWMP tasks at a glance ································································································································ 255
Enabling CWMP from the CLI ························································································································ 256
Configuring ACS attributes····························································································································· 256
About ACS attributes······························································································································ 256
Configuring the preferred ACS attributes ······························································································· 256
Configuring the default ACS attributes from the CLI ·············································································· 257
Configuring CPE attributes····························································································································· 258
About CPE attributes······························································································································ 258
Specifying an SSL client policy for HTTPS connection to ACS ····························································· 258
Configuring ACS authentication parameters ·························································································· 258
Configuring the provision code··············································································································· 259
Configuring the CWMP connection interface ························································································· 259
Configuring autoconnect parameters ····································································································· 260
Setting the close-wait timer ···················································································································· 261
Display and maintenance commands for CWMP ·························································································· 261
CWMP configuration examples ······················································································································ 261
Example: Configuring CWMP ················································································································ 261
Configuring EAA ························································································ 270
About EAA······················································································································································ 270
EAA framework ······································································································································ 270
Elements in a monitor policy ·················································································································· 271
EAA environment variables ···················································································································· 272
Configuring a user-defined EAA environment variable ·················································································· 273
Configuring a monitor policy··························································································································· 274
Restrictions and guidelines ···················································································································· 274
Configuring a monitor policy from the CLI ······························································································ 274
Configuring a monitor policy by using Tcl ······························································································ 275
Suspending monitor policies ·························································································································· 276
Display and maintenance commands for EAA ······························································································· 277
EAA configuration examples ·························································································································· 277
Example: Configuring a CLI event monitor policy by using Tcl ······························································ 277
Example: Configuring a CLI event monitor policy from the CLI ····························································· 278
Example: Configuring a track event monitor policy from the CLI ··························································· 279
Example: Configuring a CLI event monitor policy with EAA environment variables from the CLI ·········· 281
Monitoring and maintaining processes ······················································· 283
About monitoring and maintaining processes ································································································ 283
Process monitoring and maintenance tasks at a glance ················································································ 283
Starting or stopping a third-party process ······································································································ 283
About third-party processes ··················································································································· 283
Starting a third-party process ················································································································· 284
Stopping a third-party process ··············································································································· 284
Monitoring and maintaining processes ·········································································································· 284
Monitoring and maintaining user processes ·································································································· 285
About monitoring and maintaining user processes ················································································ 285
Configuring core dump ··························································································································· 285
Display and maintenance commands for user processes······································································ 286
Monitoring and maintaining kernel threads ···································································································· 286
Configuring kernel thread deadloop detection ······················································································· 286
Configuring kernel thread starvation detection······················································································· 287
Display and maintenance commands for kernel threads ······································································· 287
Configuring port mirroring ·········································································· 289
About port mirroring ······································································································································· 289
Terminology ··········································································································································· 289
Port mirroring classification ···················································································································· 290
Local port mirroring (SPAN) ··················································································································· 290
Layer 2 remote port mirroring (RSPAN) ································································································· 290
Restrictions and guidelines: Port mirroring configuration ··············································································· 292
Configuring local port mirroring (SPAN) ········································································································· 292
Restrictions and guidelines for local port mirroring configuration··························································· 292
Local port mirroring tasks at a glance ···································································································· 292
Creating a local mirroring group ············································································································· 293
Configuring mirroring sources ················································································································ 293
Configuring the monitor port··················································································································· 293
Configuring a local port mirroring group with multiple monitoring devices ·································· 294
Configuring Layer 2 remote port mirroring (RSPAN) ····················································································· 295
Restrictions and guidelines for Layer 2 remote port mirroring configuration ·········································· 295
Layer 2 remote port mirroring with a fixed reflector port configuration tasks at a glance ······················· 296
Layer 2 remote port mirroring with reflector port configuration task list ················································· 296
Layer 2 remote port mirroring with egress port configuration task list···················································· 296
Creating a remote destination group ······································································································ 297
Configuring the monitor port··················································································································· 297
Configuring the remote probe VLAN ······································································································ 298
Assigning the monitor port to the remote probe VLAN··········································································· 298
Creating a remote source group ············································································································ 298
Configuring mirroring sources ················································································································ 299
Configuring the reflector port·················································································································· 299
Configuring the egress port ···················································································································· 300
Display and maintenance commands for port mirroring ················································································ 301
Port mirroring configuration examples ··········································································································· 301
Example: Configuring local port mirroring ······························································································ 301
Example: Configuring local port mirroring with multiple monitoring devices ·········································· 302
Example: Configuring Layer 2 remote port mirroring (RSPAN with reflector port) ································· 303
Example: Configuring Layer 2 remote port mirroring (RSPAN with egress port) ··································· 306
Configuring flow mirroring ·········································································· 309
About flow mirroring ······································································································································· 309
Types of flow-mirroring traffic to an interface ································································································· 309
Flow mirroring SPAN or RSPAN ············································································································ 309
Flow mirroring ERSPAN························································································································· 310
Restrictions and guidelines: Flow mirroring configuration ·············································································· 311
Flow mirroring tasks at a glance ···················································································································· 311
Configuring a traffic class ······························································································································· 311
Configuring a traffic behavior ························································································································· 312
Configuring a QoS policy ······························································································································· 312
Applying a QoS policy ···································································································································· 313
Applying a QoS policy to an interface ···································································································· 313
Applying a QoS policy to a VLAN··········································································································· 313
Applying a QoS policy globally ··············································································································· 313
Flow mirroring configuration examples ·········································································································· 314
Example: Configuring flow mirroring ······································································································ 314
Configuring sFlow ······················································································ 316
About sFlow ··················································································································································· 316
Protocols and standards ································································································································ 316
Configuring basic sFlow information ·············································································································· 316
Configuring flow sampling ······························································································································ 317
Configuring counter sampling ························································································································ 318
Display and maintenance commands for sFlow ···························································································· 318
sFlow configuration examples ························································································································ 318
Example: Configuring sFlow ·················································································································· 318
Troubleshooting sFlow ··································································································································· 320
The remote sFlow collector cannot receive sFlow packets ···································································· 320
Configuring the information center ····························································· 321
About the information center ·························································································································· 321
Log types················································································································································ 321
Log levels ··············································································································································· 321
Log destinations ····································································································································· 322
Default output rules for logs ··················································································································· 322
Default output rules for diagnostic logs ·································································································· 322
Default output rules for security logs ······································································································ 322
Default output rules for hidden logs ······································································································· 323
Default output rules for trace logs ·········································································································· 323
Log formats and field descriptions ········································································································· 323
FIPS compliance ············································································································································ 326
Information center tasks at a glance ·············································································································· 326
Managing standard system logs ············································································································ 326
Managing hidden logs ···························································································································· 327
Managing security logs ·························································································································· 327
Managing diagnostic logs······················································································································· 327
Managing trace logs ······························································································································· 328
Enabling the information center ····················································································································· 328
Outputting logs to various destinations ·········································································································· 328
Outputting logs to the console················································································································ 328
Outputting logs to the monitor terminal ·································································································· 329
Outputting logs to log hosts···················································································································· 330
Outputting logs to the log buffer ············································································································· 331
Saving logs to the log file ······················································································································· 331
Setting the minimum storage period for logs ································································································· 332
Enabling synchronous information output ······································································································ 333
Configuring log suppression··························································································································· 333
Enabling duplicate log suppression········································································································ 333
Configuring log suppression for a module······························································································ 334
Disabling an interface from generating link up or link down logs ··························································· 334
Enabling SNMP notifications for system logs································································································· 334
Managing security logs ·································································································································· 335
Saving security logs to the security log file ···························································································· 335
Managing the security log file················································································································· 336
Saving diagnostic logs to the diagnostic log file ····························································································· 336
Setting the maximum size of the trace log file································································································ 337
Display and maintenance commands for information center ········································································· 337
Information center configuration examples ···································································································· 338
Example: Outputting logs to the console································································································ 338
Example: Outputting logs to a UNIX log host ························································································· 339
Example: Outputting logs to a Linux log host ························································································· 340
Configuring the packet capture ·································································· 342
About packet capture ····································································································································· 342
Building a capture filter rule···························································································································· 342
Capture filter rule keywords ··················································································································· 342
Capture filter rule operators ··················································································································· 343
Capture filter rule expressions ··············································································································· 345
Building a display filter rule ···························································································································· 346
Display filter rule keywords ···················································································································· 346
Display filter rule operators ···················································································································· 347
Display filter rule expressions ················································································································ 348
Restrictions and guidelines: Packet capture ·································································································· 349
Installing feature image ·································································································································· 349
Configuring feature image-based packet capture ·························································································· 349
Restrictions and guidelines ···················································································································· 349
Saving captured packets to a file ··········································································································· 349
Displaying specific captured packets ····································································································· 349
Stopping packet capture ································································································································ 350
Displaying the contents in a packet file ·········································································································· 350
Packet capture configuration examples ········································································································· 350
Example: Configuring feature image-based packet capture ·································································· 350
Configuring VCF fabric ·············································································· 354
About VCF fabric ············································································································································ 354
VCF fabric topology································································································································ 354
Neutron overview ··································································································································· 356
Automated VCF fabric deployment ········································································································ 358
Process of automated VCF fabric deployment······················································································· 359
Template file··········································································································································· 359
VCF fabric task at a glance ···························································································································· 360
Configuring automated VCF fabric deployment ····························································································· 360
Enabling VCF fabric topology discovery ········································································································ 361
Configuring automated underlay network deployment ··················································································· 362
Specifying the template file for automated underlay network deployment ·················································· 362
Specifying the role of the device in the VCF fabric ················································································ 362
Configuring the device as a master spine node ····················································································· 363
Setting the NETCONF username and password ··················································································· 363
Pausing automated underlay network deployment ················································································ 363
Configuring automated overlay network deployment ····················································································· 364
Restrictions and guidelines for automated overlay network deployment ··············································· 364
Automated overlay network deployment tasks at a glance ···································································· 364
Prerequisites for automated overlay network deployment ····································································· 365
Configuring parameters for the device to communicate with RabbitMQ servers ··································· 365
Specifying the network type ··················································································································· 366
Enabling L2 agent ·································································································································· 366
Enabling L3 agent ·································································································································· 367
Configuring the border node ·················································································································· 367
Enabling local proxy ARP······················································································································· 368
Configuring the MAC address of VSI interfaces····················································································· 368
Display and maintenance commands for VCF fabric ····················································································· 369
Using Ansible for automated configuration management ··························· 370
About Ansible ················································································································································· 370
Ansible network architecture ·················································································································· 370
How Ansible works ································································································································· 370
Restrictions and guidelines ···························································································································· 370
Configuring the device for management with Ansible ···················································································· 371
Device setup examples for management with Ansible ·················································································· 371
Example: Setting up the device for management with Ansible ······························································ 371
Configuring Chef ························································································ 373
About Chef ····················································································································································· 373
Chef network framework ························································································································ 373
Chef resources ······································································································································· 374
Chef configuration file ···························································································································· 374
Restrictions and guidelines: Chef configuration ····························································································· 375
Prerequisites for Chef ···································································································································· 376
Starting Chef ·················································································································································· 376
Configuring the Chef server ··················································································································· 376
Configuring a workstation······················································································································· 376
Configuring a Chef client ························································································································ 376
Shutting down Chef ········································································································································ 377
Chef configuration examples·························································································································· 377
Example: Configuring Chef ···················································································································· 377
Chef resources ·························································································· 380
netdev_device ················································································································································ 380
netdev_interface············································································································································· 380
netdev_l2_interface ········································································································································ 382
netdev_l2vpn ·················································································································································· 383
netdev_lagg···················································································································································· 383
netdev_vlan ···················································································································································· 384
netdev_vsi ······················································································································································ 385
netdev_vte······················································································································································ 386
netdev_vxlan ·················································································································································· 386
Configuring Puppet ···················································································· 388
About Puppet ················································································································································· 388
Puppet network framework ···················································································································· 388
Puppet resources ··································································································································· 389
Restrictions and guidelines: Puppet configuration ························································································· 389
Prerequisites for Puppet································································································································· 389
Starting Puppet ·············································································································································· 390
Configuring resources ···························································································································· 390
Configuring a Puppet agent ··················································································································· 390
Signing a certificate for the Puppet agent ······························································································ 390
Shutting down Puppet on the device ············································································································· 390
Puppet configuration examples ······················································································································ 391
Example: Configuring Puppet ················································································································ 391
Puppet resources······················································································· 392
netdev_device ················································································································································ 392
netdev_interface············································································································································· 393
netdev_l2_interface ········································································································································ 394
netdev_l2vpn ·················································································································································· 395
netdev_lagg···················································································································································· 396
netdev_vlan ···················································································································································· 397
netdev_vsi ······················································································································································ 398
netdev_vte······················································································································································ 398
netdev_vxlan ·················································································································································· 399
Configuring EPS agent ·············································································· 401
About EPS and EPS agent ···························································································································· 401
Restrictions: Hardware compatibility with EPS agent ···················································································· 401
Configuring EPS agent··································································································································· 402
Display and maintenance commands for EPS agent ····················································································· 402
EPS agent configuration examples ················································································································ 402
Example: Configuring EPS agent··········································································································· 402
Configuring eMDI ······················································································· 404
About eMDI ···················································································································································· 404
eMDI implementation ····························································································································· 404
Monitored items······································································································································ 405
Restrictions and guidelines ···························································································································· 408
Prerequisites ·················································································································································· 409
Configuring eMDI ·············································································································· 409
Configuring an eMDI instance ················································································································ 409
Starting an eMDI instance ······················································································································ 410
Stopping an eMDI instance ···················································································································· 410
Display and maintenance commands for eMDI ····························································································· 411
eMDI configuration examples ························································································································ 411
Example: Configuring an eMDI instance for a UDP data flow································································ 411
Configuring SQA ························································································ 413
About SQA ····················································································································································· 413
Restrictions and guidelines: SQA configuration ····························································································· 413
Configuring SQA ············································································································································ 413
Display and maintenance commands for SQA ······························································································ 413
SQA configuration examples·························································································································· 414
Example: Configuring SQA ···················································································································· 414
Document conventions and icons ······························································ 416
Conventions ··················································································································································· 416
Network topology icons ·································································································································· 417
Support and other resources ····································································· 418
Accessing Hewlett Packard Enterprise Support····························································································· 418
Accessing updates ········································································································································· 418
Websites ················································································································································ 419
Customer self repair ······························································································································· 419
Remote support······································································································································ 419
Documentation feedback ······················································································································· 419
Index·········································································································· 421
Using ping, tracert, and system debugging
This chapter covers the ping utility, the tracert utility, and system debugging.
Ping
About ping
Use the ping utility to determine if an address is reachable.
Ping sends ICMP echo requests (ECHO-REQUEST) to the destination device. Upon receiving the
requests, the destination device responds with ICMP echo replies (ECHO-REPLY) to the source
device. The source device outputs statistics about the ping operation, including the number of
packets sent, number of echo replies received, and the round-trip time. You can measure the
network performance by analyzing these statistics.
You can use the ping -r command to display the routers through which the ICMP echo requests have
passed. The test procedure of ping -r is shown in Figure 1:
1. The source device (Device A) sends an ICMP echo request to the destination device (Device C)
with the RR option empty.
2. The intermediate device (Device B) adds the IP address of its outbound interface (1.1.2.1) to
the RR option of the ICMP echo request, and forwards the packet.
3. Upon receiving the request, the destination device copies the RR option in the request and
adds the IP address of its outbound interface (1.1.2.2) to the RR option. Then the destination
device sends an ICMP echo reply.
4. The intermediate device adds the IP address of its outbound interface (1.1.1.2) to the RR option
in the ICMP echo reply, and then forwards the reply.
5. Upon receiving the reply, the source device adds the IP address of its inbound interface (1.1.1.1)
to the RR option. The detailed information of routes from Device A to Device C is formatted as:
1.1.1.1 <-> {1.1.1.2; 1.1.2.1} <-> 1.1.2.2.
Figure 1 Ping operation
(Device A at 1.1.1.1/24 connects to Device B at 1.1.1.2/24; Device B at 1.1.2.1/24 connects to Device C at 1.1.2.2/24. The echo request leaves Device A with an empty RR option. The RR option accumulates 1.1.2.1, 1.1.2.2, 1.1.1.2, and 1.1.1.1 as the request and reply traverse the path.)
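The five steps above can be sketched as a small simulation. This is an illustrative model only, not an HPE tool; the function name and structure are hypothetical, and the addresses come from Figure 1.

```python
def record_route(forward_hops, reverse_hops, source_inbound):
    """Accumulate the RR option as the echo request travels out
    and the echo reply travels back (steps 1-5 above)."""
    rr = []
    for ip in forward_hops:      # each hop stamps its outbound interface on the request
        rr.append(ip)
    for ip in reverse_hops:      # return-path hops stamp the reply
        rr.append(ip)
    rr.append(source_inbound)    # the source stamps its inbound interface on receipt
    return rr

# Topology from Figure 1: Device A -> Device B -> Device C and back.
path = record_route(
    forward_hops=["1.1.2.1", "1.1.2.2"],   # Device B, then Device C
    reverse_hops=["1.1.1.2"],              # Device B on the way back
    source_inbound="1.1.1.1",              # Device A
)
print(path)  # ['1.1.2.1', '1.1.2.2', '1.1.1.2', '1.1.1.1']
```

The final list matches the RR option contents shown in Figure 1 (1st through 4th entries).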
• Determine if an IPv4 address is reachable.
ping [ ip ] [ -a source-ip | -c count | -f | -h ttl | -i interface-type
interface-number | -m interval | -n | -p pad | -q | -r | -s packet-size | -t
timeout | -tos tos | -v | -vpn-instance vpn-instance-name ] * host
Increase the timeout time (indicated by the -t keyword) on a low-speed network.
• Determine if an IPv6 address is reachable.
ping ipv6 [ -a source-ipv6 | -c count | -i interface-type
interface-number | -m interval | -q | -s packet-size | -t timeout | -tc
traffic-class | -v | -vpn-instance vpn-instance-name ] * host
Increase the timeout time (indicated by the -t keyword) on a low-speed network.
Procedure
# Test the connectivity between Device A and Device C.
<DeviceA> ping 1.1.2.2
Ping 1.1.2.2 (1.1.2.2): 56 data bytes, press CTRL+C to break
56 bytes from 1.1.2.2: icmp_seq=0 ttl=254 time=2.137 ms
56 bytes from 1.1.2.2: icmp_seq=1 ttl=254 time=2.051 ms
56 bytes from 1.1.2.2: icmp_seq=2 ttl=254 time=1.996 ms
56 bytes from 1.1.2.2: icmp_seq=3 ttl=254 time=1.963 ms
56 bytes from 1.1.2.2: icmp_seq=4 ttl=254 time=1.991 ms
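The statistics mentioned in "About ping" can be computed from this output. The following sketch is a hypothetical post-processing script, not part of the device; the regex and the rounding choice are illustrative assumptions.

```python
import re

# The five reply lines from the ping transcript above.
output = """\
56 bytes from 1.1.2.2: icmp_seq=0 ttl=254 time=2.137 ms
56 bytes from 1.1.2.2: icmp_seq=1 ttl=254 time=2.051 ms
56 bytes from 1.1.2.2: icmp_seq=2 ttl=254 time=1.996 ms
56 bytes from 1.1.2.2: icmp_seq=3 ttl=254 time=1.963 ms
56 bytes from 1.1.2.2: icmp_seq=4 ttl=254 time=1.991 ms
"""

# Extract each round-trip time and summarize as (min, avg, max) in ms.
rtts = [float(m) for m in re.findall(r"time=([\d.]+) ms", output)]
stats = (min(rtts), round(sum(rtts) / len(rtts), 3), max(rtts))
print(stats)  # (1.963, 2.028, 2.137)
```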
Tracert
About tracert
Tracert (also called Traceroute) enables retrieval of the IP addresses of Layer 3 devices in the path
to a destination. In the event of network failure, use tracert to test network connectivity and identify
failed nodes.
Figure 3 Tracert operation
(Devices A through D; interface addresses 1.1.1.1/24, 1.1.2.1/24, and 1.1.3.1/24 are shown. Probes with Hop Limit=1 and Hop Limit=2 elicit TTL exceeded messages from Device B and Device C; the probe with Hop Limit=n reaches Device D, which returns a UDP port unreachable message.)
Tracert uses received ICMP error messages to get the IP addresses of devices. Tracert works as
shown in Figure 3:
1. The source device sends a UDP packet with a TTL value of 1 to the destination device. The
destination UDP port is not used by any application on the destination device.
2. The first hop (Device B, the first Layer 3 device that receives the packet) responds by sending a
TTL-expired ICMP error message to the source, with its IP address (1.1.1.2) encapsulated. This
way, the source device can get the address of the first Layer 3 device (1.1.1.2).
3. The source device sends a packet with a TTL value of 2 to the destination device.
4. The second hop (Device C) responds with a TTL-expired ICMP error message, which gives the
source device the address of the second Layer 3 device (1.1.2.2).
5. This process continues until a packet sent by the source device reaches the ultimate
destination device. Because no application uses the destination port specified in the packet, the
destination device responds with a port-unreachable ICMP message to the source device, with
its IP address encapsulated. This way, the source device gets the IP address of the destination
device (1.1.3.2).
6. The source device determines that:
- The packet has reached the destination device after receiving the port-unreachable ICMP message.
- The path to the destination device is 1.1.1.2 to 1.1.2.2 to 1.1.3.2.
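The TTL-probing loop described in these steps can be sketched as a simulation. This is an illustrative model only; the function name is hypothetical, and the hop and destination addresses come from the example above.

```python
def tracert(hops, destination):
    """Model the tracert algorithm: probes with increasing TTL elicit
    a TTL-expired ICMP message from each hop in turn, and a
    port-unreachable ICMP message from the destination.

    hops: interface addresses of intermediate Layer 3 devices, in order.
    """
    path = []
    ttl = 1
    while True:
        if ttl <= len(hops):
            # TTL expires at this hop; the hop reports its address.
            path.append(hops[ttl - 1])
        else:
            # The probe reaches the destination. No application listens
            # on the probe's UDP port, so it replies port-unreachable.
            path.append(destination)
            return path
        ttl += 1

print(tracert(["1.1.1.2", "1.1.2.2"], "1.1.3.2"))
# ['1.1.1.2', '1.1.2.2', '1.1.3.2']
```

The returned path matches step 6: 1.1.1.2 to 1.1.2.2 to 1.1.3.2.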
Prerequisites
Before you use a tracert command, perform the tasks in this section.
For an IPv4 network:
• Enable sending of ICMP timeout packets on the intermediate devices (devices between the
source and destination devices). If the intermediate devices are HPE devices, execute the ip
ttl-expires enable command on the devices. For more information about this command,
see Layer 3—IP Services Command Reference.
• Enable sending of ICMP destination unreachable packets on the destination device. If the
destination device is an HPE device, execute the ip unreachables enable command. For
more information about this command, see Layer 3—IP Services Command Reference.
For an IPv6 network:
• Enable sending of ICMPv6 timeout packets on the intermediate devices (devices between the
source and destination devices). If the intermediate devices are HPE devices, execute the
ipv6 hoplimit-expires enable command on the devices. For more information about
this command, see Layer 3—IP Services Command Reference.
• Enable sending of ICMPv6 destination unreachable packets on the destination device. If the
destination device is an HPE device, execute the ipv6 unreachables enable command.
For more information about this command, see Layer 3—IP Services Command Reference.
Procedure
1. Configure IP addresses for the devices as shown in Figure 4.
2. Configure a static route on Device A.
<DeviceA> system-view
[DeviceA] ip route-static 0.0.0.0 0.0.0.0 1.1.1.2
3. Test connectivity between Device A and Device C.
[DeviceA] ping 1.1.2.2
Ping 1.1.2.2 (1.1.2.2): 56 data bytes, press CTRL+C to break
Request time out
Request time out
Request time out
Request time out
Request time out
4. Identify failed nodes:
# Enable sending of ICMP timeout packets on Device B.
<DeviceB> system-view
[DeviceB] ip ttl-expires enable
# Enable sending of ICMP destination unreachable packets on Device C.
<DeviceC> system-view
[DeviceC] ip unreachables enable
# Identify failed nodes.
[DeviceA] tracert 1.1.2.2
traceroute to 1.1.2.2 (1.1.2.2) 30 hops at most, 40 bytes each packet, press CTRL+C to break
1 1.1.1.2 (1.1.1.2) 1 ms 2 ms 1 ms
2 * * *
3 * * *
4 * * *
5
[DeviceA]
The output shows that Device A can reach Device B but cannot reach Device C. An error has
occurred on the connection between Device B and Device C.
5. To identify the cause of the issue, execute the following commands on Device A and Device C:
Execute the debugging ip icmp command and verify that Device A and Device C can
send and receive the correct ICMP packets.
Execute the display ip routing-table command to verify that Device A and Device
C have a route to each other.
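When this kind of check is repeated across many devices, the collected tracert output can also be screened programmatically. The following is a minimal Python sketch (a hypothetical helper, not a device feature) that reports the first hop answering every probe with an asterisk:

```python
def first_failed_hop(tracert_output):
    """Return the hop number of the first hop that answered no probes
    (every response field is "*"), or None if all listed hops responded."""
    for line in tracert_output.strip().splitlines():
        fields = line.split()
        if not fields or not fields[0].isdigit():
            continue  # skip banner lines such as "traceroute to ..."
        hop = int(fields[0])
        if all(field == "*" for field in fields[1:]):
            return hop
    return None

# Sample output modeled on the tracert session above.
output = """traceroute to 1.1.2.2 (1.1.2.2) 30 hops at most, 40 bytes each packet
 1 1.1.1.2 (1.1.1.2) 1 ms 2 ms 1 ms
 2 * * *
 3 * * *"""
```

In the sample, hop 2 is the first hop with no responses, which points at the Device B to Device C segment.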
System debugging
About system debugging
The device supports debugging for the majority of protocols and features, and provides debugging
information to help users diagnose errors.
The following switches control the display of debugging information:
• Module debugging switch—Controls whether to generate the module-specific debugging
information.
• Screen output switch—Controls whether to display the debugging information on a certain
screen. Use the terminal monitor and terminal logging level commands to turn on
the screen output switch. For more information about these two commands, see Network
Management and Monitoring Command Reference.
As shown in Figure 5, the device can provide debugging for the three modules 1, 2, and 3. The
debugging information can be output on a terminal only when both the module debugging switch and
the screen output switch are turned on.
Debugging information is typically displayed on a console. You can also send debugging information
to other destinations. For more information, see "Configuring the information center."
Figure 5 Relationship between the module and screen output switch
(The figure shows modules 1, 2, and 3 with their protocol debugging switches set to ON, OFF, and ON. When the screen output switch is OFF, no debugging information is displayed. When the screen output switch is ON, the debugging information of modules 1 and 3 is displayed.)
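The relationship in Figure 5 is a logical AND: a module's debugging information reaches the terminal only when both its own debugging switch and the screen output switch are on. A small illustrative sketch (not device code):

```python
def visible_modules(debugging_switches, screen_output_on):
    """Return the modules whose debugging information appears on the
    terminal. Both the per-module debugging switch and the global
    screen output switch must be on."""
    if not screen_output_on:
        return []
    return [module for module, on in sorted(debugging_switches.items()) if on]
```

With the Figure 5 settings (modules 1 and 3 enabled, module 2 disabled), turning the screen output switch on yields output for modules 1 and 3 only.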
Configuring NQA
About NQA
Network quality analyzer (NQA) allows you to measure network performance, verify the service
levels for IP services and applications, and troubleshoot network problems.
(Figure: the NQA source device, acting as the NQA client, and the NQA destination device are connected across an IP network.)
After starting an NQA operation, the NQA client periodically performs the operation at the interval
specified by using the frequency command.
You can set the number of probes the NQA client performs in an operation by using the probe
count command. For the voice and path jitter operations, the NQA client performs only one probe
per operation and the probe count command is not available.
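Assuming the client simply repeats the operation at each frequency interval and runs the configured number of probes inside each run, the timing can be sketched as:

```python
def probe_start_times(frequency_ms, probe_count, operations):
    """List (operation_start_ms, probe_index) pairs: the operation
    repeats every frequency_ms milliseconds and performs probe_count
    probes within each run."""
    return [(op * frequency_ms, probe)
            for op in range(operations)
            for probe in range(probe_count)]
```

For example, with a frequency of 5000 ms and a probe count of 3, the second operation's probes all belong to the run that starts at the 5000 ms mark.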
3. The Track module notifies the static routing module of the state change.
4. The static routing module sets the static route to invalid according to a predefined action.
For more information about collaboration, see High Availability Configuration Guide.
Threshold monitoring
Threshold monitoring enables the NQA client to take a predefined action when the NQA operation
performance metrics violate the specified thresholds.
Table 1 describes the relationships between performance metrics and NQA operation types.
Table 1 Performance metrics and NQA operation types
Configuring the NQA server
Restrictions and guidelines
To perform TCP, UDP echo, UDP jitter, and voice operations, you must configure the NQA server on
the destination device. The NQA server listens and responds to requests on the specified IP
addresses and ports.
You can configure multiple TCP or UDP listening services on an NQA server, each of
which corresponds to a specific IP address and port number.
The IP address and port number for a listening service must be unique on the NQA server and match
the configuration on the NQA client.
Procedure
1. Enter system view.
system-view
2. Enable the NQA server.
nqa server enable
By default, the NQA server is disabled.
3. Configure a TCP listening service.
nqa server tcp-connect ip-address port-number [ vpn-instance
vpn-instance-name ] [ tos tos ]
This task is required only for TCP and DLSw operations. For the DLSw operation, the port
number for the TCP listening service must be 2065.
4. Configure a UDP listening service.
nqa server udp-echo ip-address port-number [ vpn-instance
vpn-instance-name ] [ tos tos ]
This task is required only for UDP echo, UDP jitter, and voice operations.
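The uniqueness rule for listening services can be pictured as a registry keyed by protocol, address, and port. A hypothetical sketch (not the device implementation):

```python
class ListeningServiceTable:
    """Registry of (protocol, ip, port) listening services. Each tuple
    may be registered only once, mirroring the rule that the IP address
    and port number of a listening service must be unique on the NQA
    server."""

    def __init__(self):
        self._services = set()

    def add(self, protocol, ip, port):
        key = (protocol, ip, port)
        if key in self._services:
            raise ValueError(f"{protocol} service {ip}:{port} already exists")
        self._services.add(key)

    def count(self):
        return len(self._services)
```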
Configuring the DNS operation
Configuring the FTP operation
Configuring the HTTP operation
Configuring the UDP jitter operation
Configuring the SNMP operation
Configuring the TCP operation
Configuring the UDP echo operation
Configuring the UDP tracert operation
Configuring the voice operation
Configuring the DLSw operation
Configuring the path jitter operation
2. (Optional.) Configuring optional parameters for the NQA operation
3. (Optional.) Configuring the collaboration feature
4. (Optional.) Configuring threshold monitoring
5. (Optional.) Configuring the NQA statistics collection feature
6. (Optional.) Configuring the saving of NQA history records
7. Scheduling the NQA operation on the NQA client
The specified source interface must be up.
Specify the source IPv4 address.
source ip ip-address
By default, the source IPv4 address of ICMP echo requests is the primary IPv4 address of
their output interface.
The specified source IPv4 address must be the IPv4 address of a local interface, and the
interface must be up. Otherwise, no probe packets can be sent out.
Specify the source IPv6 address.
source ipv6 ipv6-address
By default, the source IPv6 address of ICMP echo requests is the primary IPv6 address of
their output interface.
The specified source IPv6 address must be the IPv6 address of a local interface, and the
interface must be up. Otherwise, no probe packets can be sent out.
6. Specify the output interface or the next hop IP address for ICMP echo requests. Choose one
option as needed:
Specify the output interface for ICMP echo requests.
out interface interface-type interface-number
By default, the output interface for ICMP echo requests is not specified. The NQA client
determines the output interface based on the routing table lookup.
Specify the next hop IPv4 address.
next-hop ip ip-address
By default, no next hop IPv4 address is specified.
Specify the next hop IPv6 address.
next-hop ipv6 ipv6-address
By default, no next hop IPv6 address is specified.
7. (Optional.) Set the payload size for each ICMP echo request.
data-size size
The default payload size is 100 bytes.
8. (Optional.) Specify the payload fill string for ICMP echo requests.
data-fill string
The default payload fill string is the hexadecimal string 00010203040506070809.
Restrictions and guidelines
The display nqa history command does not display the results or statistics of the ICMP jitter
operation. To view the results or statistics of the operation, use the display nqa result or
display nqa statistics command.
Before starting the operation, make sure the network devices are time synchronized by using NTP.
For more information about NTP, see "Configuring NTP."
Procedure
1. Enter system view.
system-view
2. Create an NQA operation and enter NQA operation view.
nqa entry admin-name operation-tag
3. Specify the ICMP jitter type and enter its view.
type icmp-jitter
4. Specify the destination IP address for ICMP packets.
destination ip ip-address
By default, no destination IP address is specified.
5. Set the number of ICMP packets sent per probe.
probe packet-number number
The default setting is 10.
6. Set the interval for sending ICMP packets.
probe packet-interval interval
The default setting is 20 milliseconds.
7. Specify how long the NQA client waits for a response from the server before it regards the
response as timed out.
probe packet-timeout timeout
The default setting is 3000 milliseconds.
8. Specify the source IP address for ICMP packets.
source ip ip-address
By default, the source IP address of ICMP packets is the primary IP address of their output
interface.
The source IP address must be the IP address of a local interface, and the interface must be up.
Otherwise, no ICMP packets can be sent out.
Procedure
1. Enter system view.
system-view
2. Create an NQA operation and enter NQA operation view.
nqa entry admin-name operation-tag
3. Specify the DHCP type and enter its view.
type dhcp
4. Specify the IP address of the DHCP server as the destination IP address of DHCP packets.
destination ip ip-address
By default, no destination IP address is specified.
5. Specify the output interface for DHCP request packets.
out interface interface-type interface-number
By default, the NQA client determines the output interface based on the routing table lookup.
6. Specify the source IP address of DHCP request packets.
source ip ip-address
By default, the source IP address of DHCP request packets is the primary IP address of their
output interface.
The specified source IP address must be the IP address of a local interface, and the local
interface must be up. Otherwise, no probe packets can be sent out.
Configuring the FTP operation
About this task
The FTP operation measures the time for the NQA client to transfer a file to or download a file from
an FTP server.
The FTP operation uploads a file to or downloads a file from the FTP server per probe.
Restrictions and guidelines
To upload (put) a file to the FTP server, use the filename command to specify the name of the file
you want to upload. The file must exist on the NQA client.
To download (get) a file from the FTP server, include the name of the file you want to download in the
url command. The file must exist on the FTP server. The NQA client does not save the file obtained
from the FTP server.
Use a small file for the FTP operation. A large file might fail to transfer because of timeout, or
might affect other services because of the network bandwidth it occupies.
Procedure
1. Enter system view.
system-view
2. Create an NQA operation and enter NQA operation view.
nqa entry admin-name operation-tag
3. Specify the FTP type and enter its view.
type ftp
4. Specify an FTP login username.
username username
By default, no FTP login username is specified.
5. Specify an FTP login password.
password { cipher | simple } string
By default, no FTP login password is specified.
6. Specify the source IP address for FTP request packets.
source ip ip-address
By default, the source IP address of FTP request packets is the primary IP address of their
output interface.
The source IP address must be the IP address of a local interface, and the interface must be up.
Otherwise, no FTP requests can be sent out.
7. Set the data transmission mode.
mode { active | passive }
The default mode is active.
8. Specify the FTP operation type.
operation { get | put }
The default FTP operation type is get.
9. Specify the destination URL for the FTP operation.
url url
By default, no destination URL is specified for an FTP operation.
Enter the URL in one of the following formats:
ftp://host/filename.
ftp://host:port/filename.
The filename argument is required only for the get operation.
10. Specify the name of the file to be uploaded.
filename file-name
By default, no file is specified.
This task is required only for the put operation.
The configuration does not take effect for the get operation.
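The two accepted URL formats can be taken apart with a standard URL parser. A small illustrative sketch (the helper name is made up):

```python
from urllib.parse import urlparse

def parse_ftp_url(url):
    """Split an ftp://host[:port]/filename URL into (host, port, filename).
    The port defaults to 21. The filename may be empty, in which case the
    put operation takes the name from the filename command instead."""
    parts = urlparse(url)
    if parts.scheme != "ftp":
        raise ValueError("expected an ftp:// URL")
    return parts.hostname, parts.port or 21, parts.path.lstrip("/")
```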
If you set the operation type to raw, the client adds the content configured in raw request view
to the HTTP request that it sends to the HTTP server.
9. Configure the HTTP raw request.
a. Enter raw request view.
raw-request
Every time you enter raw request view, the previously configured raw request content is
cleared.
b. Enter or paste the request content.
By default, no request content is configured.
To ensure successful operations, make sure the request content does not contain
command aliases configured by using the alias command. For more information about
the alias command, see CLI commands in Fundamentals Command Reference.
c. Save the input and return to HTTP operation view:
quit
This step is required only when the operation type is set to raw.
10. Specify the source IP address for the HTTP packets.
source ip ip-address
By default, the source IP address of HTTP packets is the primary IP address of their output
interface.
The source IP address must be the IP address of a local interface, and the interface must be up.
Otherwise, no request packets can be sent out.
Procedure
1. Enter system view.
system-view
2. Create an NQA operation and enter NQA operation view.
nqa entry admin-name operation-tag
3. Specify the UDP jitter type and enter its view.
type udp-jitter
4. Specify the destination IP address for UDP packets.
destination ip ip-address
By default, no destination IP address is specified.
The destination IP address must be the same as the IP address of the UDP listening service
configured on the NQA server. To configure a UDP listening service on the server, use the nqa
server udp-echo command.
5. Specify the destination port number for UDP packets.
destination port port-number
By default, no destination port number is specified.
The destination port number must be the same as the port number of the UDP listening service
configured on the NQA server. To configure a UDP listening service on the server, use the nqa
server udp-echo command.
6. Specify the source IP address for UDP packets.
source ip ip-address
By default, the source IP address of UDP packets is the primary IP address of their output
interface.
The source IP address must be the IP address of a local interface, and the interface must be up.
Otherwise, no UDP packets can be sent out.
7. Specify the source port number for UDP packets.
source port port-number
By default, the NQA client randomly picks an unused port as the source port when the operation
starts.
8. Set the number of UDP packets sent per probe.
probe packet-number number
The default setting is 10.
9. Set the interval for sending UDP packets.
probe packet-interval interval
The default setting is 20 milliseconds.
10. Specify how long the NQA client waits for a response from the server before it regards the
response as timed out.
probe packet-timeout timeout
The default setting is 3000 milliseconds.
11. (Optional.) Set the payload size for each UDP packet.
data-size size
The default payload size is 100 bytes.
12. (Optional.) Specify the payload fill string for UDP packets.
data-fill string
The default payload fill string is the hexadecimal string 00010203040506070809.
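Jitter is commonly derived from how much the one-way delay changes between consecutive packets. The following is a simplified illustration of that idea, not necessarily the device's exact algorithm:

```python
def one_way_jitter(send_times_ms, recv_times_ms):
    """Per-packet jitter: the absolute change in one-way delay between
    consecutive packets. Each delay is receive time minus send time, so
    the two clocks are assumed to be synchronized (for example by NTP)."""
    delays = [recv - send for send, recv in zip(send_times_ms, recv_times_ms)]
    return [abs(later - earlier) for earlier, later in zip(delays, delays[1:])]
```

With packets sent every 20 ms, a sudden increase in one element of the result indicates queuing or path variation for that packet.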
Configuring the SNMP operation
About this task
The SNMP operation tests whether the SNMP service is available on an SNMP agent.
The SNMP operation sends one SNMPv1 packet, one SNMPv2c packet, and one SNMPv3 packet to
the SNMP agent per probe.
Procedure
1. Enter system view.
system-view
2. Create an NQA operation and enter NQA operation view.
nqa entry admin-name operation-tag
3. Specify the SNMP type and enter its view.
type snmp
4. Specify the destination address for SNMP packets.
destination ip ip-address
By default, no destination IP address is specified.
5. Specify the source IP address for SNMP packets.
source ip ip-address
By default, the source IP address of SNMP packets is the primary IP address of their output
interface.
The source IP address must be the IP address of a local interface, and the interface must be up.
Otherwise, no SNMP packets can be sent out.
6. Specify the source port number for SNMP packets.
source port port-number
By default, the NQA client randomly picks an unused port as the source port when the operation
starts.
7. Specify the community name carried in the SNMPv1 and SNMPv2c packets.
community read { cipher | simple } community-name
By default, the SNMPv1 and SNMPv2c packets carry community name public.
Make sure the specified community name is the same as the community name configured on
the SNMP agent.
nqa entry admin-name operation-tag
3. Specify the TCP type and enter its view.
type tcp
4. Specify the destination address for TCP packets.
destination ip ip-address
By default, no destination IP address is specified.
The destination address must be the same as the IP address of the TCP listening service
configured on the NQA server. To configure a TCP listening service on the server, use the nqa
server tcp-connect command.
5. Specify the destination port for TCP packets.
destination port port-number
By default, no destination port number is configured.
The destination port number must be the same as the port number of the TCP listening service
configured on the NQA server. To configure a TCP listening service on the server, use the nqa
server tcp-connect command.
6. Specify the source IP address for TCP packets.
source ip ip-address
By default, the source IP address of TCP packets is the primary IP address of their output
interface.
The source IP address must be the IP address of a local interface, and the interface must be up.
Otherwise, no TCP packets can be sent out.
By default, no destination port number is specified.
The destination port number must be the same as the port number of the UDP listening service
configured on the NQA server. To configure a UDP listening service on the server, use the nqa
server udp-echo command.
6. Specify the source IP address for UDP packets.
source ip ip-address
By default, the source IP address of UDP packets is the primary IP address of their output
interface.
The source IP address must be the IP address of a local interface, and the interface must be up.
Otherwise, no UDP packets can be sent out.
7. Specify the source port number for UDP packets.
source port port-number
By default, the NQA client randomly picks an unused port as the source port when the operation
starts.
8. (Optional.) Set the payload size for each UDP packet.
data-size size
The default setting is 100 bytes.
9. (Optional.) Specify the payload fill string for UDP packets.
data-fill string
The default payload fill string is the hexadecimal string 00010203040506070809.
nqa entry admin-name operation-tag
3. Specify the UDP tracert operation type and enter its view.
type udp-tracert
4. Specify the destination device for the operation. Choose one option as needed:
Specify the destination device by its host name.
destination host host-name
By default, no destination host name is specified.
Specify the destination device by its IP address.
destination ip ip-address
By default, no destination IP address is specified.
5. Specify the destination port number for UDP packets.
destination port port-number
By default, the destination port number is 33434.
This port number must be an unused number on the destination device, so that the destination
device can reply with ICMP port unreachable messages.
6. Specify an output interface for UDP packets.
out interface interface-type interface-number
By default, the NQA client determines the output interface based on the routing table lookup.
7. Specify the source IP address for UDP packets.
Specify the IP address of the specified interface as the source IP address.
source interface interface-type interface-number
By default, the source IP address of UDP packets is the primary IP address of their output
interface.
Specify the source IP address.
source ip ip-address
The specified source interface must be up. The source IP address must be the IP address of
a local interface, and the local interface must be up. Otherwise, no probe packets can be
sent out.
8. Specify the source port number for UDP packets.
source port port-number
By default, the NQA client randomly picks an unused port as the source port when the operation
starts.
9. Set the maximum number of consecutive probe failures.
max-failure times
The default setting is 5.
10. Set the initial TTL value for UDP packets.
init-ttl value
The default setting is 1.
11. (Optional.) Set the payload size for each UDP packet.
data-size size
The default setting is 100 bytes.
12. (Optional.) Enable the no-fragmentation feature.
no-fragment enable
By default, the no-fragmentation feature is disabled.
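The TTL walk behind the UDP tracert operation can be sketched as pure logic, assuming each hop either answers or stays silent (no real sockets; the function and its inputs are illustrative):

```python
def udp_tracert(path_hops, init_ttl=1, max_failure=5):
    """Simulate the TTL walk: path_hops maps a hop number to the
    address that responds at that TTL, or None for a silent hop.
    The walk ends at the last hop (the destination, which replies
    with ICMP port unreachable) or after max_failure consecutive
    unanswered probes."""
    trace, failures = [], 0
    for ttl in range(init_ttl, len(path_hops) + 1):
        addr = path_hops.get(ttl)
        if addr is None:
            failures += 1
            trace.append((ttl, "*"))
            if failures >= max_failure:
                break
        else:
            failures = 0
            trace.append((ttl, addr))
    return trace
```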
Configuring the voice operation
About this task
The voice operation measures VoIP network performance.
The voice operation works as follows:
1. The NQA client sends voice packets at sending intervals to the destination device (NQA
server).
The voice packets are of one of the following codec types:
G.711 A-law.
G.711 µ-law.
G.729 A-law.
2. The destination device time stamps each voice packet it receives and sends it back to the
source.
3. Upon receiving the packet, the source device calculates the jitter and one-way delay based on
the timestamp.
The voice operation sends a number of voice packets to the destination device per probe. The
number of packets to send per probe is determined by using the probe packet-number
command.
The following parameters that reflect VoIP network performance can be calculated by using the
metrics gathered by the voice operation:
• Calculated Planning Impairment Factor (ICPIF)—Measures impairment to voice quality on a
VoIP network. It is determined by packet loss and delay. A higher value represents a lower
service quality.
• Mean Opinion Scores (MOS)—A MOS value can be evaluated by using the ICPIF value, in the
range of 1 to 5. A higher value represents a higher service quality.
The evaluation of voice quality depends on users' tolerance for voice quality degradation. For
users with a higher tolerance, use the advantage-factor command to set an advantage factor.
When the system calculates the ICPIF value, it subtracts the advantage factor, which modifies the
ICPIF and MOS values used for voice quality evaluation.
The voice operation requires both the NQA server and the NQA client. Before you perform a voice
operation, configure a UDP listening service on the NQA server. For more information about UDP
listening service configuration, see "Configuring the NQA server."
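The interplay of impairment, advantage factor, ICPIF, and MOS can be illustrated with a toy model. The formulas below are simplified stand-ins, not the device's actual algorithm:

```python
def icpif(delay_impairment, loss_impairment, advantage_factor=0):
    """Total impairment minus the advantage factor: a higher result
    means worse voice quality. The two impairment inputs stand in for
    the delay- and loss-based components of the underlying model."""
    return max(delay_impairment + loss_impairment - advantage_factor, 0)

def mos_from_icpif(value):
    """Map an ICPIF value onto the 1-5 MOS scale: quality falls as
    impairment rises. The band boundaries here are illustrative."""
    bands = [(5, 5.0), (10, 4.0), (20, 3.0), (30, 2.0)]
    for limit, score in bands:
        if value <= limit:
            return score
    return 1.0
```

A nonzero advantage factor lowers the ICPIF value, which in turn raises the reported MOS, matching the guide's description of tolerant users.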
Restrictions and guidelines
To ensure successful voice operations and avoid affecting existing services, do not perform the
operations on well-known ports from 1 to 1023.
The display nqa history command does not display the results or statistics of the voice
operation. To view the results or statistics of the voice operation, use the display nqa result or
display nqa statistics command.
Before starting the operation, make sure the network devices are time synchronized by using NTP.
For more information about NTP, see "Configuring NTP."
Procedure
1. Enter system view.
system-view
2. Create an NQA operation and enter NQA operation view.
nqa entry admin-name operation-tag
3. Specify the voice type and enter its view.
type voice
4. Specify the destination IP address for voice packets.
destination ip ip-address
By default, no destination IP address is configured.
The destination IP address must be the same as the IP address of the UDP listening service
configured on the NQA server. To configure a UDP listening service on the server, use the nqa
server udp-echo command.
5. Specify the destination port number for voice packets.
destination port port-number
By default, no destination port number is configured.
The destination port number must be the same as the port number of the UDP listening service
configured on the NQA server. To configure a UDP listening service on the server, use the nqa
server udp-echo command.
6. Specify the source IP address for voice packets.
source ip ip-address
By default, the source IP address of voice packets is the primary IP address of their output
interface.
The source IP address must be the IP address of a local interface, and the interface must be up.
Otherwise, no voice packets can be sent out.
7. Specify the source port number for voice packets.
source port port-number
By default, the NQA client randomly picks an unused port as the source port when the operation
starts.
8. Configure the basic voice operation parameters.
Specify the codec type.
codec-type { g711a | g711u | g729a }
By default, the codec type is G.711 A-law.
Set the advantage factor for calculating MOS and ICPIF values.
advantage-factor factor
By default, the advantage factor is 0.
9. Configure the probe parameters for the voice operation.
Set the number of voice packets to be sent per probe.
probe packet-number number
The default setting is 1000.
Set the interval for sending voice packets.
probe packet-interval interval
The default setting is 20 milliseconds.
Specify how long the NQA client waits for a response from the server before it regards the
response as timed out.
probe packet-timeout timeout
The default setting is 5000 milliseconds.
10. Configure the payload parameters.
a. Set the payload size for voice packets.
data-size size
By default, the voice packet size varies by codec type. The default packet size is 172 bytes
for the G.711 A-law and G.711 µ-law codec types, and 32 bytes for the G.729 A-law codec type.
b. (Optional.) Specify the payload fill string for voice packets.
data-fill string
The default payload fill string is the hexadecimal string 00010203040506070809.
Procedure
1. Enter system view.
system-view
2. Create an NQA operation and enter NQA operation view.
nqa entry admin-name operation-tag
3. Specify the path jitter type and enter its view.
type path-jitter
4. Specify the destination IP address for ICMP echo requests.
destination ip ip-address
By default, no destination IP address is specified.
5. Specify the source IP address for ICMP echo requests.
source ip ip-address
By default, the source IP address of ICMP echo requests is the primary IP address of their
output interface.
The source IP address must be the IP address of a local interface, and the interface must be up.
Otherwise, no ICMP echo requests can be sent out.
6. Configure the probe parameters for the path jitter operation.
a. Set the number of ICMP echo requests to be sent per probe.
probe packet-number number
The default setting is 10.
b. Set the interval for sending ICMP echo requests.
probe packet-interval interval
The default setting is 20 milliseconds.
c. Specify how long the NQA client waits for a response from the server before it regards the
response as timed out.
probe packet-timeout timeout
The default setting is 3000 milliseconds.
7. (Optional.) Specify an LSR path.
lsr-path ip-address&<1-8>
By default, no LSR path is specified.
The path jitter operation uses tracert to detect the LSR path to the destination, and sends ICMP
echo requests to each hop on the LSR path.
8. Perform the path jitter operation only on the destination address.
target-only
By default, the path jitter operation is performed on each hop on the path to the destination.
9. (Optional.) Set the payload size for each ICMP echo request.
data-size size
The default setting is 100 bytes.
10. (Optional.) Specify the payload fill string for ICMP echo requests.
data-fill string
The default payload fill string is the hexadecimal string 00010203040506070809.
Configuring optional parameters for the NQA operation
Restrictions and guidelines
Unless otherwise specified, the following optional parameters apply to all types of NQA operations.
The parameter settings take effect only on the current operation.
Procedure
1. Enter system view.
system-view
2. Enter the view of an existing NQA operation.
nqa entry admin-name operation-tag
3. Configure a description for the operation.
description text
By default, no description is configured.
4. Set the interval at which the NQA operation repeats.
frequency interval
For a voice or path jitter operation, the default setting is 60000 milliseconds.
For other types of operations, the default setting is 0 milliseconds, and only one operation is
performed.
If the current operation has not completed and has not timed out when the interval expires, the
next operation does not start.
5. Specify the probe times.
probe count times
In a UDP tracert operation, the NQA client performs three probes to each hop on the path to the
destination by default.
In other types of operations, the NQA client performs one probe to the destination per operation
by default.
This command is not available for the voice and path jitter operations. Each of these operations
performs only one probe.
6. Set the probe timeout time.
probe timeout timeout
The default setting is 3000 milliseconds.
This command is not available for the ICMP jitter, UDP jitter, voice, or path jitter operations.
7. Set the maximum number of hops that the probe packets can traverse.
ttl value
The default setting is 30 for probe packets of the UDP tracert operation, and is 20 for probe
packets of other types of operations.
This command is not available for the DHCP or path jitter operations.
8. Set the ToS value in the IP header of the probe packets.
tos value
The default setting is 0.
9. Enable the routing table bypass feature.
route-option bypass-route
By default, the routing table bypass feature is disabled.
This command is not available for the DHCP or path jitter operations.
This command does not take effect if the destination address of the NQA operation is an IPv6
address.
10. Specify the VPN instance where the operation is performed.
vpn-instance vpn-instance-name
By default, the operation is performed on the public network.
• consecutive—If the number of consecutive times that the monitored performance metric is out
of the specified value range reaches or exceeds the specified threshold, a threshold violation
occurs.
Threshold violations for the average or accumulate threshold type are determined on a per NQA
operation basis. The threshold violations for the consecutive type are determined from the time the
NQA operation starts.
The following actions might be triggered:
• none—NQA displays results only on the terminal screen. It does not send traps to the NMS.
• trap-only—NQA displays results on the terminal screen, and meanwhile it sends traps to the
NMS.
To send traps to the NMS, the NMS address must be specified by using the snmp-agent
target-host command. For more information about the command, see Network
Management and Monitoring Command Reference.
• trigger-only—NQA displays results on the terminal screen, and meanwhile triggers other
modules for collaboration.
In a reaction entry, configure a monitored element, a threshold type, and an action to be triggered to
implement threshold monitoring.
The state of a reaction entry can be invalid, over-threshold, or below-threshold.
• Before an NQA operation starts, the reaction entry is in invalid state.
• If the threshold is violated, the state of the entry is set to over-threshold. Otherwise, the state of
the entry is set to below-threshold.
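The three threshold types can be contrasted in a short sketch. This is illustrative logic only; in particular, it evaluates one batch of samples, whereas the device determines consecutive violations from the time the operation starts:

```python
def violation(samples, low, high, threshold_type, occurrences=0):
    """Return True when a threshold violation occurs for a batch of
    probe results. accumulate: the total number of out-of-range samples
    reaches occurrences. consecutive: a run of that many out-of-range
    samples appears. average: the mean itself falls outside [low, high]
    (occurrences is unused)."""
    out_of_range = [not (low <= sample <= high) for sample in samples]
    if threshold_type == "average":
        mean = sum(samples) / len(samples)
        return not (low <= mean <= high)
    if threshold_type == "accumulate":
        return sum(out_of_range) >= occurrences
    if threshold_type == "consecutive":
        run = longest = 0
        for flag in out_of_range:
            run = run + 1 if flag else 0
            longest = max(longest, run)
        return longest >= occurrences
    raise ValueError(threshold_type)
```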
Restrictions and guidelines
The threshold monitoring feature is not available for the path jitter operations.
Procedure
1. Enter system view.
system-view
2. Enter the view of an existing NQA operation.
nqa entry admin-name operation-tag
3. Enable sending traps to the NMS when specific conditions are met.
reaction trap { path-change | probe-failure
consecutive-probe-failures | test-complete | test-failure
[ accumulate-probe-failures ] }
By default, no traps are sent to the NMS.
The ICMP jitter, UDP jitter, and voice operations support only the test-complete keyword.
The following parameters are not available for the UDP tracert operation:
The probe-failure consecutive-probe-failures option.
The accumulate-probe-failures argument.
4. Configure threshold monitoring. Choose the options to configure as needed:
Monitor the operation duration.
reaction item-number checked-element probe-duration
threshold-type { accumulate accumulate-occurrences | average |
consecutive consecutive-occurrences } threshold-value
upper-threshold lower-threshold [ action-type { none | trap-only } ]
This reaction entry is not supported in the ICMP jitter, UDP jitter, UDP tracert, or voice
operations.
Monitor failure times.
reaction item-number checked-element probe-fail threshold-type
{ accumulate accumulate-occurrences | consecutive
consecutive-occurrences } [ action-type { none | trap-only } ]
This reaction entry is not supported in the ICMP jitter, UDP jitter, UDP tracert, or voice
operations.
Monitor the round-trip time.
reaction item-number checked-element rtt threshold-type
{ accumulate accumulate-occurrences | average } threshold-value
upper-threshold lower-threshold [ action-type { none | trap-only } ]
Only the ICMP jitter, UDP jitter, and voice operations support this reaction entry.
Monitor packet loss.
reaction item-number checked-element packet-loss threshold-type
accumulate accumulate-occurrences [ action-type { none |
trap-only } ]
Only the ICMP jitter, UDP jitter, and voice operations support this reaction entry.
Monitor the one-way jitter.
reaction item-number checked-element { jitter-ds | jitter-sd }
threshold-type { accumulate accumulate-occurrences | average }
threshold-value upper-threshold lower-threshold [ action-type
{ none | trap-only } ]
Only the ICMP jitter, UDP jitter, and voice operations support this reaction entry.
Monitor the one-way delay.
reaction item-number checked-element { owd-ds | owd-sd }
threshold-value upper-threshold lower-threshold
Only the ICMP jitter, UDP jitter, and voice operations support this reaction entry.
Monitor the ICPIF value.
reaction item-number checked-element icpif threshold-value
upper-threshold lower-threshold [ action-type { none | trap-only } ]
Only the voice operation supports this reaction entry.
Monitor the MOS value.
reaction item-number checked-element mos threshold-value
upper-threshold lower-threshold [ action-type { none | trap-only } ]
Only the voice operation supports this reaction entry.
The DNS operation does not support sending trap messages. For the DNS operation, the action type can only be none.
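The accumulate, consecutive, and average threshold types in step 4 evaluate probe results differently. A minimal Python sketch of the three checks, simplified to a single upper threshold (the helper names are hypothetical, not NQA commands):

```python
def accumulate_exceeds(values, threshold, occurrences):
    """Accumulate: the threshold is violated `occurrences` times in
    total within one operation, in any order."""
    return sum(1 for v in values if v >= threshold) >= occurrences

def consecutive_exceeds(values, threshold, occurrences):
    """Consecutive: the threshold is violated `occurrences` times in a row."""
    run = 0
    for v in values:
        run = run + 1 if v >= threshold else 0
        if run >= occurrences:
            return True
    return False

def average_exceeds(values, threshold):
    """Average: the mean of the monitored values violates the threshold."""
    return bool(values) and sum(values) / len(values) >= threshold
```

For example, with a threshold of 50 and an occurrence count of 2, the sequence 10, 60, 10, 70 satisfies the accumulate check (two violations in total) but not the consecutive check (the violations are not adjacent).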
If you use the frequency command to set the interval to 0 milliseconds for an NQA operation, NQA
does not generate any statistics group for the operation.
Procedure
1. Enter system view.
system-view
2. Enter the view of an existing NQA operation.
nqa entry admin-name operation-tag
3. Set the statistics collection interval.
statistics interval interval
The default setting is 60 minutes.
4. Set the maximum number of statistics groups that can be saved.
statistics max-group number
By default, the NQA client can save a maximum of two statistics groups for an operation.
To disable the NQA statistics collection feature, set the number argument to 0.
5. Set the hold time of statistics groups.
statistics hold-time hold-time
The default setting is 120 minutes.
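Steps 3 through 5 bound the statistics groups in number and age. A Python sketch of that lifecycle, under the assumption that the oldest group is evicted first when the maximum is exceeded (StatsGroupStore is a hypothetical helper, not device code):

```python
from collections import deque

class StatsGroupStore:
    """Keep at most `max_group` statistics groups, each dropped after
    `hold_minutes`. A max_group of 0 disables statistics collection."""

    def __init__(self, max_group=2, hold_minutes=120):
        self.max_group = max_group
        self.hold_minutes = hold_minutes
        self.groups = deque()            # (creation time in minutes, stats)

    def add(self, now_min, stats):
        if self.max_group == 0:          # collection disabled
            return
        self.expire(now_min)
        if len(self.groups) >= self.max_group:
            self.groups.popleft()        # delete the oldest group
        self.groups.append((now_min, stats))

    def expire(self, now_min):
        """Drop groups whose hold time has elapsed."""
        while self.groups and now_min - self.groups[0][0] >= self.hold_minutes:
            self.groups.popleft()
```

With the defaults, a third group evicts the oldest of the two saved groups, and any group older than 120 minutes is dropped.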
The default setting is 50.
When the maximum number of history records is reached, the system will delete the oldest
record to save a new one.
Configuring the ICMP template
About this task
A feature that uses the ICMP template performs the ICMP operation to measure the reachability of a
destination device. The ICMP template is supported on both IPv4 and IPv6 networks.
Procedure
1. Enter system view.
system-view
2. Create an ICMP template and enter its view.
nqa template icmp name
3. Specify the destination IP address for the operation.
IPv4:
destination ip ip-address
IPv6:
destination ipv6 ipv6-address
By default, no destination IP address is configured.
4. Specify the source IP address for ICMP echo requests. Choose one option as needed:
Use the IP address of the specified interface as the source IP address.
source interface interface-type interface-number
By default, the primary IP address of the output interface is used as the source IP address of
ICMP echo requests.
The specified source interface must be up.
Specify the source IPv4 address.
source ip ip-address
By default, the primary IPv4 address of the output interface is used as the source IPv4
address of ICMP echo requests.
The specified source IPv4 address must be the IPv4 address of a local interface, and the
interface must be up. Otherwise, no probe packets can be sent out.
Specify the source IPv6 address.
source ipv6 ipv6-address
By default, the primary IPv6 address of the output interface is used as the source IPv6
address of ICMP echo requests.
The specified source IPv6 address must be the IPv6 address of a local interface, and the
interface must be up. Otherwise, no probe packets can be sent out.
5. Specify the next hop IP address for ICMP echo requests.
IPv4:
next-hop ip ip-address
IPv6:
next-hop ipv6 ipv6-address
By default, no next hop IP address is configured.
6. Configure the probe result sending on a per-probe basis.
reaction trigger per-probe
By default, the probe result is sent to the feature that uses the template after three consecutive
failed or successful probes.
If you execute the reaction trigger per-probe and reaction trigger
probe-pass commands multiple times, the most recent configuration takes effect.
If you execute the reaction trigger per-probe and reaction trigger
probe-fail commands multiple times, the most recent configuration takes effect.
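The note above contrasts per-probe reporting with the default of three consecutive identical results. A Python sketch of the two reporting modes (reported_events is a hypothetical helper; the exact reset behavior after a notification is an assumption):

```python
def reported_events(probe_results, per_probe=False, count=3):
    """Return the pass/fail events the template reports to the feature
    that uses it, given probe results (True = successful probe)."""
    if per_probe:                        # reaction trigger per-probe
        return list(probe_results)
    events = []
    run_val, run_len = None, 0
    for ok in probe_results:
        run_len = run_len + 1 if ok == run_val else 1
        run_val = ok
        if run_len == count:             # three identical results in a row
            events.append(ok)
            run_val, run_len = None, 0   # assumed: counting restarts
    return events
```

Six consecutive successful probes thus yield two success events in the default mode, but six events in per-probe mode.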
7. (Optional.) Set the payload size for each ICMP request.
data-size size
The default setting is 100 bytes.
8. (Optional.) Specify the payload fill string for ICMP echo requests.
data-fill string
The default payload fill string is the hexadecimal string 00010203040506070809.
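Steps 7 and 8 together build the probe payload. A Python sketch, assuming the fill string is repeated cyclically until the payload reaches the configured size (the padding behavior is an assumption, not stated by the guide):

```python
def build_payload(size=100, fill=bytes(range(10))):
    """Build a probe payload of `size` bytes by cyclically repeating the
    fill pattern (default: the hex string 00 01 02 ... 09)."""
    reps = -(-size // len(fill))         # ceiling division
    return (fill * reps)[:size]
```

With the defaults, the 100-byte payload is the byte sequence 00 through 09 repeated ten times.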
source ipv6 ipv6-address
By default, the source IPv6 address of the probe packets is the primary IPv6 address of their
output interface.
The source IPv6 address must be the IPv6 address of a local interface, and the interface must
be up. Otherwise, no probe packets can be sent out.
6. Specify the source port number for the probe packets.
source port port-number
By default, no source port number is specified.
7. Specify the domain name to be translated.
resolve-target domain-name
By default, no domain name is specified.
8. Specify the domain name resolution type.
resolve-type { A | AAAA }
By default, the resolution type is type A.
A type A query resolves a domain name to a mapped IPv4 address, and a type AAAA query to
a mapped IPv6 address.
9. (Optional.) Specify the IP address that is expected to be returned.
IPv4:
expect ip ip-address
IPv6:
expect ipv6 ipv6-address
By default, no expected IP address is specified.
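Steps 8 and 9 together decide whether a DNS probe succeeds: the reply must match the query type and, when an expected address is configured, contain that address. An offline Python sketch of the check (dns_probe_ok is a hypothetical helper that takes the resolved addresses instead of querying a server):

```python
import ipaddress

def dns_probe_ok(resolved, resolve_type="A", expect=None):
    """True if the resolved addresses match the query type (A = IPv4,
    AAAA = IPv6) and include the expected address, if one is set."""
    want_version = 4 if resolve_type == "A" else 6
    addrs = [ipaddress.ip_address(a) for a in resolved]
    if any(a.version != want_version for a in addrs):
        return False                     # wrong address family returned
    if expect is not None:
        return ipaddress.ip_address(expect) in addrs
    return bool(addrs)
```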
The destination address must be the same as the IP address of the TCP listening service
configured on the NQA server. To configure a TCP listening service on the server, use the nqa
server tcp-connect command.
4. Specify the destination port number for the operation.
destination port port-number
By default, no destination port number is specified.
The destination port number must be the same as the port number of the TCP listening service
configured on the NQA server. To configure a TCP listening service on the server, use the nqa
server tcp-connect command.
5. Specify the source IP address for the probe packets.
IPv4:
source ip ip-address
By default, the primary IPv4 address of the output interface is used as the source IPv4 address
of the probe packets.
The source IP address must be the IPv4 address of a local interface, and the interface must be
up. Otherwise, no probe packets can be sent out.
IPv6:
source ipv6 ipv6-address
By default, the primary IPv6 address of the output interface is used as the source IPv6 address
of the probe packets.
The source IPv6 address must be the IPv6 address of a local interface, and the interface must
be up. Otherwise, no probe packets can be sent out.
6. (Optional.) Specify the payload fill string for the probe packets.
data-fill string
The default payload fill string is the hexadecimal string 00010203040506070809.
7. (Optional.) Configure the expected data.
expect data expression [ offset number ]
By default, no expected data is configured.
The NQA client performs the expect data check only when you configure both the data-fill command and the expect data command.
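A Python sketch of the expect data check, assuming the offset keyword positions the comparison within the reply payload (expect_data_ok is a hypothetical helper):

```python
def expect_data_ok(payload, expression, offset=0):
    """True if `payload` carries `expression` starting at byte `offset`."""
    return payload[offset:offset + len(expression)] == expression
```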
destination ip ip-address
IPv6:
destination ipv6 ipv6-address
By default, no destination IP address is specified.
The destination address must be the same as the IP address of the TCP listening service
configured on the NQA server. To configure a TCP listening service on the server, use the nqa
server tcp-connect command.
4. Specify the source IP address for the probe packets.
IPv4:
source ip ip-address
By default, the primary IPv4 address of the output interface is used as the source IPv4 address
of the probe packets.
The source IPv4 address must be the IPv4 address of a local interface, and the interface must
be up. Otherwise, no probe packets can be sent out.
IPv6:
source ipv6 ipv6-address
By default, the primary IPv6 address of the output interface is used as the source IPv6 address
of the probe packets.
The source IPv6 address must be the IPv6 address of a local interface, and the interface must
be up. Otherwise, no probe packets can be sent out.
5. Specify the next hop IP address for the probe packets.
IPv4:
next-hop ip ip-address
IPv6:
next-hop ipv6 ipv6-address
By default, no next hop IP address is configured.
6. Configure the probe result sending on a per-probe basis.
reaction trigger per-probe
By default, the probe result is sent to the feature that uses the template after three consecutive
failed or successful probes.
If you execute the reaction trigger per-probe and reaction trigger
probe-pass commands multiple times, the most recent configuration takes effect.
If you execute the reaction trigger per-probe and reaction trigger
probe-fail commands multiple times, the most recent configuration takes effect.
The UDP operation requires both the NQA server and the NQA client. Before you perform a UDP
operation, configure a UDP listening service on the NQA server. For more information about the UDP
listening service configuration, see "Configuring the NQA server."
Procedure
1. Enter system view.
system-view
2. Create a UDP template and enter its view.
nqa template udp name
3. Specify the destination IP address of the operation.
IPv4:
destination ip ip-address
IPv6:
destination ipv6 ipv6-address
By default, no destination IP address is specified.
The destination address must be the same as the IP address of the UDP listening service
configured on the NQA server. To configure a UDP listening service on the server, use the nqa
server udp-echo command.
4. Specify the destination port number for the operation.
destination port port-number
By default, no destination port number is specified.
The destination port number must be the same as the port number of the UDP listening service
configured on the NQA server. To configure a UDP listening service on the server, use the nqa
server udp-echo command.
5. Specify the source IP address for the probe packets.
IPv4:
source ip ip-address
By default, the primary IPv4 address of the output interface is used as the source IPv4 address
of the probe packets.
The source IP address must be the IPv4 address of a local interface, and the interface must be
up. Otherwise, no probe packets can be sent out.
IPv6:
source ipv6 ipv6-address
By default, the primary IPv6 address of the output interface is used as the source IPv6 address
of the probe packets.
The source IPv6 address must be the IPv6 address of a local interface, and the interface must
be up. Otherwise, no probe packets can be sent out.
6. Specify the payload fill string for the probe packets.
data-fill string
The default payload fill string is the hexadecimal string 00010203040506070809.
7. (Optional.) Set the payload size for the probe packets.
data-size size
The default setting is 100 bytes.
8. (Optional.) Configure the expected data.
expect data expression [ offset number ]
By default, no expected data is configured.
Expected data check is performed only when both the data-fill command and the expect
data command are configured.
b. Enter or paste the request content.
By default, no request content is configured.
To ensure successful operations, make sure the request content does not contain
command aliases configured by using the alias command. For more information about
the alias command, see CLI commands in Fundamentals Command Reference.
c. Return to HTTP template view.
quit
The system automatically saves the configuration in raw request view before it returns to
HTTP template view.
This step is required only when the operation type is set to raw.
9. Specify the source IP address for the probe packets.
IPv4:
source ip ip-address
By default, the primary IPv4 address of the output interface is used as the source IPv4 address
of the probe packets.
The source IPv4 address must be the IPv4 address of a local interface, and the interface must
be up. Otherwise, no probe packets can be sent out.
IPv6:
source ipv6 ipv6-address
By default, the primary IPv6 address of the output interface is used as the source IPv6 address
of the probe packets.
The source IPv6 address must be the IPv6 address of a local interface, and the interface must
be up. Otherwise, no probe packets can be sent out.
10. (Optional.) Configure the expected status codes.
expect status status-list
By default, no expected status code is configured.
11. (Optional.) Configure the expected data.
expect data expression [ offset number ]
By default, no expected data is configured.
2. Create an HTTPS template and enter its view.
nqa template https name
3. Specify the destination URL for the HTTPS template.
url url
By default, no destination URL is specified for an HTTPS template.
Enter the URL in one of the following formats:
https://host/resource
https://host:port/resource
4. Specify an HTTPS login username.
username username
By default, no HTTPS login username is specified.
5. Specify an HTTPS login password.
password { cipher | simple } string
By default, no HTTPS login password is specified.
6. Specify an SSL client policy.
ssl-client-policy policy-name
By default, no SSL client policy is specified.
7. Specify the HTTPS version.
version { v1.0 | v1.1 }
By default, HTTPS 1.0 is used.
8. Specify the HTTPS operation type.
operation { get | post | raw }
By default, the HTTPS operation type is get.
If you set the operation type to raw, the client adds the content configured in raw request view to the HTTPS request sent to the HTTPS server.
9. Configure the content of the HTTPS raw request.
a. Enter raw request view.
raw-request
Every time you enter raw request view, the previously configured raw request content is
cleared.
b. Enter or paste the request content.
By default, no request content is configured.
To ensure successful operations, make sure the request content does not contain
command aliases configured by using the alias command. For more information about
the alias command, see CLI commands in Fundamentals Command Reference.
c. Return to HTTPS template view.
quit
The system automatically saves the configuration in raw request view before it returns to
HTTPS template view.
This step is required only when the operation type is set to raw.
10. Specify the source IP address for the probe packets.
IPv4:
source ip ip-address
By default, the primary IPv4 address of the output interface is used as the source IPv4 address
of the probe packets.
The source IP address must be the IPv4 address of a local interface, and the interface must be
up. Otherwise, no probe packets can be sent out.
IPv6:
source ipv6 ipv6-address
By default, the primary IPv6 address of the output interface is used as the source IPv6 address
of the probe packets.
The source IPv6 address must be the IPv6 address of a local interface, and the interface must
be up. Otherwise, no probe packets can be sent out.
11. (Optional.) Configure the expected data.
expect data expression [ offset number ]
By default, no expected data is configured.
12. (Optional.) Configure the expected status codes.
expect status status-list
By default, no expected status code is configured.
The source IPv6 address must be the IPv6 address of a local interface, and the interface must
be up. Otherwise, no probe packets can be sent out.
6. Set the data transmission mode.
mode { active | passive }
The default mode is active.
7. Specify the FTP operation type.
operation { get | put }
By default, the FTP operation type is get, which means obtaining files from the FTP server.
8. Specify the destination URL for the FTP template.
url url
By default, no destination URL is specified for an FTP template.
Enter the URL in one of the following formats:
ftp://host/filename
ftp://host:port/filename
When you perform the get operation, the file name is required.
When you perform the put operation, the filename argument does not take effect, even if it is
specified. The file name for the put operation is determined by using the filename command.
9. Specify the name of a file to be transferred.
filename filename
By default, no file is specified.
This task is required only for the put operation.
The configuration does not take effect for the get operation.
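Steps 7 through 9 interact: for the get operation the file name comes from the URL, while for the put operation it comes from the filename command. A Python sketch of that selection (transfer_filename is a hypothetical helper):

```python
from urllib.parse import urlparse

def transfer_filename(operation, url, configured_name=None):
    """Pick the file to transfer: get uses the name in the URL;
    put ignores the URL name and uses the filename command."""
    if operation == "get":
        name = urlparse(url).path.lstrip("/")
        if not name:
            raise ValueError("get requires a file name in the URL")
        return name
    return configured_name               # put operation
```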
Procedure
1. Enter system view.
system-view
2. Create a RADIUS template and enter its view.
nqa template radius name
3. Specify the destination IP address of the operation.
IPv4:
destination ip ip-address
IPv6:
destination ipv6 ipv6-address
By default, no destination IP address is specified.
4. Specify the destination port number for the operation.
destination port port-number
By default, the destination port number is 1812.
5. Specify a username.
username username
By default, no username is specified.
6. Specify a password.
password { cipher | simple } string
By default, no password is specified.
7. Specify a shared key for secure RADIUS authentication.
key { cipher | simple } string
By default, no shared key is specified for RADIUS authentication.
8. Specify the source IP address for the probe packets.
IPv4:
source ip ip-address
By default, the primary IPv4 address of the output interface is used as the source IPv4 address
of the probe packets.
The source IP address must be the IPv4 address of a local interface, and the interface must be
up. Otherwise, no probe packets can be sent out.
IPv6:
source ipv6 ipv6-address
By default, the primary IPv6 address of the output interface is used as the source IPv6 address
of the probe packets.
The source IPv6 address must be the IPv6 address of a local interface, and the interface must
be up. Otherwise, no probe packets can be sent out.
Procedure
1. Enter system view.
system-view
2. Create an SSL template and enter its view.
nqa template ssl name
3. Specify the destination IP address of the operation.
IPv4:
destination ip ip-address
IPv6:
destination ipv6 ipv6-address
By default, no destination IP address is specified.
4. Specify the destination port number for the operation.
destination port port-number
By default, no destination port number is specified.
5. Specify an SSL client policy.
ssl-client-policy policy-name
By default, no SSL client policy is specified.
6. Specify the source IP address for the probe packets.
IPv4:
source ip ip-address
By default, the primary IPv4 address of the output interface is used as the source IPv4 address
of the probe packets.
The source IP address must be the IPv4 address of a local interface, and the interface must be
up. Otherwise, no probe packets can be sent out.
IPv6:
source ipv6 ipv6-address
By default, the primary IPv6 address of the output interface is used as the source IPv6 address
of the probe packets.
The source IPv6 address must be the IPv6 address of a local interface, and the interface must
be up. Otherwise, no probe packets can be sent out.
By default, no description is configured.
4. Set the interval at which the NQA operation repeats.
frequency interval
The default setting is 5000 milliseconds.
If the operation is not completed when the interval expires, the next operation does not start.
5. Set the probe timeout time.
probe timeout timeout
The default setting is 3000 milliseconds.
6. Set the TTL for the probe packets.
ttl value
The default setting is 20.
This command is not available for the ARP template.
7. Set the ToS value in the IP header of the probe packets.
tos value
The default setting is 0.
This command is not available for the ARP template.
8. Specify the VPN instance where the operation is performed.
vpn-instance vpn-instance-name
By default, the operation is performed on the public network.
9. Set the number of consecutive successful probes to determine a successful operation event.
reaction trigger probe-pass count
The default setting is 3.
If the number of consecutive successful probes for an NQA operation is reached, the NQA
client notifies the feature that uses the template of the successful operation event.
10. Set the number of consecutive probe failures to determine an operation failure.
reaction trigger probe-fail count
The default setting is 3.
If the number of consecutive probe failures for an NQA operation is reached, the NQA client
notifies the feature that uses the NQA template of the operation failure.
Task: Display history records of NQA operations.
Command: display nqa history [ admin-name operation-tag ]

Task: Display the current monitoring results of reaction entries.
Command: display nqa reaction counters [ admin-name operation-tag [ item-number ] ]

Task: Display the most recent result of the NQA operation.
Command: display nqa result [ admin-name operation-tag ]

Task: Display NQA server status.
Command: display nqa server status

Task: Display NQA statistics.
Command: display nqa statistics [ admin-name operation-tag ]
NQA configuration examples
Example: Configuring the ICMP echo operation
Network configuration
As shown in Figure 7, configure an ICMP echo operation on the NQA client (Device A) to test the
round-trip time to Device B. The next hop of Device A is Device C.
Figure 7 Network diagram
(Device A, the NQA client, connects to Device B over two paths: through Device C, with Device A at 10.1.1.1/24, Device C at 10.1.1.2/24 and 10.2.2.1/24, and Device B at 10.2.2.2/24; and through Device D, with Device A at 10.3.1.1/24, Device D at 10.3.1.2/24 and 10.4.1.1/24, and Device B at 10.4.1.2/24.)
Procedure
# Assign IP addresses to interfaces, as shown in Figure 7. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
# Create an ICMP echo operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type icmp-echo
# Specify 10.1.1.2 as the next hop. The ICMP echo requests are sent through Device C to Device B.
[DeviceA-nqa-admin-test1-icmp-echo] next-hop ip 10.1.1.2
# Set the probe timeout time to 500 milliseconds for the ICMP echo operation.
[DeviceA-nqa-admin-test1-icmp-echo] probe timeout 500
# Configure the operation to repeat every 5000 milliseconds.
[DeviceA-nqa-admin-test1-icmp-echo] frequency 5000
# After the ICMP echo operation runs for a period of time, stop the operation.
[DeviceA] undo nqa schedule admin test1
The output shows that the packets sent by Device A can reach Device B through Device C. No
packet loss occurs during the operation. The minimum, maximum, and average round-trip times are
2, 5, and 3 milliseconds, respectively.
Figure 8 Network diagram
(Device A, the NQA client at 10.1.1.1/16, and Device B, the NQA server at 10.2.2.2/16, connected over an IP network.)
Procedure
1. Assign IP addresses to interfaces, as shown in Figure 8. (Details not shown.)
2. Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
3. Configure Device A:
# Create an ICMP jitter operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type icmp-jitter
# Specify 10.2.2.2 as the destination address for the operation.
[DeviceA-nqa-admin-test1-icmp-jitter] destination ip 10.2.2.2
# Configure the operation to repeat every 1000 milliseconds.
[DeviceA-nqa-admin-test1-icmp-jitter] frequency 1000
[DeviceA-nqa-admin-test1-icmp-jitter] quit
# Start the ICMP jitter operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# After the ICMP jitter operation runs for a period of time, stop the operation.
[DeviceA] undo nqa schedule admin test1
Min negative SD: 1 Min negative DS: 2
Max negative SD: 1 Max negative DS: 2
Negative SD number: 1 Negative DS number: 1
Negative SD sum: 1 Negative DS sum: 2
Negative SD average: 1 Negative DS average: 2
Negative SD square-sum: 1 Negative DS square-sum: 4
SD average: 1 DS average: 2
One way results:
Max SD delay: 1 Max DS delay: 2
Min SD delay: 1 Min DS delay: 2
Number of SD delay: 1 Number of DS delay: 1
Sum of SD delay: 1 Sum of DS delay: 2
Square-Sum of SD delay: 1 Square-Sum of DS delay: 4
Lost packets for unknown reason: 0
# Display the statistics of the ICMP jitter operation.
[DeviceA] display nqa statistics admin test1
NQA entry (admin admin, tag test1) test statistics:
NO. : 1
Start time: 2019-03-09 17:42:10.7
Life time: 156 seconds
Send operation times: 1560 Receive response times: 1560
Min/Max/Average round trip time: 1/2/1
Square-Sum of round trip time: 1563
Extended results:
Packet loss ratio: 0%
Failures due to timeout: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packets out of sequence: 0
Packets arrived late: 0
ICMP-jitter results:
RTT number: 1560
Min positive SD: 1 Min positive DS: 1
Max positive SD: 1 Max positive DS: 2
Positive SD number: 18 Positive DS number: 46
Positive SD sum: 18 Positive DS sum: 49
Positive SD average: 1 Positive DS average: 1
Positive SD square-sum: 18 Positive DS square-sum: 55
Min negative SD: 1 Min negative DS: 1
Max negative SD: 1 Max negative DS: 2
Negative SD number: 24 Negative DS number: 57
Negative SD sum: 24 Negative DS sum: 58
Negative SD average: 1 Negative DS average: 1
Negative SD square-sum: 24 Negative DS square-sum: 60
SD average: 16 DS average: 2
One way results:
Max SD delay: 1 Max DS delay: 2
Min SD delay: 1 Min DS delay: 1
Number of SD delay: 4 Number of DS delay: 4
Sum of SD delay: 4 Sum of DS delay: 5
Square-Sum of SD delay: 4 Square-Sum of DS delay: 7
Lost packets for unknown reason: 0
Figure 9 Network diagram (Switch A, the NQA client, and Switch B, the DHCP server)
Procedure
# Create a DHCP operation.
<SwitchA> system-view
[SwitchA] nqa entry admin test1
[SwitchA-nqa-admin-test1] type dhcp
# After the DHCP operation runs for a period of time, stop the operation.
[SwitchA] undo nqa schedule admin test1
NQA entry (admin admin, tag test1) history records:
Index Response Status Time
1 512 Succeeded 2019-11-22 09:56:03.2
The output shows that it took Switch A 512 milliseconds to obtain an IP address from the DHCP
server.
Figure 10 Network diagram (Device A, the NQA client, and the DNS server at 10.2.2.2)
Procedure
# Assign IP addresses to interfaces, as shown in Figure 10. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
# Create a DNS operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type dns
# Specify the IP address of the DNS server (10.2.2.2) as the destination address.
[DeviceA-nqa-admin-test1-dns] destination ip 10.2.2.2
# After the DNS operation runs for a period of time, stop the operation.
[DeviceA] undo nqa schedule admin test1
Packet loss ratio: 0%
Failures due to timeout: 0
Failures due to internal error: 0
Failures due to other errors: 0
The output shows that it took Device A 62 milliseconds to translate domain name host.com into an
IP address.
Figure 11 Network diagram (Device A, the NQA client, and Device B, the FTP server)
Procedure
# Assign IP addresses to interfaces, as shown in Figure 11. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
# Create an FTP operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type ftp
[DeviceA-nqa-admin-test1-ftp] quit
# After the FTP operation runs for a period of time, stop the operation.
[DeviceA] undo nqa schedule admin test1
The output shows that it took Device A 173 milliseconds to upload a file to the FTP server.
Figure 12 Network diagram (Device A, the NQA client, and Device B, the HTTP server)
Procedure
# Assign IP addresses to interfaces, as shown in Figure 12. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
# Create an HTTP operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type http
# Configure the HTTP operation to get data from the HTTP server.
[DeviceA-nqa-admin-test1-http] operation get
# After the HTTP operation runs for a period of time, stop the operation.
[DeviceA] undo nqa schedule admin test1
The output shows that it took Device A 64 milliseconds to obtain data from the HTTP server.
Figure 13 Network diagram
(Device A, the NQA client at 10.1.1.1/16, and Device B, the NQA server at 10.2.2.2/16, connected over an IP network.)
Procedure
1. Assign IP addresses to interfaces, as shown in Figure 13. (Details not shown.)
2. Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
3. Configure Device B:
# Enable the NQA server.
<DeviceB> system-view
[DeviceB] nqa server enable
# Configure a listening service to listen to UDP port 9000 on IP address 10.2.2.2.
[DeviceB] nqa server udp-echo 10.2.2.2 9000
4. Configure Device A:
# Create a UDP jitter operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type udp-jitter
# Specify 10.2.2.2 as the destination address of the operation.
[DeviceA-nqa-admin-test1-udp-jitter] destination ip 10.2.2.2
# Set the destination port number to 9000.
[DeviceA-nqa-admin-test1-udp-jitter] destination port 9000
# Configure the operation to repeat every 1000 milliseconds.
[DeviceA-nqa-admin-test1-udp-jitter] frequency 1000
[DeviceA-nqa-admin-test1-udp-jitter] quit
# Start the UDP jitter operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# After the UDP jitter operation runs for a period of time, stop the operation.
[DeviceA] undo nqa schedule admin test1
Packets arrived late: 0
UDP-jitter results:
RTT number: 10
Min positive SD: 4 Min positive DS: 1
Max positive SD: 21 Max positive DS: 28
Positive SD number: 5 Positive DS number: 4
Positive SD sum: 52 Positive DS sum: 38
Positive SD average: 10 Positive DS average: 10
Positive SD square-sum: 754 Positive DS square-sum: 460
Min negative SD: 1 Min negative DS: 6
Max negative SD: 13 Max negative DS: 22
Negative SD number: 4 Negative DS number: 5
Negative SD sum: 38 Negative DS sum: 52
Negative SD average: 10 Negative DS average: 10
Negative SD square-sum: 460 Negative DS square-sum: 754
SD average: 10 DS average: 10
One way results:
Max SD delay: 15 Max DS delay: 16
Min SD delay: 7 Min DS delay: 7
Number of SD delay: 10 Number of DS delay: 10
Sum of SD delay: 78 Sum of DS delay: 85
Square-Sum of SD delay: 666 Square-Sum of DS delay: 787
SD lost packets: 0 DS lost packets: 0
Lost packets for unknown reason: 0
# Display the statistics of the UDP jitter operation.
[DeviceA] display nqa statistics admin test1
NQA entry (admin admin, tag test1) test statistics:
NO. : 1
Start time: 2019-05-29 13:56:14.0
Life time: 47 seconds
Send operation times: 410 Receive response times: 410
Min/Max/Average round trip time: 1/93/19
Square-Sum of round trip time: 206176
Extended results:
Packet loss ratio: 0%
Failures due to timeout: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packets out of sequence: 0
Packets arrived late: 0
UDP-jitter results:
RTT number: 410
Min positive SD: 3 Min positive DS: 1
Max positive SD: 30 Max positive DS: 79
Positive SD number: 186 Positive DS number: 158
Positive SD sum: 2602 Positive DS sum: 1928
Positive SD average: 13 Positive DS average: 12
Positive SD square-sum: 45304 Positive DS square-sum: 31682
Min negative SD: 1 Min negative DS: 1
Max negative SD: 30 Max negative DS: 78
Negative SD number: 181 Negative DS number: 209
Negative SD sum: 181 Negative DS sum: 209
Negative SD average: 13 Negative DS average: 14
Negative SD square-sum: 46994 Negative DS square-sum: 3030
SD average: 9 DS average: 1
One way results:
Max SD delay: 46 Max DS delay: 46
Min SD delay: 7 Min DS delay: 7
Number of SD delay: 410 Number of DS delay: 410
Sum of SD delay: 3705 Sum of DS delay: 3891
Square-Sum of SD delay: 45987 Square-Sum of DS delay: 49393
SD lost packets: 0 DS lost packets: 0
Lost packets for unknown reason: 0
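The SD (source-to-destination) and DS (destination-to-source) fields in the output above derive from per-packet one-way delays. A Python sketch of how the positive/negative jitter counters can be computed from consecutive delays in one direction (a simplified model, not the device's exact algorithm):

```python
def jitter_stats(delays):
    """Summarize one direction: jitter is the delay difference between
    consecutive packets; positive and negative jitter are tracked
    separately, as in the SD/DS counters of the operation output."""
    diffs = [b - a for a, b in zip(delays, delays[1:])]
    pos = [d for d in diffs if d > 0]        # delay increased
    neg = [-d for d in diffs if d < 0]       # delay decreased (magnitude)

    def summary(vals):
        return {"number": len(vals), "sum": sum(vals),
                "min": min(vals, default=0), "max": max(vals, default=0),
                "square_sum": sum(v * v for v in vals)}

    return {"positive": summary(pos), "negative": summary(neg)}
```

For example, the delay sequence 7, 10, 8, 8 ms yields one positive jitter sample of 3 ms and one negative jitter sample of 2 ms.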
Figure 14 Network diagram (Device A, the NQA client, and Device B, the SNMP agent)
Procedure
1. Assign IP addresses to interfaces, as shown in Figure 14. (Details not shown.)
2. Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
3. Configure the SNMP agent (Device B):
# Set the SNMP version to all.
<DeviceB> system-view
[DeviceB] snmp-agent sys-info version all
# Set the read community to public.
[DeviceB] snmp-agent community read public
# Set the write community to private.
[DeviceB] snmp-agent community write private
4. Configure Device A:
# Create an SNMP operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type snmp
# Specify 10.2.2.2 as the destination IP address of the SNMP operation.
[DeviceA-nqa-admin-test1-snmp] destination ip 10.2.2.2
# Enable the saving of history records.
[DeviceA-nqa-admin-test1-snmp] history-record enable
[DeviceA-nqa-admin-test1-snmp] quit
# Start the SNMP operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# After the SNMP operation runs for a period of time, stop the operation.
[DeviceA] undo nqa schedule admin test1
Procedure
1. Assign IP addresses to interfaces, as shown in Figure 15. (Details not shown.)
2. Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
3. Configure Device B:
# Enable the NQA server.
<DeviceB> system-view
[DeviceB] nqa server enable
# Configure a listening service to listen to TCP port 9000 on IP address 10.2.2.2.
[DeviceB] nqa server tcp-connect 10.2.2.2 9000
4. Configure Device A:
# Create a TCP operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type tcp
# Specify 10.2.2.2 as the destination IP address.
[DeviceA-nqa-admin-test1-tcp] destination ip 10.2.2.2
# Set the destination port number to 9000.
[DeviceA-nqa-admin-test1-tcp] destination port 9000
# Enable the saving of history records.
[DeviceA-nqa-admin-test1-tcp] history-record enable
[DeviceA-nqa-admin-test1-tcp] quit
# Start the TCP operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# After the TCP operation runs for a period of time, stop the operation.
[DeviceA] undo nqa schedule admin test1
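Conceptually, the TCP operation measures how long the client takes to complete a TCP three-way handshake with the listening service. A minimal sketch of such a probe (illustrative Python; not the device implementation):

```python
import socket
import time

def tcp_probe(host, port, timeout=3.0):
    """Attempt a TCP connection and return the setup time in milliseconds,
    or None on failure -- roughly what a TCP connect probe measures."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000.0
    except OSError:
        return None
```

In the setup above, the probe would target 10.2.2.2 port 9000, where the NQA server listens.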
Figure 16 Network diagram: Device A (NQA client, 10.1.1.1/16) and Device B (NQA server, 10.2.2.2/16) connected over an IP network.
Procedure
1. Assign IP addresses to interfaces, as shown in Figure 16. (Details not shown.)
2. Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
3. Configure Device B:
# Enable the NQA server.
<DeviceB> system-view
[DeviceB] nqa server enable
# Configure a listening service to listen to UDP port 8000 on IP address 10.2.2.2.
[DeviceB] nqa server udp-echo 10.2.2.2 8000
4. Configure Device A:
# Create a UDP echo operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type udp-echo
# Specify 10.2.2.2 as the destination IP address.
[DeviceA-nqa-admin-test1-udp-echo] destination ip 10.2.2.2
# Set the destination port number to 8000.
[DeviceA-nqa-admin-test1-udp-echo] destination port 8000
# Enable the saving of history records.
[DeviceA-nqa-admin-test1-udp-echo] history-record enable
[DeviceA-nqa-admin-test1-udp-echo] quit
# Start the UDP echo operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# After the UDP echo operation runs for a period of time, stop the operation.
[DeviceA] undo nqa schedule admin test1
[DeviceA] display nqa history admin test1
NQA entry (admin admin, tag test1) history records:
Index Response Status Time
1 25 Succeeded 2019-11-22 10:36:17.9
The output shows that the round-trip time between Device A and port 8000 on Device B is 25
milliseconds.
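The UDP echo operation sends a datagram and measures the time until the server echoes it back. A minimal sketch of that probe (illustrative Python; not the device implementation):

```python
import socket
import time

def udp_echo_probe(host, port, payload=b"nqa-probe", timeout=3.0):
    """Send a UDP datagram and wait for it to be echoed back.
    Returns the round-trip time in milliseconds, or None on timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        start = time.monotonic()
        s.sendto(payload, (host, port))
        try:
            data, _ = s.recvfrom(2048)
        except socket.timeout:
            return None
        if data != payload:
            return None  # response did not match the probe payload
        return (time.monotonic() - start) * 1000.0
```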
Procedure
1. Assign IP addresses to interfaces, as shown in Figure 17. (Details not shown.)
2. Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
3. Execute the ip ttl-expires enable command on the intermediate devices and execute
the ip unreachables enable command on Device B.
4. Configure Device A:
# Create a UDP tracert operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type udp-tracert
# Specify 10.2.2.2 as the destination IP address.
[DeviceA-nqa-admin-test1-udp-tracert] destination ip 10.2.2.2
# Set the destination port number to 33434.
[DeviceA-nqa-admin-test1-udp-tracert] destination port 33434
# Configure Device A to perform three probes to each hop.
[DeviceA-nqa-admin-test1-udp-tracert] probe count 3
# Set the probe timeout time to 500 milliseconds.
[DeviceA-nqa-admin-test1-udp-tracert] probe timeout 500
# Configure the UDP tracert operation to repeat every 5000 milliseconds.
[DeviceA-nqa-admin-test1-udp-tracert] frequency 5000
# Specify GigabitEthernet 1/0/1 as the output interface for UDP packets.
[DeviceA-nqa-admin-test1-udp-tracert] out interface gigabitethernet 1/0/1
# Enable the no-fragmentation feature.
[DeviceA-nqa-admin-test1-udp-tracert] no-fragment enable
# Set the maximum number of consecutive probe failures to 6.
[DeviceA-nqa-admin-test1-udp-tracert] max-failure 6
# Set the TTL value to 1 for UDP packets in the start round of the UDP tracert operation.
[DeviceA-nqa-admin-test1-udp-tracert] init-ttl 1
# Start the UDP tracert operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# After the UDP tracert operation runs for a period of time, stop the operation.
[DeviceA] undo nqa schedule admin test1
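The probe logic configured above — start at init-ttl, send probe-count probes to each hop, and stop at the destination or after max-failure consecutive unanswered hops — can be simulated as follows. This is a conceptual sketch only; a real UDP tracert sends datagrams with increasing TTLs and interprets ICMP time-exceeded and port-unreachable replies:

```python
def udp_tracert(path, init_ttl=1, probe_count=3, max_failure=6):
    """Simulate UDP tracert hop discovery. `path` maps hop number (TTL) to
    the router that answers, or None for a silent hop. Returns the list of
    discovered hops; stops at the destination (the highest TTL in `path`)
    or after `max_failure` consecutive unanswered hops."""
    hops, failures = [], 0
    dest_ttl = max(path)
    for ttl in range(init_ttl, dest_ttl + 1):
        replies = [path.get(ttl) for _ in range(probe_count)]  # probes per hop
        answer = next((r for r in replies if r is not None), None)
        hops.append(answer if answer is not None else "*")
        if answer is None:
            failures += 1
            if failures >= max_failure:
                break
        else:
            failures = 0
            if ttl == dest_ttl:
                break
    return hops

trace = udp_tracert({1: "10.1.1.2", 2: None, 3: "10.2.2.2"})
```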
Procedure
1. Assign IP addresses to interfaces, as shown in Figure 18. (Details not shown.)
2. Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
3. Configure Device B:
# Enable the NQA server.
<DeviceB> system-view
[DeviceB] nqa server enable
# Configure a listening service to listen to UDP port 9000 on IP address 10.2.2.2.
[DeviceB] nqa server udp-echo 10.2.2.2 9000
4. Configure Device A:
# Create a voice operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type voice
# Specify 10.2.2.2 as the destination IP address.
[DeviceA-nqa-admin-test1-voice] destination ip 10.2.2.2
# Set the destination port number to 9000.
[DeviceA-nqa-admin-test1-voice] destination port 9000
[DeviceA-nqa-admin-test1-voice] quit
# Start the voice operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# After the voice operation runs for a period of time, stop the operation.
[DeviceA] undo nqa schedule admin test1
Negative SD number: 255 Negative DS number: 259
Negative SD sum: 759 Negative DS sum: 1796
Negative SD average: 2 Negative DS average: 6
Negative SD square-sum: 53655 Negative DS square-sum: 1691776
SD average: 2 DS average: 6
One way results:
Max SD delay: 343 Max DS delay: 985
Min SD delay: 343 Min DS delay: 985
Number of SD delay: 1 Number of DS delay: 1
Sum of SD delay: 343 Sum of DS delay: 985
Square-Sum of SD delay: 117649 Square-Sum of DS delay: 970225
SD lost packets: 0 DS lost packets: 0
Lost packets for unknown reason: 0
Voice scores:
MOS value: 4.38 ICPIF value: 0
# Display the statistics of the voice operation.
[DeviceA] display nqa statistics admin test1
NQA entry (admin admin, tag test1) test statistics:
NO. : 1
Max SD delay: 359 Max DS delay: 985
Min SD delay: 0 Min DS delay: 0
Number of SD delay: 4 Number of DS delay: 4
Sum of SD delay: 1390 Sum of DS delay: 1079
Square-Sum of SD delay: 483202 Square-Sum of DS delay: 973651
SD lost packets: 0 DS lost packets: 0
Lost packets for unknown reason: 0
Voice scores:
Max MOS value: 4.38 Min MOS value: 4.38
Max ICPIF value: 0 Min ICPIF value: 0
Procedure
# Assign IP addresses to interfaces, as shown in Figure 19. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
# Create a DLSw operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type dlsw
# After the DLSw operation runs for a period of time, stop the operation.
[DeviceA] undo nqa schedule admin test1
Last succeeded probe time: 2019-11-22 10:40:27.7
Extended results:
Packet loss ratio: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to internal error: 0
Failures due to other errors: 0
The output shows that the response time of the DLSw device is 19 milliseconds.
Procedure
# Assign IP addresses to interfaces, as shown in Figure 20. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
# Execute the ip ttl-expires enable command on Device B and the ip unreachables enable command on Device C.
# Create a path jitter operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type path-jitter
# After the path jitter operation runs for a period of time, stop the operation.
[DeviceA] undo nqa schedule admin test1
Verifying the configuration
# Display the most recent result of the path jitter operation.
[DeviceA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Hop IP 10.1.1.2
Basic Results
Send operation times: 10 Receive response times: 10
Min/Max/Average round trip time: 9/21/14
Square-Sum of round trip time: 2419
Extended Results
Failures due to timeout: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packets out of sequence: 0
Packets arrived late: 0
Path-Jitter Results
Jitter number: 9
Min/Max/Average jitter: 1/10/4
Positive jitter number: 6
Min/Max/Average positive jitter: 1/9/4
Sum/Square-Sum positive jitter: 25/173
Negative jitter number: 3
Min/Max/Average negative jitter: 2/10/6
Sum/Square-Sum negative jitter: 19/153
Hop IP 10.2.2.2
Basic Results
Send operation times: 10 Receive response times: 10
Min/Max/Average round trip time: 15/40/28
Square-Sum of round trip time: 4493
Extended Results
Failures due to timeout: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packets out of sequence: 0
Packets arrived late: 0
Path-Jitter Results
Jitter number: 9
Min/Max/Average jitter: 1/10/4
Positive jitter number: 6
Min/Max/Average positive jitter: 1/9/4
Sum/Square-Sum positive jitter: 25/173
Negative jitter number: 3
Min/Max/Average negative jitter: 2/10/6
Sum/Square-Sum negative jitter: 19/153
Example: Configuring NQA collaboration
Network configuration
As shown in Figure 21, configure a static route to Switch C with Switch B as the next hop on Switch
A. Associate the static route, a track entry, and an ICMP echo operation to monitor the state of the
static route.
Figure 21 Network diagram: Switch A (Vlan-int3, 10.2.1.2/24) connects to Switch B (Vlan-int3, 10.2.1.1/24; Vlan-int2, 10.1.1.1/24), which connects to Switch C (Vlan-int2, 10.1.1.2/24).
Procedure
1. Assign IP addresses to interfaces, as shown in Figure 21. (Details not shown.)
2. On Switch A, configure a static route, and associate the static route with track entry 1.
<SwitchA> system-view
[SwitchA] ip route-static 10.1.1.2 24 10.2.1.1 track 1
3. On Switch A, configure an ICMP echo operation:
# Create an NQA operation with administrator name admin and operation tag test1.
[SwitchA] nqa entry admin test1
# Configure the NQA operation type as ICMP echo.
[SwitchA-nqa-admin-test1] type icmp-echo
# Specify 10.2.1.1 as the destination IP address.
[SwitchA-nqa-admin-test1-icmp-echo] destination ip 10.2.1.1
# Configure the operation to repeat every 100 milliseconds.
[SwitchA-nqa-admin-test1-icmp-echo] frequency 100
# Create reaction entry 1. If the number of consecutive probe failures reaches 5, collaboration is
triggered.
[SwitchA-nqa-admin-test1-icmp-echo] reaction 1 checked-element probe-fail
threshold-type consecutive 5 action-type trigger-only
[SwitchA-nqa-admin-test1-icmp-echo] quit
# Start the ICMP operation.
[SwitchA] nqa schedule admin test1 start-time now lifetime forever
4. On Switch A, create track entry 1, and associate it with reaction entry 1 of the NQA operation.
[SwitchA] track 1 nqa entry admin test1 reaction 1
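The collaboration chain above can be pictured as a small state machine: the NQA reaction entry counts consecutive probe failures, the track entry turns negative when the threshold is reached, and the static route is withdrawn while the track entry is negative. A conceptual sketch (not device code; names are illustrative):

```python
class TrackEntry:
    """Sketch of the NQA/track/static-route collaboration: the reaction
    entry triggers after 5 consecutive probe failures, flipping the track
    entry from positive to negative and withdrawing the associated route."""
    def __init__(self, fail_threshold=5):
        self.fail_threshold = fail_threshold
        self.consecutive_failures = 0
        self.state = "positive"

    def record_probe(self, success):
        self.consecutive_failures = 0 if success else self.consecutive_failures + 1
        if self.consecutive_failures >= self.fail_threshold:
            self.state = "negative"
        elif success:
            self.state = "positive"
        return self.state

    def route_active(self):
        # The static route stays in the routing table only while positive.
        return self.state == "positive"

track = TrackEntry()
for ok in [True, False, False, False, False, False]:
    track.record_probe(ok)
```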
Notification delay: Positive 0, Negative 0 (in seconds)
Tracked object:
NQA entry: admin test1
Reaction: 1
# Display brief information about active routes in the routing table on Switch A.
[SwitchA] display ip routing-table
Destinations : 13 Routes : 13
The output shows that the static route with the next hop 10.2.1.1 is active, and the status of the track
entry is positive.
# Remove the IP address of VLAN-interface 3 on Switch B.
<SwitchB> system-view
[SwitchB] interface vlan-interface 3
[SwitchB-Vlan-interface3] undo ip address
# Display brief information about active routes in the routing table on Switch A.
[SwitchA] display ip routing-table
Destinations : 12 Routes : 12
10.2.1.2/32 Direct 0 0 127.0.0.1 InLoop0
10.2.1.255/32 Direct 0 0 10.2.1.2 Vlan3
127.0.0.0/8 Direct 0 0 127.0.0.1 InLoop0
127.0.0.0/32 Direct 0 0 127.0.0.1 InLoop0
127.0.0.1/32 Direct 0 0 127.0.0.1 InLoop0
127.255.255.255/32 Direct 0 0 127.0.0.1 InLoop0
224.0.0.0/4 Direct 0 0 0.0.0.0 NULL0
224.0.0.0/24 Direct 0 0 0.0.0.0 NULL0
255.255.255.255/32 Direct 0 0 127.0.0.1 InLoop0
The output shows that the static route does not exist, and the status of the track entry is negative.
Figure 22 Network diagram: Device A (NQA client, 10.1.1.1/24, 10.3.1.1/24) and Device B (10.2.2.2/24, 10.4.1.2/24) are connected over redundant paths; Device D (10.3.1.2/24, 10.4.1.1/24) sits on the second path.
Procedure
# Assign IP addresses to interfaces, as shown in Figure 22. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
# Create ICMP template icmp.
<DeviceA> system-view
[DeviceA] nqa template icmp icmp
# Set the probe timeout time to 500 milliseconds for the ICMP echo operation.
[DeviceA-nqatplt-icmp-icmp] probe timeout 500
# Configure the ICMP echo operation to repeat every 3000 milliseconds.
[DeviceA-nqatplt-icmp-icmp] frequency 3000
# Configure the NQA client to notify the feature of the successful operation event if the number of
consecutive successful probes reaches 2.
[DeviceA-nqatplt-icmp-icmp] reaction trigger probe-pass 2
# Configure the NQA client to notify the feature of the operation failure if the number of consecutive
failed probes reaches 2.
[DeviceA-nqatplt-icmp-icmp] reaction trigger probe-fail 2
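The two reaction trigger commands implement consecutive-result thresholds: the client notifies the feature only after an unbroken run of successes or failures. That logic can be sketched as follows (illustrative; the event names are made up for this example):

```python
def template_events(results, pass_threshold=2, fail_threshold=2):
    """Sketch of `reaction trigger probe-pass/probe-fail`: emit an event
    when the number of consecutive successes or failures reaches its
    threshold. `results` is an iterable of booleans, one per probe."""
    events, passes, fails = [], 0, 0
    for ok in results:
        if ok:
            passes, fails = passes + 1, 0
            if passes == pass_threshold:
                events.append("operation-succeeded")
        else:
            fails, passes = fails + 1, 0
            if fails == fail_threshold:
                events.append("operation-failed")
    return events
```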
Procedure
# Assign IP addresses to interfaces, as shown in Figure 23. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
# Create DNS template dns.
<DeviceA> system-view
[DeviceA] nqa template dns dns
# Specify the IP address of the DNS server (10.2.2.2) as the destination IP address.
[DeviceA-nqatplt-dns-dns] destination ip 10.2.2.2
# Configure the NQA client to notify the feature of the successful operation event if the number of
consecutive successful probes reaches 2.
[DeviceA-nqatplt-dns-dns] reaction trigger probe-pass 2
# Configure the NQA client to notify the feature of the operation failure if the number of consecutive
failed probes reaches 2.
[DeviceA-nqatplt-dns-dns] reaction trigger probe-fail 2
Example: Configuring the TCP template
Network configuration
As shown in Figure 24, configure a TCP template for a feature to perform the TCP operation. The
operation tests whether Device A can establish a TCP connection to Device B.
Figure 24 Network diagram: Device A (NQA client, 10.1.1.1/16) and Device B (NQA server, 10.2.2.2/16) connected over an IP network.
Procedure
1. Assign IP addresses to interfaces, as shown in Figure 24. (Details not shown.)
2. Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
3. Configure Device B:
# Enable the NQA server.
<DeviceB> system-view
[DeviceB] nqa server enable
# Configure a listening service to listen to TCP port 9000 on IP address 10.2.2.2.
[DeviceB] nqa server tcp-connect 10.2.2.2 9000
4. Configure Device A:
# Create TCP template tcp.
<DeviceA> system-view
[DeviceA] nqa template tcp tcp
# Specify 10.2.2.2 as the destination IP address.
[DeviceA-nqatplt-tcp-tcp] destination ip 10.2.2.2
# Set the destination port number to 9000.
[DeviceA-nqatplt-tcp-tcp] destination port 9000
# Configure the NQA client to notify the feature of the successful operation event if the number
of consecutive successful probes reaches 2.
[DeviceA-nqatplt-tcp-tcp] reaction trigger probe-pass 2
# Configure the NQA client to notify the feature of the operation failure if the number of
consecutive failed probes reaches 2.
[DeviceA-nqatplt-tcp-tcp] reaction trigger probe-fail 2
Figure 25 Network diagram: Device A (NQA client, 10.1.1.1/16) and Device B (NQA server, 10.2.2.2/16) connected over an IP network.
Procedure
1. Assign IP addresses to interfaces, as shown in Figure 25. (Details not shown.)
2. Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
3. Configure Device A:
# Create TCP half open template test.
<DeviceA> system-view
[DeviceA] nqa template tcphalfopen test
# Specify 10.2.2.2 as the destination IP address.
[DeviceA-nqatplt-tcphalfopen-test] destination ip 10.2.2.2
# Configure the NQA client to notify the feature of the successful operation event if the number
of consecutive successful probes reaches 2.
[DeviceA-nqatplt-tcphalfopen-test] reaction trigger probe-pass 2
# Configure the NQA client to notify the feature of the operation failure if the number of
consecutive failed probes reaches 2.
[DeviceA-nqatplt-tcphalfopen-test] reaction trigger probe-fail 2
Procedure
1. Assign IP addresses to interfaces, as shown in Figure 26. (Details not shown.)
2. Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
3. Configure Device B:
# Enable the NQA server.
<DeviceB> system-view
[DeviceB] nqa server enable
# Configure a listening service to listen to UDP port 9000 on IP address 10.2.2.2.
[DeviceB] nqa server udp-echo 10.2.2.2 9000
4. Configure Device A:
# Create UDP template udp.
<DeviceA> system-view
[DeviceA] nqa template udp udp
# Specify 10.2.2.2 as the destination IP address.
[DeviceA-nqatplt-udp-udp] destination ip 10.2.2.2
# Set the destination port number to 9000.
[DeviceA-nqatplt-udp-udp] destination port 9000
# Configure the NQA client to notify the feature of the successful operation event if the number
of consecutive successful probes reaches 2.
[DeviceA-nqatplt-udp-udp] reaction trigger probe-pass 2
# Configure the NQA client to notify the feature of the operation failure if the number of
consecutive failed probes reaches 2.
[DeviceA-nqatplt-udp-udp] reaction trigger probe-fail 2
Procedure
# Assign IP addresses to interfaces, as shown in Figure 27. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
# Create HTTP template http.
<DeviceA> system-view
[DeviceA] nqa template http http
# Configure the NQA client to notify the feature of the successful operation event if the number of
consecutive successful probes reaches 2.
[DeviceA-nqatplt-http-http] reaction trigger probe-pass 2
# Configure the NQA client to notify the feature of the operation failure if the number of consecutive
failed probes reaches 2.
[DeviceA-nqatplt-http-http] reaction trigger probe-fail 2
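Conceptually, an HTTP operation probe succeeds when a GET request to the server completes with a response. A minimal sketch of such a probe (illustrative Python; not the device implementation):

```python
import http.client

def http_probe(host, port, path="/", timeout=3.0):
    """Perform an HTTP GET and report the probe result.
    Returns the status code, or None if the request failed."""
    try:
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        conn.request("GET", path)
        status = conn.getresponse().status
        conn.close()
        return status
    except OSError:
        return None
```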
Example: Configuring the HTTPS template
Network configuration
As shown in Figure 28, configure an HTTPS template for a feature to test whether the NQA client can
get data from the HTTPS server (Device B).
Figure 28 Network diagram: Device A (NQA client, 10.1.1.1/16) and Device B (HTTPS server, 10.2.2.2/16) connected over an IP network.
Procedure
# Assign IP addresses to interfaces, as shown in Figure 28. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
# Configure an SSL client policy named abc on Device A, and make sure Device A can use the policy
to connect to the HTTPS server. (Details not shown.)
# Create HTTPS template https.
<DeviceA> system-view
[DeviceA] nqa template https https
# Set the HTTPS operation type to get (the default HTTPS operation type).
[DeviceA-nqatplt-https-https] operation get
# Configure the NQA client to notify the feature of the successful operation event if the number of
consecutive successful probes reaches 2.
[DeviceA-nqatplt-https-https] reaction trigger probe-pass 2
# Configure the NQA client to notify the feature of the operation failure if the number of consecutive
failed probes reaches 2.
[DeviceA-nqatplt-https-https] reaction trigger probe-fail 2
Figure 29 Network diagram: Device A (NQA client, 10.1.1.1/16) and Device B (FTP server, 10.2.2.2/16) connected over an IP network.
Procedure
# Assign IP addresses to interfaces, as shown in Figure 29. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
# Create FTP template ftp.
<DeviceA> system-view
[DeviceA] nqa template ftp ftp
# Configure the NQA client to notify the feature of the successful operation event if the number of
consecutive successful probes reaches 2.
[DeviceA-nqatplt-ftp-ftp] reaction trigger probe-pass 2
# Configure the NQA client to notify the feature of the operation failure if the number of consecutive
failed probes reaches 2.
[DeviceA-nqatplt-ftp-ftp] reaction trigger probe-fail 2
Procedure
# Assign IP addresses to interfaces, as shown in Figure 30. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
# Configure the RADIUS server. (Details not shown.)
# Create RADIUS template radius.
<DeviceA> system-view
[DeviceA] nqa template radius radius
# Set the shared key to 123456 in plain text for secure RADIUS authentication.
[DeviceA-nqatplt-radius-radius] key simple 123456
# Configure the NQA client to notify the feature of the successful operation event if the number of
consecutive successful probes reaches 2.
[DeviceA-nqatplt-radius-radius] reaction trigger probe-pass 2
# Configure the NQA client to notify the feature of the operation failure if the number of consecutive
failed probes reaches 2.
[DeviceA-nqatplt-radius-radius] reaction trigger probe-fail 2
Procedure
# Assign IP addresses to interfaces, as shown in Figure 31. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other.
(Details not shown.)
# Configure an SSL client policy named abc on Device A, and make sure Device A can use the policy
to connect to the SSL server on Device B. (Details not shown.)
# Create SSL template ssl.
<DeviceA> system-view
[DeviceA] nqa template ssl ssl
# Set the destination IP address and port number to 10.2.2.2 and 9000, respectively.
[DeviceA-nqatplt-ssl-ssl] destination ip 10.2.2.2
[DeviceA-nqatplt-ssl-ssl] destination port 9000
[DeviceA-nqatplt-ssl-ssl] ssl-client-policy abc
# Configure the NQA client to notify the feature of the successful operation event if the number of
consecutive successful probes reaches 2.
[DeviceA-nqatplt-ssl-ssl] reaction trigger probe-pass 2
# Configure the NQA client to notify the feature of the operation failure if the number of consecutive
failed probes reaches 2.
[DeviceA-nqatplt-ssl-ssl] reaction trigger probe-fail 2
Configuring iNQA
About iNQA
Intelligent Network Quality Analyzer (iNQA) allows you to measure network performance quickly in
large-scale IP networks. iNQA supports measuring packet loss on forward, backward, and
bidirectional flows. The packet loss data includes the number of lost packets, packet loss rate, number
of lost bytes, and byte loss rate. The measurement results help you determine when and where
packet loss occurs and how severe it is.
iNQA benefits
iNQA provides the following benefits:
• True measurement results—iNQA measures the service packets directly to calculate packet
loss results, thus reflecting the real network quality.
• Wide application range—Applicable to both Layer 2 networks and Layer 3 IP networks. iNQA
flexibly supports network-level and direct-link measurement.
• Fast fault location—iNQA obtains the packet loss time, packet loss location, and number of
lost packets in real time.
• Applicable to different applications—You can apply iNQA to multiple scenarios, such as
point-to-point, point-to-multipoint, and multipoint-to-multipoint.
Basic concepts
Figure 32 shows the important iNQA concepts including MP, collector, analyzer, and AMS.
Figure 32 Network diagram: An analyzer collects data from three collectors along the flow path. Collector 1 hosts MP 100 (in-point, inbound), Collector 2 hosts MP 200 (mid-point, inbound), and Collector 3 hosts MP 300 (out-point, outbound). AMS 1 covers the MP 100 to MP 200 span and AMS 2 covers the MP 200 to MP 300 span for point-to-point measurement; MP 100 to MP 300 is the end-to-end measurement.
Collector
The collector manages MPs, collects data from MPs, and reports the data to the analyzer.
Analyzer
The analyzer collects the data from collector instances and summarizes the data.
A device supports both the collector and analyzer functionalities. You can enable the collector and
analyzer functionalities on different devices or the same device.
Target flow
A target flow is vital for iNQA measurement. You can specify a flow by using any combination of the
following items: source IPv4 address/segment, destination IPv4 address/segment, protocol type,
source port number, destination port number, and DSCP value. Using more items defines a more
explicit flow and generates more accurate analysis data.
iNQA measures the flow according to the flow direction. After you define a forward flow, the flow in
the opposite direction is the backward flow. Bidirectional flows refer to a forward flow and its
backward flow. As shown in Figure 33, if you define the flow from Device 1 to Device 2 as the forward
flow, then a flow from Device 2 to Device 1 is the backward flow. If you want to measure the packet
loss on forward and backward flows between Device 1 and Device 2, you can specify bidirectional
flows. The intermediate devices between the ingress and egress devices of the bidirectional flows
can be the same or different.
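The flow definition described above behaves like a filter in which every unspecified item is a wildcard: the more items you set, the more precisely the target flow is identified. A sketch of that matching rule (field names are illustrative, not device syntax):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FlowSpec:
    """A target-flow definition: any field left as None is a wildcard."""
    src_ip: Optional[str] = None
    dst_ip: Optional[str] = None
    protocol: Optional[str] = None
    src_port: Optional[int] = None
    dst_port: Optional[int] = None
    dscp: Optional[int] = None

    def matches(self, packet: dict) -> bool:
        # A packet matches when every specified field agrees.
        for field in ("src_ip", "dst_ip", "protocol",
                      "src_port", "dst_port", "dscp"):
            want = getattr(self, field)
            if want is not None and packet.get(field) != want:
                return False
        return True

flow = FlowSpec(src_ip="10.1.1.1", dst_ip="10.2.2.2", protocol="udp")
```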
Figure 33 Flow direction: the forward flow runs from Device 1 to Device 2, and the backward flow runs from Device 2 to Device 1.
MP
MP is a logical concept. An MP counts statistics and generates data for a flow. To measure packet
loss on an interface on a collector, an MP must be bound to the interface.
An MP contains the following attributes:
• Measurement location of the flow.
An ingress point refers to the point where the flow enters the network.
An egress point refers to the point where the flow leaves the network.
A middle point refers to a point between an ingress point and an egress point.
• Flow direction on the measurement point.
A flow entering the MP is an inbound flow, and a flow leaving the MP is an outbound flow.
As shown in Figure 34, MP 100 is marked as in-point/inbound. MP 100 is the ingress point of
the flow and the flow enters MP 100.
As shown in Figure 34, MP 110 is marked as in-point/outbound. MP 110 is the ingress point
of the flow and the flow leaves MP 110.
As shown in Figure 34, MP 200 is marked as out-point/outbound. MP 200 is the egress point
of the flow and the flow leaves MP 200.
As shown in Figure 34, MP 210 is marked as out-point/inbound. MP 210 is the egress point
of the flow and the flow enters MP 210.
Figure 34 MP: On Device 1, MP 100 is the in-point with an inbound flow, and MP 110 is the in-point with an outbound flow. On Device 2, MP 200 is the out-point with an outbound flow, and MP 210 is the out-point with an inbound flow.
AMS
Configured on the analyzer, an AMS defines a measurement span for point-to-point performance
measurement. You can configure multiple AMSs for an instance, and each AMS can be bound to
MPs on any collector of the same instance. Therefore, iNQA can measure and summarize the data
of the forward flow, backward flow, or bidirectional flows in any AMS.
Each AMS has an ingress MP group and egress MP group. The ingress MP group is the set of the
ingress MPs in the AMS and the egress MP group is the set of the egress MPs.
As shown in Figure 32:
• To measure the packet loss between MP 100 and MP 300, no AMS is needed.
• If packet loss occurs between MP 100 and MP 300, configure AMS 1 and AMS 2 on the
analyzer to locate the span where the packet loss occurs.
Bind MP 100 of collector 1 and MP 200 of collector 2 to AMS 1. Add MP 100 to the ingress
MP group of the AMS 1 and MP 200 to the egress MP group of the AMS 1.
Bind MP 200 of collector 2 and MP 300 of collector 3 to AMS 2. Add MP 200 to the ingress
MP group of the AMS 2 and MP 300 to the egress MP group of the AMS 2.
Instance
The instance allows measurement on a per-flow basis. In an instance, you can configure the target
flow, flow direction, MPs, and measurement interval.
On the collector and analyzer, create an instance of the same ID for the same target flow. An
instance can be bound to only one target flow. On the same device, you can configure multiple
instances to measure and collect the packet loss rate of different target flows.
Flag bit
Flag bits, also called color bits, are used to distinguish target flows from other traffic flows.
iNQA uses bits 5 to 7 of the ToS field in the IPv4 packet header as flag bits. This device supports
using only bit 5 as the flag bit.
The ToS field consists of a 6-bit DSCP field (bits 0 to 5) and a 2-bit ECN field (bits 6 to 7). If bit 5 is
used as the flag bit, do not use it for DSCP purposes. Otherwise, packet loss statistics will be inaccurate.
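Under the bit numbering used in the text (bit 0 is the most significant bit of the ToS byte, as in RFC 791), bit 5 corresponds to the value 0x04, and coloring and decoloring are single-bit operations. An illustrative sketch:

```python
# Bit 5 of the ToS byte, counting from the most significant bit:
# bits 0-5 are DSCP and bits 6-7 are ECN, so bit 5 has value 0x04.
COLOR_BIT = 0x04

def color(tos: int) -> int:
    """Mark a packet as part of the target flow."""
    return tos | COLOR_BIT

def decolor(tos: int) -> int:
    """Clear the color bit, restoring the original ToS value."""
    return tos & ~COLOR_BIT & 0xFF

def is_colored(tos: int) -> bool:
    return bool(tos & COLOR_BIT)
```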
Application scenarios
End-to-end packet loss measurement
iNQA measures whether packet loss occurs between the ingress points (where the target flow enters
the IP network) and the egress points (where the flow leaves the network).
• Scenario 1: The target flow has only one ingress point and one egress point, and the two points
are on the same network, for example, the local area network or the core network.
As shown in Figure 35, to measure packet loss for the flow in the network, deploy iNQA on
Device 1 and Device 3.
Figure 35 Scenario 1: The flow enters the network at Device 1 (MP 100, in-point, inbound, on Interface 1) and leaves at Device 3 (MP 300, out-point, outbound, on Interface 2). Device 2 and Device 4 are intermediate devices.
• Scenario 2: The target flow can have multiple ingress points and egress points, and the points
are on the same network, for example, the local area network or the core network.
As shown in Figure 36, to measure packet loss for the flow in the network, deploy iNQA on
Device 1, Device 2, Device 3, and Device 4.
Figure 36 Scenario 2: The flow enters the network at Device 1 (MP 100, in-point, inbound, Interface 1) and Device 2 (MP 200, in-point, inbound, Interface 2), and leaves at Device 3 (MP 300, out-point, outbound, Interface 3) and Device 4 (MP 400, out-point, outbound, Interface 4).
• Scenario 3: The ingress points and egress points of a target flow are on different networks.
As shown in Figure 37, to measure whether packet loss occurs when the flow crosses the
IP network, deploy iNQA on the egress devices of the headquarters and branches.
Figure 37 Scenario 3: The flow leaves the headquarters at Device 1 (MP 100, in-point, outbound, Interface 1), crosses the IP network, and enters Branch 1 at Device 2 (MP 200, out-point, inbound, Interface 2) and Branch 2 at Device 3 (MP 300, out-point, inbound, Interface 3).
Operating mechanism
iNQA uses the model of multi-point collection and single-point calculation. Multiple collectors collect
and report the packet data periodically and one analyzer calculates the data periodically.
Before starting the iNQA packet loss measurement, make sure all collectors are time synchronized
through NTP or PTP, so that all collectors use the same measurement interval to color the flow and
report the packet statistics to the analyzer. As a best practice, also synchronize the analyzer with the
collectors to facilitate management and maintenance. For more information about NTP,
see "Configuring NTP." For more information about PTP, see "Configuring PTP."
The number of incoming packets and that of outgoing packets in a network should be equal within a
time period. If they are not equal, packet loss occurs in the network.
As shown in Figure 39, the flow enters the network from MP 100, passes through MP 200, and
leaves the network from MP 300. The devices that the flow traverses are collectors and NTP clients,
and the aggregation device is the analyzer and NTP server.
The iNQA measurement works as follows:
1. The analyzer synchronizes the time with all collectors through the NTP protocol.
2. The ingress MP on collector 1 identifies the target flow. It alternately colors and decolors the
packets in the flow at intervals, and periodically reports the packet statistics to the analyzer.
3. The middle MP on collector 2 identifies the target flow and reports the packet statistics to the
analyzer periodically.
4. The egress MP on collector 3 identifies the target flow. It decolors the colored packets and
reports the packet statistics to the analyzer periodically.
5. The analyzer calculates packet loss for the flow of the same period and same instance as
follows:
Number of lost packets = Number of incoming packets on the MP – Number of outgoing
packets on the MP
Packet loss rate = (Number of incoming packets on the MP – Number of outgoing packets on
the MP) / Number of incoming packets on the MP
The analyzer calculates byte loss in the same way as packet loss.
For the end-to-end measurement, the data from the ingress MP and egress MP is used.
For the point-to-point measurement, the analyzer calculates the result on a per-AMS basis.
• In AMS 1: Packet loss = Number of packets at MP 100 – Number of packets at MP 200
• In AMS 2: Packet loss = Number of packets at MP 200 – Number of packets at MP 300
Figure 39 Network diagram
(The figure shows the analyzer, acting as the NTP server, performing end-to-end measurement across AMS 1 and AMS 2.)
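The loss formulas above can be sketched in code as follows. This is a minimal illustration of the per-period calculation; the function name and the sample packet counts are hypothetical, not part of iNQA:

```python
def loss_stats(in_pkts, out_pkts):
    """Per-period loss for one instance: (lost packets, loss rate)."""
    lost = in_pkts - out_pkts
    rate = lost / in_pkts if in_pkts else 0.0
    return lost, rate

# End-to-end result: ingress MP versus egress MP for the same period.
lost, rate = loss_stats(in_pkts=100, out_pkts=85)
print(lost, f"{rate:.2%}")   # 15 15.00%

# Point-to-point result: one calculation per AMS.
counts = {"MP100": 100, "MP200": 92, "MP300": 85}
ams1_lost = counts["MP100"] - counts["MP200"]   # loss within AMS 1
ams2_lost = counts["MP200"] - counts["MP300"]   # loss within AMS 2
print(ams1_lost, ams2_lost)   # 8 7
```

Byte loss follows the same arithmetic with byte counts in place of packet counts.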
Restrictions and guidelines: iNQA configuration
Instance restrictions and guidelines
If an analyzer instance is not correctly bound to a collector, or the bound collector does not report the data on time, the analyzer uses 0 in the packet loss calculation. For example, if an analyzer instance is bound to only ingress MPs but no egress MPs, the packet loss rate is 100% because the analyzer uses 0 as the packet count on egress MPs.
A collector is uniquely identified by its ID. For the same collector, the specified collector ID must be
the same as the collector ID bound to an analyzer instance on the analyzer. The collector ID is the
IPv4 address of the collector, and must be routable from the analyzer. As a best practice, configure
the Router ID of the device as the collector ID.
An analyzer is uniquely identified by its ID. For the same analyzer, the specified analyzer ID must be
the same as the analyzer ID bound with a collector instance on the collector. The analyzer ID is the
IPv4 address of the analyzer, and must be routable from the collector. As a best practice, configure
the Router ID of the device as the analyzer ID.
To measure the same flow, configure the same instance ID, the target flow attributes, and
measurement interval on the analyzer and collectors.
iNQA tasks at a glance
To configure iNQA, perform the following tasks:
• Configure a collector
Configuring parameters for a collector
Configuring a collector instance
Configuring an MP
Enabling packet loss measurement
• Configure the analyzer
Configuring parameters for the analyzer
Configuring an analyzer instance
Configuring an AMS
An AMS is not required for end-to-end packet loss measurement. Configure an AMS only for point-to-point packet loss measurement.
Enabling the measurement functionality
• Configuring iNQA logging
Prerequisites
Before configuring iNQA, configure NTP or PTP to synchronize the clock between the analyzer and
all collectors. For more information about NTP, see "Configuring NTP." For more information about
PTP, see "Configuring PTP."
Configure a collector
Configuring parameters for a collector
1. Enter system view.
system-view
2. Enable the collector functionality and enter its view.
inqa collector
By default, the collector functionality is disabled.
3. Specify a collector ID.
collector id collector-id
By default, no collector ID is specified.
4. Specify a ToS field bit to flag packet loss measurement.
flag loss-measure tos-bit tos-bit
By default, no ToS field bit is specified to flag packet loss measurement.
5. Bind an analyzer to the collector.
analyzer analyzer-id [ udp-port port-number ] [ vpn-instance
vpn-instance-name ]
By default, no analyzer is bound to a collector.
An analyzer specified in collector view is bound to all collector instances on the collector.
An analyzer specified in collector instance view is bound to the specified collector instance. The
analyzer ID in collector instance view takes precedence over that in collector view. A collector
and a collector instance can be bound to only one analyzer. If you execute this command
multiple times in the same view, the most recent configuration takes effect.
6. Specify the measurement interval for the collector instance.
interval interval
By default, the measurement interval for a collector instance is 10 seconds.
Make sure the measurement interval for the same collector instance is the same on all collectors.
To modify the measurement interval for an enabled collector instance, first disable the measurement, and then modify the measurement interval for that collector instance on all collectors.
7. (Optional.) Configure a description for a collector instance.
description text
By default, a collector instance does not have a description.
Configuring an MP
1. Enter system view.
system-view
2. Enter collector view.
inqa collector
3. Create a collector instance and enter its view.
instance instance-id
4. Configure an MP.
mp mp-id { in-point | mid-point | out-point } port-direction { inbound
| outbound }
By default, no MP is configured.
5. Return to collector view.
quit
6. Return to system view.
quit
7. Enter interface view.
interface interface-type interface-number
8. Bind an interface to the MP.
inqa mp mp-id
By default, no interface is bound to an MP.
Enabling packet loss measurement
Procedure
1. Enter system view.
system-view
2. Enter collector view.
inqa collector
3. Create a collector instance and enter its view.
instance instance-id
4. Enable the fixed duration packet loss measurement for the collector instance.
On non-middle points:
loss-measure enable duration [ duration ]
On middle points:
loss-measure enable mid-point duration [ duration ]
By default, the fixed duration packet loss measurement is disabled.
Enable the fixed duration packet loss measurement on middle points only in the point-to-point
performance measurements.
5. Enable continual packet loss measurement for the collector instance.
loss-measure enable continual
By default, the continual packet loss measurement is disabled.
Configuring an analyzer instance
1. Enter system view.
system-view
2. Enter analyzer view.
inqa analyzer
3. Create an analyzer instance and enter its view.
instance instance-id
4. (Optional.) Configure a description for an analyzer instance.
description text
By default, an analyzer instance does not have a description.
5. Bind a collector to the analyzer instance.
collector collector-id
By default, no collector is bound to an analyzer instance.
Configuring an AMS
About this task
An AMS is not required for end-to-end packet loss measurement. Configure an AMS only for point-to-point packet loss measurement.
Procedure
1. Enter system view.
system-view
2. Enter analyzer view.
inqa analyzer
3. Create an analyzer instance and enter its view.
instance instance-id
4. Create an AMS and enter its view.
ams ams-id
5. Specify the flow direction to be measured in the analyzer AMS.
flow { backward | bidirection | forward }
By default, no flow direction is specified to be measured in the analyzer AMS.
6. Add an MP to the ingress MP group for the AMS.
in-group collector collector-id mp mp-id
By default, the ingress MP group for an AMS does not have any MP.
7. Add an MP to the egress MP group for the AMS.
out-group collector collector-id mp mp-id
By default, the egress MP group for an AMS does not have any MP.
Enabling the measurement functionality
1. Enter system view.
system-view
2. Enter analyzer view.
inqa analyzer
3. Enter analyzer instance view.
instance instance-id
4. Enable the measurement functionality of the analyzer instance.
measure enable
By default, the measurement functionality of an analyzer instance is disabled.
Display and maintenance commands for iNQA
Task                                                    Command
Display the collector configuration.                    display inqa collector
Display the collector instance configuration.           display inqa collector instance { instance-id | all }
Display the analyzer configuration.                     display inqa analyzer
Display the analyzer instance configuration.            display inqa analyzer instance { instance-id | all }
Display the AMS configuration in an analyzer instance.  display inqa analyzer instance instance-id ams { ams-id | all }
Display iNQA packet loss statistics.                    display inqa statistics loss instance instance-id [ ams ams-id ]
(The figure shows Device 1 at 10.1.1.1 and Device 2 at 10.2.1.1 performing end-to-end measurement between Branch A and Branch B.)
Prerequisites
1. Assign 10.1.1.1 and 10.2.1.1 to Collector 1 and Collector 2, respectively.
2. Configure OSPF in the IP network so that Collector 1 and the analyzer can reach each other. (Details not shown.)
3. Configure NTP or PTP on Collector 1 and Collector 2 for clock synchronization. (Details not
shown.)
Procedure
1. Configure Collector 1:
# Specify 10.1.1.1 as the collector ID.
<Collector1> system-view
[Collector1] inqa collector
[Collector1-inqa-collector] collector id 10.1.1.1
# Bind the collector to the analyzer with ID 10.2.1.1.
[Collector1-inqa-collector] analyzer 10.2.1.1
# Specify ToS field bit 5 as the flag bit.
[Collector1-inqa-collector] flag loss-measure tos-bit 5
# Configure collector instance 1 to monitor the bidirectional flows entering the network at GigabitEthernet1/0/1 between 10.1.1.0/24 and 10.2.1.0/24.
[Collector1-inqa-collector] instance 1
[Collector1-inqa-collector-instance-1] flow bidirection source-ip 10.1.1.0 24
destination-ip 10.2.1.0 24
[Collector1-inqa-collector-instance-1] mp 100 in-point port-direction inbound
[Collector1-inqa-collector-instance-1] quit
[Collector1-inqa-collector] quit
[Collector1] interface gigabitethernet 1/0/1
[Collector1-GigabitEthernet1/0/1] inqa mp 100
[Collector1-GigabitEthernet1/0/1] quit
# Enable continual packet loss measurement.
[Collector1] inqa collector
[Collector1-inqa-collector] instance 1
[Collector1-inqa-collector-instance-1] loss-measure enable continual
[Collector1-inqa-collector-instance-1] return
<Collector1>
2. Configure Collector 2 and the analyzer:
# Specify 10.2.1.1 as the collector ID.
<AnalyzerColl2> system-view
[AnalyzerColl2] inqa collector
[AnalyzerColl2-inqa-collector] collector id 10.2.1.1
# Bind the collector to the analyzer with ID 10.2.1.1.
[AnalyzerColl2-inqa-collector] analyzer 10.2.1.1
# Specify ToS field bit 5 as the flag bit.
[AnalyzerColl2-inqa-collector] flag loss-measure tos-bit 5
# Configure collector instance 1 to monitor the bidirectional flows leaving the network at GigabitEthernet1/0/1 between 10.1.1.0/24 and 10.2.1.0/24.
[AnalyzerColl2-inqa-collector] instance 1
[AnalyzerColl2-inqa-collector-instance-1] flow bidirection source-ip 10.1.1.0 24
destination-ip 10.2.1.0 24
[AnalyzerColl2-inqa-collector-instance-1] mp 200 out-point port-direction outbound
[AnalyzerColl2-inqa-collector-instance-1] quit
[AnalyzerColl2-inqa-collector] quit
[AnalyzerColl2] interface gigabitethernet 1/0/1
[AnalyzerColl2-GigabitEthernet1/0/1] inqa mp 200
[AnalyzerColl2-GigabitEthernet1/0/1] quit
# Enable continual packet loss measurement.
[AnalyzerColl2] inqa collector
[AnalyzerColl2-inqa-collector] instance 1
[AnalyzerColl2-inqa-collector-instance-1] loss-measure enable continual
[AnalyzerColl2-inqa-collector-instance-1] quit
[AnalyzerColl2-inqa-collector] quit
# Specify 10.2.1.1 as the analyzer ID.
[AnalyzerColl2] inqa analyzer
[AnalyzerColl2-inqa-analyzer] analyzer id 10.2.1.1
# Bind analyzer instance 1 to Collector 1 and Collector 2.
[AnalyzerColl2-inqa-analyzer] instance 1
[AnalyzerColl2-inqa-analyzer-instance-1] collector 10.1.1.1
[AnalyzerColl2-inqa-analyzer-instance-1] collector 10.2.1.1
# Set the packet loss upper limit and packet loss lower limit to 6% and 4%, respectively.
[AnalyzerColl2-inqa-analyzer-instance-1] loss-measure alarm upper-limit 6
lower-limit 4
# Enable the measurement functionality of analyzer instance 1.
[AnalyzerColl2-inqa-analyzer-instance-1] measure enable
[AnalyzerColl2-inqa-analyzer-instance-1] return
<AnalyzerColl2>
Verifying the configuration
# Display the collector configuration.
<AnalyzerColl2> display inqa collector
Collector ID : 10.2.1.1
Loss-measure flag : 5
Analyzer ID : 10.2.1.1
Analyzer UDP-port : 53312
VPN-instance-name : --
Current instance count : 1
# Display the configuration of collector instance 1.
<AnalyzerColl2> display inqa collector instance 1
Instance ID : 1
Status : Enabled
Duration : --
description : --
Analyzer ID : --
Analyzer UDP-port : --
VPN-instance-name : --
Interval : 10 sec
Flow configuration:
flow bidirection source-ip 10.1.1.0 24 destination-ip 10.2.1.0 24
MP configuration:
mp 200 out-point outbound, GE1/0/1
# Display the analyzer configuration.
<AnalyzerColl2> display inqa analyzer
Analyzer ID : 10.2.1.1
Protocol UDP-port : 53312
Current instance count : 1
# Display the configuration of analyzer instance 1.
<AnalyzerColl2> display inqa analyzer instance 1
Instance ID : 1
Status : Enabled
Description : --
Alarm upper-limit : 6.000000%
Alarm lower-limit : 4.000000%
Current AMS count : 0
Collectors : 10.1.1.1
10.2.1.1
# Display iNQA packet loss statistics of analyzer instance 1.
<AnalyzerColl2> display inqa statistics loss instance 1
Latest packet loss statistics for forward flow:
Period LostPkts PktLoss% LostBytes ByteLoss%
19122483 15 15.000000% 1500 15.000000%
19122482 15 15.000000% 1500 15.000000%
19122481 15 15.000000% 1500 15.000000%
19122480 15 15.000000% 1500 15.000000%
19122479 15 15.000000% 1500 15.000000%
19122478 15 15.000000% 1500 15.000000%
Latest packet loss statistics for backward flow:
Period LostPkts PktLoss% LostBytes ByteLoss%
19122483 15 15.000000% 1500 15.000000%
19122482 15 15.000000% 1500 15.000000%
19122481 15 15.000000% 1500 15.000000%
19122480 15 15.000000% 1500 15.000000%
19122479 15 15.000000% 1500 15.000000%
19122478 15 15.000000% 1500 15.000000%
(The figure shows end-to-end measurement spanning AMS 1 and AMS 2, each of which is measured point-to-point.)
Prerequisites
1. Assign 10.1.1.1, 10.2.1.1, and 10.3.1.1 to Collector 1, Collector 2, and Collector 3, respectively.
2. Configure OSPF in the IP network so that Collector 1, Collector 3, and the analyzer can reach each other. (Details not shown.)
3. Configure NTP or PTP on Collector 1, Collector 2, and Collector 3 for clock synchronization.
(Details not shown.)
Procedure
1. Configure Collector 1:
# Specify 10.1.1.1 as the collector ID.
<Collector1> system-view
[Collector1] inqa collector
[Collector1-inqa-collector] collector id 10.1.1.1
# Bind the collector to the analyzer with ID 10.2.1.1.
[Collector1-inqa-collector] analyzer 10.2.1.1
# Specify ToS field bit 5 as the flag bit.
[Collector1-inqa-collector] flag loss-measure tos-bit 5
# Configure collector instance 1 to monitor the forward flow entering the network at GigabitEthernet1/0/1 from 10.1.1.0/24 to 10.3.1.0/24.
[Collector1-inqa-collector] instance 1
[Collector1-inqa-collector-instance-1] flow forward source-ip 10.1.1.0 24
destination-ip 10.3.1.0 24
[Collector1-inqa-collector-instance-1] mp 100 in-point port-direction inbound
[Collector1-inqa-collector-instance-1] quit
[Collector1-inqa-collector] quit
[Collector1] interface gigabitethernet 1/0/1
[Collector1-GigabitEthernet1/0/1] inqa mp 100
[Collector1-GigabitEthernet1/0/1] quit
# Run the packet loss measurement for 15 minutes for collector instance 1.
[Collector1] inqa collector
[Collector1-inqa-collector] instance 1
[Collector1-inqa-collector-instance-1] loss-measure enable duration 15
[Collector1-inqa-collector-instance-1] return
<Collector1>
2. Configure Collector 2 and the analyzer:
# Specify 10.2.1.1 as the collector ID.
<AnalyzerColl2> system-view
[AnalyzerColl2] inqa collector
[AnalyzerColl2-inqa-collector] collector id 10.2.1.1
# Bind the collector to the analyzer with ID 10.2.1.1.
[AnalyzerColl2-inqa-collector] analyzer 10.2.1.1
# Specify ToS field bit 5 as the flag bit.
[AnalyzerColl2-inqa-collector] flag loss-measure tos-bit 5
# Configure collector instance 1 to monitor the forward flow passing through GigabitEthernet1/0/1 from 10.1.1.0/24 to 10.3.1.0/24.
[AnalyzerColl2-inqa-collector] instance 1
[AnalyzerColl2-inqa-collector-instance-1] flow forward source-ip 10.1.1.0 24
destination-ip 10.3.1.0 24
[AnalyzerColl2-inqa-collector-instance-1] mp 200 mid-point port-direction inbound
[AnalyzerColl2-inqa-collector-instance-1] quit
[AnalyzerColl2-inqa-collector] quit
[AnalyzerColl2] interface gigabitethernet 1/0/1
[AnalyzerColl2-GigabitEthernet1/0/1] inqa mp 200
[AnalyzerColl2-GigabitEthernet1/0/1] quit
# Run the packet loss measurement for 15 minutes for collector instance 1.
[AnalyzerColl2] inqa collector
[AnalyzerColl2-inqa-collector] instance 1
[AnalyzerColl2-inqa-collector-instance-1] loss-measure enable mid-point duration 15
[AnalyzerColl2-inqa-collector-instance-1] quit
[AnalyzerColl2-inqa-collector] quit
# Specify 10.2.1.1 as the analyzer ID.
[AnalyzerColl2] inqa analyzer
[AnalyzerColl2-inqa-analyzer] analyzer id 10.2.1.1
# Bind analyzer instance 1 to Collector 1, Collector 2, and Collector 3.
[AnalyzerColl2-inqa-analyzer] instance 1
[AnalyzerColl2-inqa-analyzer-instance-1] collector 10.1.1.1
[AnalyzerColl2-inqa-analyzer-instance-1] collector 10.2.1.1
[AnalyzerColl2-inqa-analyzer-instance-1] collector 10.3.1.1
# Configure AMS 1 to measure the packet loss rate for the forward flow from MP 100 to MP 200.
[AnalyzerColl2-inqa-analyzer-instance-1] ams 1
[AnalyzerColl2-inqa-analyzer-instance-1-ams-1] flow forward
[AnalyzerColl2-inqa-analyzer-instance-1-ams-1] in-group collector 10.1.1.1 mp 100
[AnalyzerColl2-inqa-analyzer-instance-1-ams-1] out-group collector 10.2.1.1 mp 200
[AnalyzerColl2-inqa-analyzer-instance-1-ams-1] quit
# Configure AMS 2 to measure the packet loss rate for the forward flow from MP 200 to MP 300.
[AnalyzerColl2-inqa-analyzer-instance-1] ams 2
[AnalyzerColl2-inqa-analyzer-instance-1-ams-2] flow forward
[AnalyzerColl2-inqa-analyzer-instance-1-ams-2] in-group collector 10.2.1.1 mp 200
[AnalyzerColl2-inqa-analyzer-instance-1-ams-2] out-group collector 10.3.1.1 mp 300
[AnalyzerColl2-inqa-analyzer-instance-1-ams-2] quit
# Set the packet loss upper limit and packet loss lower limit to 6% and 4%, respectively.
[AnalyzerColl2-inqa-analyzer-instance-1] loss-measure alarm upper-limit 6
lower-limit 4
# Enable the measurement functionality of analyzer instance 1.
[AnalyzerColl2-inqa-analyzer-instance-1] measure enable
[AnalyzerColl2-inqa-analyzer-instance-1] return
<AnalyzerColl2>
3. Configure Collector 3:
# Specify 10.3.1.1 as the collector ID.
<Collector3> system-view
[Collector3] inqa collector
[Collector3-inqa-collector] collector id 10.3.1.1
# Bind the collector to the analyzer with ID 10.2.1.1.
[Collector3-inqa-collector] analyzer 10.2.1.1
# Specify ToS field bit 5 as the flag bit.
[Collector3-inqa-collector] flag loss-measure tos-bit 5
# Configure collector instance 1 to monitor the forward flow leaving the network at GigabitEthernet1/0/1 from 10.1.1.0/24 to 10.3.1.0/24.
[Collector3-inqa-collector] instance 1
[Collector3-inqa-collector-instance-1] flow forward source-ip 10.1.1.0 24
destination-ip 10.3.1.0 24
[Collector3-inqa-collector-instance-1] mp 300 out-point port-direction outbound
[Collector3-inqa-collector-instance-1] quit
[Collector3-inqa-collector] quit
[Collector3] interface gigabitethernet 1/0/1
[Collector3-GigabitEthernet1/0/1] inqa mp 300
[Collector3-GigabitEthernet1/0/1] quit
# Run the packet loss measurement for 15 minutes for collector instance 1.
[Collector3] inqa collector
[Collector3-inqa-collector] instance 1
[Collector3-inqa-collector-instance-1] loss-measure enable duration 15
[Collector3-inqa-collector-instance-1] return
<Collector3>
Verifying the configuration
1. On Collector 1:
# Display the collector configuration.
<Collector1> display inqa collector
Collector ID : 10.1.1.1
Loss-measure flag : 5
Analyzer ID : 10.2.1.1
Analyzer UDP-port : 53312
VPN-instance-name : --
Current instance count : 1
# Display the configuration of collector instance 1.
<Collector1> display inqa collector instance 1
Instance ID : 1
Status : Enabled
Duration : 15 min (Non mid-point)
Remaining time : 14 min 52 sec
description : --
Analyzer ID : --
Analyzer UDP-port : --
VPN-instance-name : --
Interval : 10 sec
Flow configuration:
flow forward source-ip 10.1.1.0 24 destination-ip 10.3.1.0 24
MP configuration:
mp 100 in-point inbound, GE1/0/1
2. On Collector 2 and the analyzer:
# Display the collector configuration.
<AnalyzerColl2> display inqa collector
Collector ID : 10.2.1.1
Loss-measure flag : 5
Analyzer ID : 10.2.1.1
Analyzer UDP-port : 53312
VPN-instance-name : --
Current instance count : 1
# Display the configuration of collector instance 1.
<AnalyzerColl2> display inqa collector instance 1
Instance ID : 1
Status : Enabled
Duration : 15 min (Mid-point)
Remaining time : 14 min 50 sec
description : --
Analyzer ID : --
Analyzer UDP-port : --
VPN-instance-name : --
Interval : 10 sec
Flow configuration:
flow forward source-ip 10.1.1.0 24 destination-ip 10.3.1.0 24
MP configuration:
mp 200 mid-point inbound, GE1/0/1
# Display analyzer configuration.
<AnalyzerColl2> display inqa analyzer
Analyzer ID : 10.2.1.1
Protocol UDP-port : 53312
Current instance count : 1
# Display the configuration of analyzer instance 1.
<AnalyzerColl2> display inqa analyzer instance 1
Instance ID : 1
Status : Enabled
Description : --
Alarm upper-limit : 6.000000%
Alarm lower-limit : 4.000000%
Current AMS count : 2
Collectors : 10.1.1.1
10.2.1.1
10.3.1.1
# Display the configuration of all AMSs in analyzer instance 1.
<AnalyzerColl2> display inqa analyzer instance 1 ams all
AMS ID : 1
Flow direction : forward
In-group : collector 10.1.1.1 mp 100
Out-group : collector 10.2.1.1 mp 200
AMS ID : 2
Flow direction : forward
In-group : collector 10.2.1.1 mp 200
Out-group : collector 10.3.1.1 mp 300
# Display iNQA packet loss statistics of analyzer instance 1.
<AnalyzerColl2> display inqa statistics loss instance 1 ams 1
Latest packet loss statistics for forward flow:
Period LostPkts PktLoss% LostBytes ByteLoss%
19122483 15 15.000000% 1500 15.000000%
19122482 15 15.000000% 1500 15.000000%
19122481 15 15.000000% 1500 15.000000%
19122480 15 15.000000% 1500 15.000000%
19122479 15 15.000000% 1500 15.000000%
19122478 15 15.000000% 1500 15.000000%
3. On Collector 3:
# Display the collector configuration.
<Collector3> display inqa collector
Collector ID : 10.3.1.1
Loss-measure flag : 5
Analyzer ID : 10.2.1.1
Analyzer UDP-port : 53312
VPN-instance-name : --
Current instance count : 1
# Display the configuration of collector instance 1.
<Collector3> display inqa collector instance 1
Instance ID : 1
Status : Enabled
Duration : 15 min (Non mid-point)
Remaining time : 14 min 51 sec
description : --
Analyzer ID : --
Analyzer UDP-port : --
VPN-instance-name : --
Interval : 10 sec
Flow configuration:
flow forward source-ip 10.1.1.0 24 destination-ip 10.3.1.0 24
MP configuration:
mp 300 out-point outbound, GE1/0/1
Configuring NTP
About NTP
NTP is used to synchronize system clocks among distributed time servers and clients on a network.
NTP runs over UDP and uses UDP port 123.
(The figure shows the clock synchronization message exchange between Device A and Device B across an IP network.)
1. Device A sends an NTP message to Device B. When the message leaves Device A, Device A adds a timestamp showing the time when the message left: 10:00:00 am (T1).
2. When this NTP message arrives at Device B, Device B adds a timestamp showing the time
when the message arrived at Device B. The timestamp is 11:00:01 am (T2).
3. When the NTP message leaves Device B, Device B adds a timestamp showing the time when
the message left Device B. The timestamp is 11:00:02 am (T3).
4. When Device A receives the NTP message, the local time of Device A is 10:00:03 am (T4).
Device A can now calculate the following parameters based on the timestamps:
• The roundtrip delay of the NTP message: Delay = (T4 – T1) – (T3 – T2) = 2 seconds.
• Time difference between Device A and Device B: Offset = [ (T2 – T1) + (T3 – T4) ] /2 = 1 hour.
Based on these parameters, Device A can be synchronized to Device B.
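The arithmetic in this example can be checked as follows; a simple sketch in which the wall-clock times are converted to seconds before applying the two formulas:

```python
def hms(hours, minutes, seconds):
    """Convert a wall-clock time to seconds since midnight."""
    return hours * 3600 + minutes * 60 + seconds

T1 = hms(10, 0, 0)   # NTP message leaves Device A
T2 = hms(11, 0, 1)   # message arrives at Device B
T3 = hms(11, 0, 2)   # message leaves Device B
T4 = hms(10, 0, 3)   # message arrives back at Device A

delay = (T4 - T1) - (T3 - T2)           # round-trip delay of the NTP message
offset = ((T2 - T1) + (T3 - T4)) / 2    # offset of Device B relative to Device A

print(delay)            # 2 (seconds)
print(offset / 3600)    # 1.0 (hour)
```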
This is only a rough description of the working mechanism of NTP. For more information, see the related protocols and standards.
NTP architecture
NTP uses stratums 1 to 16 to define clock accuracy, as shown in Figure 43. A lower stratum value
represents higher accuracy. Clocks at stratums 1 through 15 are in synchronized state, and clocks at
stratum 16 are not synchronized.
Figure 43 NTP architecture
(The figure shows an authoritative clock feeding primary servers at stratum 1, which synchronize secondary servers at stratum 2, then tertiary servers at stratum 3 and quaternary servers at stratum 4.)
A stratum 1 NTP server gets its time from an authoritative time source, such as an atomic clock. It
provides time for other devices as the primary NTP server. A stratum 2 time server receives its time
from a stratum 1 time server, and so on.
To ensure time accuracy and availability, you can specify multiple NTP servers for a device. The
device selects an optimal NTP server as the clock source based on parameters such as stratum. The
clock that the device selects is called the reference source. For more information about clock
selection, see the related protocols and standards.
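The selection described above can be sketched as follows. This is a deliberately simplified illustration that considers only the stratum, whereas real NTP clock selection also weighs parameters such as delay, dispersion, and jitter; the server list is hypothetical:

```python
# Candidate NTP servers as seen by the device; stratum 16 means unsynchronized.
servers = [
    {"addr": "192.0.2.1", "stratum": 3, "reachable": True},
    {"addr": "192.0.2.2", "stratum": 2, "reachable": True},
    {"addr": "192.0.2.3", "stratum": 1, "reachable": False},
    {"addr": "192.0.2.4", "stratum": 16, "reachable": True},
]

# Keep reachable, synchronized servers and pick the lowest stratum
# as the reference source.
candidates = [s for s in servers if s["reachable"] and s["stratum"] < 16]
reference = min(candidates, key=lambda s: s["stratum"])
print(reference["addr"])   # 192.0.2.2
```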
If the devices in a network cannot synchronize to an authoritative time source, you can perform the
following tasks:
• Select a device that has a relatively accurate clock from the network.
• Use the local clock of the device as the reference clock to synchronize other devices in the
network.
Broadcast mode:
• Working process—A server periodically sends clock synchronization messages to the broadcast address 255.255.255.255. Clients listen to the broadcast messages from the servers and synchronize to the server according to the broadcast messages. When a client receives the first broadcast message, the client and the server start to exchange messages to calculate the network delay between them. Then, only the broadcast server sends clock synchronization messages.
• Principle—A broadcast client can synchronize to a broadcast server, but a broadcast server cannot synchronize to a broadcast client.
• Application scenario—A broadcast server sends clock synchronization messages to synchronize clients in the same subnet. As Figure 43 shows, broadcast mode is intended for configurations involving one or a few servers and a potentially large client population. The broadcast mode has lower time accuracy than the client/server and symmetric active/passive modes because only the broadcast servers send clock synchronization messages.
Multicast mode:
• Working process—A multicast server periodically sends clock synchronization messages to the user-configured multicast address. Clients listen to the multicast messages from servers and synchronize to the server according to the received messages.
• Principle—A multicast client can synchronize to a multicast server, but a multicast server cannot synchronize to a multicast client.
• Application scenario—A multicast server can provide time synchronization for clients in the same subnet or in different subnets. The multicast mode has lower time accuracy than the client/server and symmetric active/passive modes.
NTP security
To improve time synchronization security, NTP provides the access control and authentication
functions.
NTP access control
You can control NTP access by using an ACL. The access rights are in the following order, from the
least restrictive to the most restrictive:
• Peer—Allows time requests and NTP control queries (such as alarms, authentication status,
and time server information) and allows the local device to synchronize itself to a peer device.
• Server—Allows time requests and NTP control queries, but does not allow the local device to
synchronize itself to a peer device.
• Synchronization—Allows only time requests from a system whose address passes the access
list criteria.
• Query—Allows only NTP control queries from a peer device to the local device.
When the device receives an NTP request, it matches the request against the access rights in order
from the least restrictive to the most restrictive: peer, server, synchronization, and query.
• If no NTP access control is configured, the peer access right applies.
• If the IP address of the peer device matches a permit statement in an ACL, the access right is
granted to the peer device. If a deny statement or no ACL is matched, no access right is
granted.
• If no ACL is specified for an access right or the ACL specified for the access right is not created,
the access right is not granted.
• If none of the ACLs specified for the access rights is created, the peer access right applies.
• If none of the ACLs specified for the access rights contains rules, no access right is granted.
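The matching rules above can be summarized in a short sketch. This is a simplified illustration that does not model every corner case listed above; the `acl_lookup` callback stands in for the device's ACL evaluation and is hypothetical, returning "permit", "deny", or None when the peer matches no rule or the ACL does not exist:

```python
# Access rights from the least restrictive to the most restrictive.
ORDER = ["peer", "server", "synchronization", "query"]

def grant_access(acls, peer_ip, acl_lookup):
    """Return the access right granted to a peer, or None."""
    if not acls:                          # no NTP access control configured
        return "peer"
    for right in ORDER:
        acl = acls.get(right)
        if acl is None:                   # no ACL specified for this right
            continue
        if acl_lookup(acl, peer_ip) == "permit":
            return right                  # first permit match wins
    return None                           # deny or no match: no access right

# Example: only the server right has an ACL that permits this peer.
lookup = lambda acl, ip: "permit" if acl == "2001" else "deny"
print(grant_access({"server": "2001"}, "192.0.2.9", lookup))   # server
print(grant_access({}, "192.0.2.9", lookup))                   # peer
```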
This feature provides minimal security for a system running NTP. A more secure method is NTP
authentication.
NTP authentication
Use this feature to authenticate the NTP messages for security purposes. If an NTP message
passes authentication, the device can receive it and get time synchronization information. If not, the
device discards the message. This function makes sure the device does not synchronize to an
unauthorized time server.
Figure 44 NTP authentication
(The figure shows the NTP authentication process: the sender computes a digest from the message and the key, appends the key ID and the digest to the message, and sends it to the receiver. The receiver uses the key ID to find the key, computes the digest over the received message, and compares it with the received digest.)
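The process in Figure 44 can be sketched as follows, using HMAC-SHA-256 (one of the supported algorithms). The helper names and the key table are illustrative, not an actual device API:

```python
import hashlib
import hmac

def sign(key_id, key, message):
    """Sender side: compute a digest and attach the key ID."""
    digest = hmac.new(key, message, hashlib.sha256).digest()
    return key_id, digest

def verify(trusted_keys, key_id, message, digest):
    """Receiver side: recompute the digest with the key found by ID."""
    key = trusted_keys.get(key_id)
    if key is None:                       # unknown or untrusted key ID
        return False
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, digest)   # message is discarded if False

trusted = {42: b"shared-secret"}
kid, d = sign(42, trusted[42], b"ntp message")
print(verify(trusted, kid, b"ntp message", d))    # True
print(verify(trusted, kid, b"tampered", d))       # False
```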
• You can configure NTP only on the following types of interfaces:
Layer 3 Ethernet interfaces.
Layer 3 Ethernet subinterfaces.
Layer 3 aggregate interfaces.
Layer 3 aggregate subinterfaces.
VLAN interfaces.
Tunnel interfaces.
• Do not configure NTP on an aggregate member port.
• The NTP service and SNTP service are mutually exclusive. You can enable only one of them at a time.
• To avoid frequent time changes or even synchronization failures, do not specify more than one
reference source on a network.
• To use NTP for time synchronization, you must use the clock protocol command to specify
NTP for obtaining the time. For more information about the clock protocol command, see
device management commands in Fundamentals Command Reference.
Enabling the NTP service
1. Enter system view.
system-view
2. Enable the NTP service.
ntp-service enable
By default, the NTP service is disabled.
Procedure
1. Enter system view.
system-view
2. Specify a symmetric passive peer for the device.
IPv4:
ntp-service unicast-peer { peer-name | ip-address } [ vpn-instance
vpn-instance-name ] [ authentication-keyid keyid | maxpoll
maxpoll-interval | minpoll minpoll-interval | priority | source
interface-type interface-number | version number ] *
IPv6:
ntp-service ipv6 unicast-peer { peer-name | ipv6-address }
[ vpn-instance vpn-instance-name ] [ authentication-keyid keyid |
maxpoll maxpoll-interval | minpoll minpoll-interval | priority |
source interface-type interface-number ] *
By default, no symmetric passive peer is specified.
Configuring NTP in multicast mode
Restrictions and guidelines
To configure NTP in multicast mode, you must configure an NTP multicast client and an NTP
multicast server.
For a multicast client to synchronize to a multicast server, make sure the multicast server is
synchronized by other devices or uses its local clock as the reference source.
Configuring a multicast client
1. Enter system view.
system-view
2. Enter interface view.
interface interface-type interface-number
3. Configure the device to operate in multicast client mode.
IPv4:
ntp-service multicast-client [ ip-address ]
IPv6:
ntp-service ipv6 multicast-client ipv6-address
By default, the device does not operate in any NTP association mode.
After you execute the command, the device receives NTP multicast messages from the
specified interface.
Configuring the multicast server
1. Enter system view.
system-view
2. Enter interface view.
interface interface-type interface-number
3. Configure the device to operate in multicast server mode.
IPv4:
ntp-service multicast-server [ ip-address ] [ authentication-keyid
keyid | ttl ttl-number | version number ] *
IPv6:
ntp-service ipv6 multicast-server ipv6-address [ authentication-keyid
keyid | ttl ttl-number ] *
By default, the device does not operate in any NTP association mode.
After you execute the command, the device sends NTP multicast messages from the specified
interface.
Restrictions and guidelines
Make sure the local clock can provide the time accuracy required for the network. After you configure
the local clock as the reference source, the local clock is synchronized, and can operate as a time
server to synchronize other devices in the network. If the local clock is incorrect, timing errors occur.
Devices differ in clock precision. As a best practice to avoid network flapping and clock
synchronization failure, configure only one reference clock on the same network segment and make
sure the clock has high precision.
Prerequisites
Before you configure this feature, adjust the local system time to ensure that it is accurate.
Procedure
1. Enter system view.
system-view
2. Configure the local clock as the reference source.
ntp-service refclock-master [ ip-address ] [ stratum ]
By default, the device does not use the local clock as the reference source.
Table 3 NTP authentication results
Client                                             Server
Enable NTP       Specify the       Trusted         Enable NTP       Trusted
authentication   server and key    key             authentication   key
Successful authentication
Yes              Yes               Yes             Yes              Yes
Failed authentication
Yes              Yes               Yes             Yes              No
Yes              Yes               Yes             No               N/A
Yes              Yes               No              N/A              N/A
ntp-service authentication-keyid keyid authentication-mode
{ hmac-sha-1 | hmac-sha-256 | hmac-sha-384 | hmac-sha-512 | md5 }
{ cipher | simple } string [ acl ipv4-acl-number | ipv6 acl
ipv6-acl-number ] *
By default, no NTP authentication key exists.
4. Configure the key as a trusted key.
ntp-service reliable authentication-keyid keyid
By default, no authentication key is configured as a trusted key.
Table 4 NTP authentication results

                 Active peer                                                                  Passive peer
                 Enable NTP       Specify the      Trusted   Stratum level of                 Enable NTP       Trusted
                 authentication   peer and key     key       the active peer                  authentication   key
Failed authentication
                 Yes              Yes              Yes       N/A                              Yes              No
                 Yes              Yes              Yes       N/A                              No               N/A
                 Yes              No               N/A       N/A                              Yes              N/A
                 No               N/A              N/A       N/A                              Yes              N/A
                 Yes              Yes              No        Larger than the passive peer     N/A              N/A
                 Yes              Yes              No        Smaller than the passive peer    Yes              N/A
ntp-service authentication enable
By default, NTP authentication is disabled.
3. Configure an NTP authentication key.
ntp-service authentication-keyid keyid authentication-mode
{ hmac-sha-1 | hmac-sha-256 | hmac-sha-384 | hmac-sha-512 | md5 }
{ cipher | simple } string [ acl ipv4-acl-number | ipv6 acl
ipv6-acl-number ] *
By default, no NTP authentication key exists.
4. Configure the key as a trusted key.
ntp-service reliable authentication-keyid keyid
By default, no authentication key is configured as a trusted key.
5. Associate the specified key with a passive peer.
IPv4:
ntp-service unicast-peer { ip-address | peer-name } [ vpn-instance
vpn-instance-name ] authentication-keyid keyid
IPv6:
ntp-service ipv6 unicast-peer { ipv6-address | peer-name }
[ vpn-instance vpn-instance-name ] authentication-keyid keyid
Configuring NTP authentication for a passive peer
1. Enter system view.
system-view
2. Enable NTP authentication.
ntp-service authentication enable
By default, NTP authentication is disabled.
3. Configure an NTP authentication key.
ntp-service authentication-keyid keyid authentication-mode
{ hmac-sha-1 | hmac-sha-256 | hmac-sha-384 | hmac-sha-512 | md5 }
{ cipher | simple } string [ acl ipv4-acl-number | ipv6 acl
ipv6-acl-number ] *
By default, no NTP authentication key exists.
4. Configure the key as a trusted key.
ntp-service reliable authentication-keyid keyid
By default, no authentication key is configured as a trusted key.
Table 5 NTP authentication results

                 Broadcast server                                          Broadcast client
                 Enable NTP       Associate the key    Trusted             Enable NTP       Trusted
                 authentication   with the server      key                 authentication   key
Failed authentication
                 Yes              Yes                  Yes                 Yes              No
                 Yes              Yes                  Yes                 No               N/A
                 Yes              Yes                  No                  Yes              N/A
                 Yes              No                   N/A                 Yes              N/A
                 No               N/A                  N/A                 Yes              N/A
By default, no NTP authentication key exists.
4. Configure the key as a trusted key.
ntp-service reliable authentication-keyid keyid
By default, no authentication key is configured as a trusted key.
5. Enter interface view.
interface interface-type interface-number
6. Associate the specified key with the broadcast server.
ntp-service broadcast-server authentication-keyid keyid
By default, the broadcast server is not associated with a key.
                 Broadcast server                                          Broadcast client
                 Enable NTP       Associate the key    Trusted             Enable NTP       Trusted
                 authentication   with the server      key                 authentication   key
Failed authentication
                 Yes              Yes                  Yes                 Yes              No
                 Yes              Yes                  Yes                 No               N/A
                 Yes              Yes                  No                  Yes              N/A
                 Yes              No                   N/A                 Yes              N/A
                 No               N/A                  N/A                 Yes              N/A
ntp-service authentication-keyid keyid authentication-mode
{ hmac-sha-1 | hmac-sha-256 | hmac-sha-384 | hmac-sha-512 | md5 }
{ cipher | simple } string [ acl ipv4-acl-number | ipv6 acl
ipv6-acl-number ] *
By default, no NTP authentication key exists.
4. Configure the key as a trusted key.
ntp-service reliable authentication-keyid keyid
By default, no authentication key is configured as a trusted key.
Configuring NTP authentication for a multicast server
1. Enter system view.
system-view
2. Enable NTP authentication.
ntp-service authentication enable
By default, NTP authentication is disabled.
3. Configure an NTP authentication key.
ntp-service authentication-keyid keyid authentication-mode
{ hmac-sha-1 | hmac-sha-256 | hmac-sha-384 | hmac-sha-512 | md5 }
{ cipher | simple } string [ acl ipv4-acl-number | ipv6 acl
ipv6-acl-number ] *
By default, no NTP authentication key exists.
4. Configure the key as a trusted key.
ntp-service reliable authentication-keyid keyid
By default, no authentication key is configured as a trusted key.
5. Enter interface view.
interface interface-type interface-number
6. Associate the specified key with a multicast server.
IPv4:
ntp-service multicast-server [ ip-address ] authentication-keyid keyid
IPv6:
ntp-service ipv6 multicast-server ipv6-multicast-address
authentication-keyid keyid
By default, no multicast server is associated with the specified key.
When the device responds to an NTP request, the source IP address of the NTP response is always
the IP address of the interface that has received the NTP request.
If you have specified the source interface for NTP messages in the ntp-service
unicast-server/ntp-service ipv6 unicast-server or ntp-service unicast-peer/ntp-service ipv6
unicast-peer command, the IP address of the specified interface is used as the source IP address
for NTP messages.
If you have configured the ntp-service broadcast-server or ntp-service
multicast-server/ntp-service ipv6 multicast-server command in an interface view, the IP address
of the interface is used as the source IP address for broadcast or multicast NTP messages.
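The three source-address rules above apply in order of precedence. A simplified Python sketch of the selection logic (the function name and addresses are illustrative, not part of the device CLI):

```python
def ntp_source_address(per_command_source=None, interface_mode_address=None,
                       receiving_interface_address=None):
    """Pick the source IP for an NTP message, highest precedence first.

    1. Source interface given in ntp-service unicast-server/unicast-peer.
    2. Interface where broadcast/multicast server mode is configured.
    3. For responses, the interface that received the request.
    """
    if per_command_source is not None:
        return per_command_source
    if interface_mode_address is not None:
        return interface_mode_address
    return receiving_interface_address

# Illustrative addresses only.
assert ntp_source_address("10.1.1.1", "10.2.2.2", "10.3.3.3") == "10.1.1.1"
assert ntp_source_address(None, "10.2.2.2", "10.3.3.3") == "10.2.2.2"
assert ntp_source_address(None, None, "10.3.3.3") == "10.3.3.3"
```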
Procedure
1. Enter system view.
system-view
2. Specify the source address for NTP messages.
IPv4:
ntp-service source { interface-type interface-number | ipv4-address }
IPv6:
ntp-service ipv6 source interface-type interface-number
By default, no source address is specified for NTP messages.
The following describes how an association is established in different association modes:
• Client/server mode—After you specify an NTP server, the system creates a static association
on the client. The server simply responds passively upon the receipt of a message, rather than
creating an association (static or dynamic).
• Symmetric active/passive mode—After you specify a symmetric passive peer on a
symmetric active peer, static associations are created on the symmetric active peer, and
dynamic associations are created on the symmetric passive peer.
• Broadcast or multicast mode—Static associations are created on the server, and dynamic
associations are created on the client.
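These mode-specific rules can be summarized in a small lookup structure (a Python sketch; the mode names mirror the text):

```python
# Which side of each NTP association mode creates which kind of association.
ASSOCIATIONS = {
    "client/server":            {"static": "client",      "dynamic": None},
    "symmetric active/passive": {"static": "active peer", "dynamic": "passive peer"},
    "broadcast/multicast":      {"static": "server",      "dynamic": "client"},
}

assert ASSOCIATIONS["client/server"]["dynamic"] is None   # the server keeps none
assert ASSOCIATIONS["broadcast/multicast"]["static"] == "server"
```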
Restrictions and guidelines
A single device can have a maximum of 128 concurrent associations, including static associations
and dynamic associations.
Procedure
1. Enter system view.
system-view
2. Configure the maximum number of dynamic sessions.
ntp-service max-dynamic-sessions number
By default, the maximum number of dynamic sessions is 100.
1. Enter system view.
system-view
2. Specify the NTP time-offset thresholds for log and trap outputs.
ntp-service time-offset-threshold { log log-threshold | trap
trap-threshold } *
By default, no NTP time-offset thresholds are set for log and trap outputs.
Display and maintenance commands for NTP

Task                                                      Command
Display information about IPv6 NTP associations.          display ntp-service ipv6 sessions [ verbose ]
Display information about IPv4 NTP associations.          display ntp-service sessions [ verbose ]
Display information about NTP service status.             display ntp-service status
Display brief information about the NTP servers from the local device back to the primary NTP server.   display ntp-service trace [ source interface-type interface-number ]
Figure 45 Network diagram: Device A (NTP server, 1.0.1.11) and Device B (NTP client)
Procedure
1. Assign an IP address to each interface, and make sure Device A and Device B can reach each
other, as shown in Figure 45. (Details not shown.)
2. Configure Device A:
# Enable the NTP service.
<DeviceA> system-view
[DeviceA] ntp-service enable
# Specify the local clock as the reference source, with stratum level 2.
[DeviceA] ntp-service refclock-master 2
3. Configure Device B:
# Enable the NTP service.
<DeviceB> system-view
[DeviceB] ntp-service enable
# Specify NTP for obtaining the time.
[DeviceB] clock protocol ntp
# Specify Device A as the NTP server of Device B.
[DeviceB] ntp-service unicast-server 1.0.1.11
Figure 46 Network diagram: Device A (3000::34/64) and Device B (3000::39/64)
Procedure
1. Assign an IP address to each interface, and make sure Device A and Device B can reach each
other, as shown in Figure 46. (Details not shown.)
2. Configure Device A:
# Enable the NTP service.
<DeviceA> system-view
[DeviceA] ntp-service enable
# Specify the local clock as the reference source, with stratum level 2.
[DeviceA] ntp-service refclock-master 2
3. Configure Device B:
# Enable the NTP service.
<DeviceB> system-view
[DeviceB] ntp-service enable
# Specify NTP for obtaining the time.
[DeviceB] clock protocol ntp
# Specify Device A as the IPv6 NTP server of Device B.
[DeviceB] ntp-service ipv6 unicast-server 3000::34
Source: [12345]3000::34
Reference: 127.127.1.0 Clock stratum: 2
Reachabilities: 15 Poll interval: 64
Last receive time: 19 Offset: 0.0
Roundtrip delay: 0.0 Dispersion: 0.0
Total sessions: 1
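The Offset and Roundtrip delay fields in such output follow the standard NTP on-wire calculation (RFC 5905) over four timestamps: client transmit (t1), server receive (t2), server transmit (t3), and client receive (t4). A Python sketch with illustrative values:

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Clock offset and roundtrip delay from the four NTP timestamps (seconds)."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Client clock 5 s behind the server, 30 ms one-way delay, 10 ms server processing.
offset, delay = ntp_offset_delay(t1=100.00, t2=105.03, t3=105.04, t4=100.07)
assert abs(offset - 5.0) < 1e-6   # estimated clock offset
assert abs(delay - 0.06) < 1e-6   # roundtrip delay minus server processing time
```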
Example: Configuring NTP symmetric active/passive
association mode
Network configuration
As shown in Figure 47, perform the following tasks:
• Configure Device A's local clock as its reference source, with stratum level 2.
• Configure Device A to operate in symmetric active mode and specify Device B as the passive
peer of Device A.
Figure 47 Network diagram: Device A (symmetric active peer, 3.0.1.31/24) and Device B (symmetric passive peer, 3.0.1.32/24)
Procedure
1. Assign an IP address to each interface, and make sure Device A and Device B can reach each
other, as shown in Figure 47. (Details not shown.)
2. Configure Device B:
# Enable the NTP service.
<DeviceB> system-view
[DeviceB] ntp-service enable
# Specify NTP for obtaining the time.
[DeviceB] clock protocol ntp
3. Configure Device A:
# Enable the NTP service.
<DeviceA> system-view
[DeviceA] ntp-service enable
# Specify NTP for obtaining the time.
[DeviceA] clock protocol ntp
# Specify the local clock as the reference source, with stratum level 2.
[DeviceA] ntp-service refclock-master 2
# Configure Device B as its symmetric passive peer.
[DeviceA] ntp-service unicast-peer 3.0.1.32
Clock precision: 2^-19
Root delay: 0.00609 ms
Root dispersion: 1.95859 ms
Reference time: 83aec681.deb6d3e5 Wed, Jan 8 2019 14:33:11.081
System poll interval: 64 s
# Verify that an IPv4 NTP association has been established between Device B and Device A.
[DeviceB] display ntp-service sessions
source reference stra reach poll now offset delay disper
********************************************************************************
[12]3.0.1.31 127.127.1.0 2 62 64 34 0.4251 6.0882 1392.1
Notes: 1 source(master), 2 source(peer), 3 selected, 4 candidate, 5 configured.
Total sessions: 1
Figure 48 Network diagram: Device A (3000::35/64) and Device B (3000::36/64)
Procedure
1. Assign an IP address to each interface, and make sure Device A and Device B can reach each
other, as shown in Figure 48. (Details not shown.)
2. Configure Device B:
# Enable the NTP service.
<DeviceB> system-view
[DeviceB] ntp-service enable
# Specify NTP for obtaining the time.
[DeviceB] clock protocol ntp
3. Configure Device A:
# Enable the NTP service.
<DeviceA> system-view
[DeviceA] ntp-service enable
# Specify NTP for obtaining the time.
[DeviceA] clock protocol ntp
# Specify the local clock as the reference source, with stratum level 2.
[DeviceA] ntp-service refclock-master 2
# Configure Device B as the IPv6 symmetric passive peer.
[DeviceA] ntp-service ipv6 unicast-peer 3000::36
Source: [1234]3000::35
Reference: 127.127.1.0 Clock stratum: 2
Reachabilities: 15 Poll interval: 64
Last receive time: 19 Offset: 0.0
Roundtrip delay: 0.0 Dispersion: 0.0
Total sessions: 1
Figure 49 Network diagram: Switch C (NTP broadcast server, VLAN-interface 2, 3.0.1.31/24), Switch A (NTP broadcast client, VLAN-interface 2, 3.0.1.30/24), and Switch B (NTP broadcast client, VLAN-interface 2, 3.0.1.32/24)
Procedure
1. Assign an IP address to each interface, and make sure Switch A, Switch B, and Switch C can
reach each other, as shown in Figure 49. (Details not shown.)
2. Configure Switch C:
# Enable the NTP service.
<SwitchC> system-view
[SwitchC] ntp-service enable
# Specify NTP for obtaining the time.
[SwitchC] clock protocol ntp
# Specify the local clock as the reference source, with stratum level 2.
[SwitchC] ntp-service refclock-master 2
# Configure Switch C to operate in broadcast server mode and send broadcast messages from
VLAN-interface 2.
[SwitchC] interface vlan-interface 2
[SwitchC-Vlan-interface2] ntp-service broadcast-server
3. Configure Switch A:
# Enable the NTP service.
<SwitchA> system-view
[SwitchA] ntp-service enable
# Specify NTP for obtaining the time.
[SwitchA] clock protocol ntp
# Configure Switch A to operate in broadcast client mode and receive broadcast messages on
VLAN-interface 2.
[SwitchA] interface vlan-interface 2
[SwitchA-Vlan-interface2] ntp-service broadcast-client
4. Configure Switch B:
# Enable the NTP service.
<SwitchB> system-view
[SwitchB] ntp-service enable
# Specify NTP for obtaining the time.
[SwitchB] clock protocol ntp
# Configure Switch B to operate in broadcast client mode and receive broadcast messages on
VLAN-interface 2.
[SwitchB] interface vlan-interface 2
[SwitchB-Vlan-interface2] ntp-service broadcast-client
# Verify that an IPv4 NTP association has been established between Switch A and Switch C.
[SwitchA-Vlan-interface2] display ntp-service sessions
source reference stra reach poll now offset delay disper
********************************************************************************
[1245]3.0.1.31 127.127.1.0 2 1 64 519 -0.0 0.0022 4.1257
Notes: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured.
Total sessions: 1
Figure 50 Network diagram: Switch C (NTP multicast server, VLAN-interface 2, 3.0.1.31/24), Switch D (NTP multicast client, VLAN-interface 2, 3.0.1.32/24), and Switch A (NTP multicast client), which reaches Switch C through Switch B
Procedure
1. Assign an IP address to each interface, and make sure the switches can reach each other, as
shown in Figure 50. (Details not shown.)
2. Configure Switch C:
# Enable the NTP service.
<SwitchC> system-view
[SwitchC] ntp-service enable
# Specify NTP for obtaining the time.
[SwitchC] clock protocol ntp
# Specify the local clock as the reference source, with stratum level 2.
[SwitchC] ntp-service refclock-master 2
# Configure Switch C to operate in multicast server mode and send multicast messages from
VLAN-interface 2.
[SwitchC] interface vlan-interface 2
[SwitchC-Vlan-interface2] ntp-service multicast-server
3. Configure Switch D:
# Enable the NTP service.
<SwitchD> system-view
[SwitchD] ntp-service enable
# Specify NTP for obtaining the time.
[SwitchD] clock protocol ntp
# Configure Switch D to operate in multicast client mode and receive multicast messages on
VLAN-interface 2.
[SwitchD] interface vlan-interface 2
[SwitchD-Vlan-interface2] ntp-service multicast-client
4. Verify the configuration:
# Verify that Switch D has synchronized to Switch C, and the clock stratum level is 3 on Switch
D and 2 on Switch C.
Switch D and Switch C are on the same subnet, so Switch D can receive the multicast
messages from Switch C without having multicast functions enabled.
[SwitchD-Vlan-interface2] display ntp-service status
Clock status: synchronized
Clock stratum: 3
System peer: 3.0.1.31
Local mode: bclient
Reference clock ID: 3.0.1.31
Leap indicator: 00
Clock jitter: 0.044281 s
Stability: 0.000 pps
Clock precision: 2^-19
Root delay: 0.00229 ms
Root dispersion: 4.12572 ms
Reference time: d0d289fe.ec43c720 Sat, Jan 8 2019 7:00:14.922
System poll interval: 64 s
# Verify that an IPv4 NTP association has been established between Switch D and Switch C.
[SwitchD-Vlan-interface2] display ntp-service sessions
source reference stra reach poll now offset delay disper
********************************************************************************
[1245]3.0.1.31 127.127.1.0 2 1 64 519 -0.0 0.0022 4.1257
Notes: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured.
Total sessions: 1
5. Configure Switch B:
Because Switch A and Switch C are on different subnets, you must enable the multicast
functions on Switch B before Switch A can receive multicast messages from Switch C.
# Enable IP multicast functions.
<SwitchB> system-view
[SwitchB] multicast routing
[SwitchB-mrib] quit
[SwitchB] interface vlan-interface 2
[SwitchB-Vlan-interface2] pim dm
[SwitchB-Vlan-interface2] quit
[SwitchB] vlan 3
[SwitchB-vlan3] port gigabitethernet 1/0/1
[SwitchB-vlan3] quit
[SwitchB] interface vlan-interface 3
[SwitchB-Vlan-interface3] igmp enable
[SwitchB-Vlan-interface3] igmp static-group 224.0.1.1
[SwitchB-Vlan-interface3] quit
[SwitchB] igmp-snooping
[SwitchB-igmp-snooping] quit
[SwitchB] interface gigabitethernet 1/0/1
[SwitchB-GigabitEthernet1/0/1] igmp-snooping static-group 224.0.1.1 vlan 3
6. Configure Switch A:
# Enable the NTP service.
<SwitchA> system-view
[SwitchA] ntp-service enable
# Specify NTP for obtaining the time.
[SwitchA] clock protocol ntp
# Configure Switch A to operate in multicast client mode and receive multicast messages on
VLAN-interface 3.
[SwitchA] interface vlan-interface 3
[SwitchA-Vlan-interface3] ntp-service multicast-client
Figure 51 Network diagram: Switch C (IPv6 NTP multicast server, VLAN-interface 2, 3000::2/64), Switch D (IPv6 NTP multicast client, VLAN-interface 2, 3000::3/64), and Switch A (IPv6 NTP multicast client), which reaches Switch C through Switch B
Procedure
1. Assign an IP address to each interface, and make sure the switches can reach each other, as
shown in Figure 51. (Details not shown.)
2. Configure Switch C:
# Enable the NTP service.
<SwitchC> system-view
[SwitchC] ntp-service enable
# Specify NTP for obtaining the time.
[SwitchC] clock protocol ntp
# Specify the local clock as the reference source, with stratum level 2.
[SwitchC] ntp-service refclock-master 2
# Configure Switch C to operate in IPv6 multicast server mode and send multicast messages
from VLAN-interface 2.
[SwitchC] interface vlan-interface 2
[SwitchC-Vlan-interface2] ntp-service ipv6 multicast-server ff24::1
3. Configure Switch D:
# Enable the NTP service.
<SwitchD> system-view
[SwitchD] ntp-service enable
# Specify NTP for obtaining the time.
[SwitchD] clock protocol ntp
# Configure Switch D to operate in IPv6 multicast client mode and receive multicast messages
on VLAN-interface 2.
[SwitchD] interface vlan-interface 2
[SwitchD-Vlan-interface2] ntp-service ipv6 multicast-client ff24::1
4. Verify the configuration:
# Verify that Switch D has synchronized its time with Switch C, and the clock stratum level of
Switch D is 3.
Switch D and Switch C are on the same subnet, so Switch D can receive the IPv6 multicast
messages from Switch C without having IPv6 multicast functions enabled.
[SwitchD-Vlan-interface2] display ntp-service status
Clock status: synchronized
Clock stratum: 3
System peer: 3000::2
Local mode: bclient
Reference clock ID: 165.84.121.65
Leap indicator: 00
Clock jitter: 0.000977 s
Stability: 0.000 pps
Clock precision: 2^-19
Root delay: 0.00000 ms
Root dispersion: 8.00578 ms
Reference time: d0c60680.9754fb17 Wed, Dec 29 2019 19:12:00.591
System poll interval: 64 s
# Verify that an IPv6 NTP association has been established between Switch D and Switch C.
[SwitchD-Vlan-interface2] display ntp-service ipv6 sessions
Notes: 1 source(master), 2 source(peer), 3 selected, 4 candidate, 5 configured.
Source: [1234]3000::2
Reference: 127.127.1.0 Clock stratum: 2
Reachabilities: 111 Poll interval: 64
Last receive time: 23 Offset: -0.0
Roundtrip delay: 0.0 Dispersion: 0.0
Total sessions: 1
5. Configure Switch B:
Because Switch A and Switch C are on different subnets, you must enable the IPv6 multicast
functions on Switch B before Switch A can receive IPv6 multicast messages from Switch C.
# Enable IPv6 multicast functions.
<SwitchB> system-view
[SwitchB] ipv6 multicast routing
[SwitchB-mrib6] quit
[SwitchB] interface vlan-interface 2
[SwitchB-Vlan-interface2] ipv6 pim dm
[SwitchB-Vlan-interface2] quit
[SwitchB] vlan 3
[SwitchB-vlan3] port gigabitethernet 1/0/1
[SwitchB-vlan3] quit
[SwitchB] interface vlan-interface 3
[SwitchB-Vlan-interface3] mld enable
[SwitchB-Vlan-interface3] mld static-group ff24::1
[SwitchB-Vlan-interface3] quit
[SwitchB] mld-snooping
[SwitchB-mld-snooping] quit
[SwitchB] interface gigabitethernet 1/0/1
[SwitchB-GigabitEthernet1/0/1] mld-snooping static-group ff24::1 vlan 3
6. Configure Switch A:
# Enable the NTP service.
<SwitchA> system-view
[SwitchA] ntp-service enable
# Specify NTP for obtaining the time.
[SwitchA] clock protocol ntp
# Configure Switch A to operate in IPv6 multicast client mode and receive IPv6 multicast
messages on VLAN-interface 3.
[SwitchA] interface vlan-interface 3
[SwitchA-Vlan-interface3] ntp-service ipv6 multicast-client ff24::1
# Verify that an IPv6 NTP association has been established between Switch A and Switch C.
[SwitchA-Vlan-interface3] display ntp-service ipv6 sessions
Notes: 1 source(master), 2 source(peer), 3 selected, 4 candidate, 5 configured.
Source: [124]3000::2
Reference: 127.127.1.0 Clock stratum: 2
Reachabilities: 2 Poll interval: 64
Last receive time: 71 Offset: -0.0
Roundtrip delay: 0.0 Dispersion: 0.0
Total sessions: 1
Figure 52 Network diagram: Device A (NTP server, 1.0.1.11/24) and Device B (NTP client, 1.0.1.12/24)
Procedure
1. Assign an IP address to each interface, and make sure Device A and Device B can reach each
other, as shown in Figure 52. (Details not shown.)
2. Configure Device A:
# Enable the NTP service.
<DeviceA> system-view
[DeviceA] ntp-service enable
# Specify the local clock as the reference source, with stratum level 2.
[DeviceA] ntp-service refclock-master 2
3. Configure Device B:
# Enable the NTP service.
<DeviceB> system-view
[DeviceB] ntp-service enable
# Specify NTP for obtaining the time.
[DeviceB] clock protocol ntp
# Enable NTP authentication on Device B.
[DeviceB] ntp-service authentication enable
# Create a plaintext authentication key, with key ID 42 and key value aNiceKey.
[DeviceB] ntp-service authentication-keyid 42 authentication-mode md5 simple
aNiceKey
# Specify the key as a trusted key.
[DeviceB] ntp-service reliable authentication-keyid 42
# Specify Device A as the NTP server of Device B, and associate the server with key 42.
[DeviceB] ntp-service unicast-server 1.0.1.11 authentication-keyid 42
To enable Device B to synchronize its clock with Device A, enable NTP authentication on
Device A.
4. Configure NTP authentication on Device A:
# Enable NTP authentication.
[DeviceA] ntp-service authentication enable
# Create a plaintext authentication key, with key ID 42 and key value aNiceKey.
[DeviceA] ntp-service authentication-keyid 42 authentication-mode md5 simple
aNiceKey
# Specify the key as a trusted key.
[DeviceA] ntp-service reliable authentication-keyid 42
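On the wire, md5-mode NTP authentication appends the key ID and a digest computed over the shared key concatenated with the NTP header (RFC 5905 style). A Python sketch of the digest both devices would compute (the key value matches the example; the packet bytes are placeholders, not a real NTP header):

```python
import hashlib

def ntp_md5_mac(key: bytes, packet: bytes) -> bytes:
    """MD5 digest over key || NTP header, as used by md5-mode authentication."""
    return hashlib.md5(key + packet).digest()

key = b"aNiceKey"        # key value from the example (key ID 42)
packet = bytes(48)       # placeholder 48-byte NTP header, for illustration only
mac = ntp_md5_mac(key, packet)

# Both peers compute the digest the same way, so only a matching key verifies.
assert mac == ntp_md5_mac(key, packet)
assert ntp_md5_mac(b"wrongKey", packet) != mac
assert len(mac) == 16    # MD5 produces a 16-byte digest
```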
System peer: 1.0.1.11
Local mode: client
Reference clock ID: 1.0.1.11
Leap indicator: 00
Clock jitter: 0.005096 s
Stability: 0.000 pps
Clock precision: 2^-19
Root delay: 0.00655 ms
Root dispersion: 1.15869 ms
Reference time: d0c62687.ab1bba7d Wed, Dec 29 2019 21:28:39.668
System poll interval: 64 s
# Verify that an IPv4 NTP association has been established between Device B and Device A.
[DeviceB] display ntp-service sessions
source reference stra reach poll now offset delay disper
********************************************************************************
[1245]1.0.1.11 127.127.1.0 2 1 64 519 -0.0 0.0065 0.0
Notes: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured.
Total sessions: 1
Figure 53 Network diagram: Switch C (NTP broadcast server), Switch A (NTP broadcast client, VLAN-interface 2, 3.0.1.30/24), and Switch B (NTP broadcast client, VLAN-interface 2, 3.0.1.32/24)
Procedure
1. Assign an IP address to each interface, and make sure Switch A, Switch B, and Switch C can
reach each other, as shown in Figure 53. (Details not shown.)
2. Configure Switch A:
# Enable the NTP service.
<SwitchA> system-view
[SwitchA] ntp-service enable
# Specify NTP for obtaining the time.
[SwitchA] clock protocol ntp
# Enable NTP authentication on Switch A. Create a plaintext NTP authentication key, with key
ID of 88 and key value of 123456. Specify it as a trusted key.
[SwitchA] ntp-service authentication enable
[SwitchA] ntp-service authentication-keyid 88 authentication-mode md5 simple 123456
[SwitchA] ntp-service reliable authentication-keyid 88
# Configure Switch A to operate in NTP broadcast client mode and receive NTP broadcast
messages on VLAN-interface 2.
[SwitchA] interface vlan-interface 2
[SwitchA-Vlan-interface2] ntp-service broadcast-client
3. Configure Switch B:
# Enable the NTP service.
<SwitchB> system-view
[SwitchB] ntp-service enable
# Specify NTP for obtaining the time.
[SwitchB] clock protocol ntp
# Enable NTP authentication on Switch B. Create a plaintext NTP authentication key, with key
ID of 88 and key value of 123456. Specify it as a trusted key.
[SwitchB] ntp-service authentication enable
[SwitchB] ntp-service authentication-keyid 88 authentication-mode md5 simple 123456
[SwitchB] ntp-service reliable authentication-keyid 88
# Configure Switch B to operate in broadcast client mode and receive NTP broadcast
messages on VLAN-interface 2.
[SwitchB] interface vlan-interface 2
[SwitchB-Vlan-interface2] ntp-service broadcast-client
4. Configure Switch C:
# Enable the NTP service.
<SwitchC> system-view
[SwitchC] ntp-service enable
# Specify NTP for obtaining the time.
[SwitchC] clock protocol ntp
# Specify the local clock as the reference source, with stratum level 3.
[SwitchC] ntp-service refclock-master 3
# Configure Switch C to operate in NTP broadcast server mode and use VLAN-interface 2 to
send NTP broadcast packets.
[SwitchC] interface vlan-interface 2
[SwitchC-Vlan-interface2] ntp-service broadcast-server
[SwitchC-Vlan-interface2] quit
5. Verify the configuration:
NTP authentication is enabled on Switch A and Switch B, but not on Switch C, so Switch A and
Switch B cannot synchronize their local clocks to Switch C.
[SwitchB-Vlan-interface2] display ntp-service status
Clock status: unsynchronized
Clock stratum: 16
Reference clock ID: none
6. Enable NTP authentication on Switch C:
# Enable NTP authentication on Switch C. Create a plaintext NTP authentication key, with key
ID of 88 and key value of 123456. Specify it as a trusted key.
[SwitchC] ntp-service authentication enable
[SwitchC] ntp-service authentication-keyid 88 authentication-mode md5 simple 123456
[SwitchC] ntp-service reliable authentication-keyid 88
# Specify Switch C as an NTP broadcast server, and associate key 88 with Switch C.
[SwitchC] interface vlan-interface 2
[SwitchC-Vlan-interface2] ntp-service broadcast-server authentication-keyid 88
Configuring SNTP
About SNTP
SNTP is a simplified, client-only version of NTP specified in RFC 4330. It uses the same packet
format and packet exchange procedure as NTP, but provides faster synchronization at the price of
time accuracy.
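Because SNTP reuses the NTP packet format, its timestamps are the same 64-bit fixed-point values: 32 bits of seconds since the NTP epoch (January 1, 1900) and 32 bits of fraction. A Python sketch of the conversion to Unix time (the constant is the 70-year difference between the two epochs):

```python
NTP_UNIX_DELTA = 2208988800  # seconds from 1900-01-01 to 1970-01-01

def ntp_to_unix(ntp_timestamp):
    """Convert a 64-bit NTP timestamp to Unix seconds (float)."""
    seconds = ntp_timestamp >> 32            # whole seconds since the NTP epoch
    fraction = ntp_timestamp & 0xFFFFFFFF    # units of 1/2**32 second
    return seconds - NTP_UNIX_DELTA + fraction / 2**32

# The Unix epoch itself, expressed as an NTP timestamp:
assert ntp_to_unix(NTP_UNIX_DELTA << 32) == 0.0
# Half a second past one second after the Unix epoch:
assert ntp_to_unix(((NTP_UNIX_DELTA + 1) << 32) | (1 << 31)) == 1.5
```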
1. Enter system view.
system-view
2. Enable the SNTP service.
sntp enable
By default, the SNTP service is disabled.
2. Enable SNTP authentication.
sntp authentication enable
By default, SNTP authentication is disabled.
3. Configure an SNTP authentication key.
sntp authentication-keyid keyid authentication-mode { hmac-sha-1 |
hmac-sha-256 | hmac-sha-384 | hmac-sha-512 | md5 } { cipher | simple } string
[ acl ipv4-acl-number | ipv6 acl ipv6-acl-number ] *
By default, no SNTP authentication key exists.
4. Specify the key as a trusted key.
sntp reliable authentication-keyid keyid
By default, no trusted key is specified.
5. Associate the SNTP authentication key with an NTP server.
IPv4:
sntp unicast-server { server-name | ip-address } [ vpn-instance
vpn-instance-name ] authentication-keyid keyid
IPv6:
sntp ipv6 unicast-server { server-name | ipv6-address } [ vpn-instance
vpn-instance-name ] authentication-keyid keyid
By default, no NTP server is specified.
Task Command
Display information about all IPv6 SNTP associations. display sntp ipv6 sessions
Display information about all IPv4 SNTP associations. display sntp sessions
SNTP configuration examples
Example: Configuring SNTP
Network configuration
As shown in Figure 54, perform the following tasks:
• Configure Device A's local clock as its reference source, with stratum level 2.
• Configure Device B to operate in SNTP client mode, and specify Device A as the NTP server.
• Configure NTP authentication on Device A and SNTP authentication on Device B.
Figure 54 Network diagram: Device A (NTP server, 1.0.1.11/24) and Device B (SNTP client, 1.0.1.12/24)
Procedure
1. Assign an IP address to each interface, and make sure Device A and Device B can reach each
other, as shown in Figure 54. (Details not shown.)
2. Configure Device A:
# Enable the NTP service.
<DeviceA> system-view
[DeviceA] ntp-service enable
# Specify NTP for obtaining the time.
[DeviceA] clock protocol ntp
# Configure the local clock as the reference source, with stratum level 2.
[DeviceA] ntp-service refclock-master 2
# Enable NTP authentication on Device A.
[DeviceA] ntp-service authentication enable
# Configure a plaintext NTP authentication key, with key ID of 10 and key value of aNiceKey.
[DeviceA] ntp-service authentication-keyid 10 authentication-mode md5 simple
aNiceKey
# Specify the key as a trusted key.
[DeviceA] ntp-service reliable authentication-keyid 10
3. Configure Device B:
# Enable the SNTP service.
<DeviceB> system-view
[DeviceB] sntp enable
# Specify NTP for obtaining the time.
[DeviceB] clock protocol ntp
# Enable SNTP authentication on Device B.
[DeviceB] sntp authentication enable
# Configure a plaintext authentication key, with key ID of 10 and key value of aNiceKey.
[DeviceB] sntp authentication-keyid 10 authentication-mode md5 simple aNiceKey
# Specify the key as a trusted key.
[DeviceB] sntp reliable authentication-keyid 10
# Specify Device A as the NTP server of Device B, and associate the server with key 10.
[DeviceB] sntp unicast-server 1.0.1.11 authentication-keyid 10
Configuring PoE
About PoE
Power over Ethernet (PoE) enables a network device to supply power to terminals over twisted pair
cables.
PoE system
As shown in Figure 55, a PoE system includes the following elements:
• PoE power supply—A PoE power supply provides power for the entire PoE system.
• PSE—A power sourcing equipment (PSE) supplies power to PDs. PSE devices are classified
into single-PSE devices and multiple-PSE devices.
A single-PSE device has only one piece of PSE firmware.
A multiple-PSE device has multiple PSEs and uses PSE IDs to identify them. To view the
mapping between the ID and slot number of a PSE, execute the display poe device
command.
• PI—A power interface (PI) is a PoE-capable Ethernet interface on a PSE.
• PD—A powered device (PD) receives power from a PSE. PDs include IP telephones, APs,
portable chargers, POS terminals, and Web cameras. You can also connect a PD to a
redundant power source for reliability.
Figure 55 PoE system diagram: a PoE power supply powers the PSE, whose PIs (PI A, PI B, and PI C) supply power to PD A, PD B, and PD C
AI-driven PoE
AI-driven PoE innovatively integrates AI technologies into PoE switches and offers the following
benefits:
• Adaptive power supply management
AI-driven PoE can adaptively adjust power supply parameters to fit the power needs of
various scenarios and PD types, and can resolve issues such as a PD receiving no power
or losing power, minimizing human intervention.
• Priority-based power management
When the power demanded by PDs exceeds the power that the PoE switch can supply,
the system supplies power to PDs based on PI priorities, ensuring power to critical
services while reducing power to lower-priority PIs.
• Smart power module management
For a PoE switch with a dual power module architecture, AI-driven PoE can automatically
calculate and regulate power output of each power module based on the type and quality of the
power modules, maximizing the power usage of each power module. When a power module is
removed, AI-driven PoE recalculates to preferentially guarantee power supply to PDs that are
being powered.
• Alarms and logs
AI-driven PoE constantly monitors the PoE power supply status. If an exception occurs, it
automatically analyzes, recovers, or even resets the PoE function, and outputs alarms and logs
for reference.
• High PoE security
When an exception such as short circuit or circuit break occurs, AI-driven PoE immediately
starts self-protection to protect the PoE switch from being damaged or burned.
• Fast reboot
When the PoE switch restarts after a power failure, AI-driven PoE begins supplying power
to PDs before the switch completes startup, shortening the time that PDs take to recover
from the power failure.
• Perpetual PoE
During a hot reboot of the PoE switch from the reboot command, AI-driven PoE continuously
monitors the PD states and ensures continued power supply to PDs, maintaining terminal
service stability.
• Remote PoE
AI-driven PoE adaptively adjusts the network bandwidth based on the transmission distance
between a PSE and a PD. When the transmission distance exceeds 100 m (328.08 ft), the
system automatically reduces the port rate to reduce the line loss and ensure signal and
power transmission quality. With AI-driven PoE enabled, a PSE can transmit power to a PD
located 200 m (656.17 ft) to 250 m (820.21 ft) away.
• Configure the settings directly on the PI.
• Configure a PoE profile and apply it to the PI. If you apply a PoE profile to multiple PIs, these PIs
share the same PoE settings. If you move a PD to another PI, you can apply the PoE profile of
the original PI to the new PI, which saves you from reconfiguring PoE on the new PI.
You can use only one of these methods to configure a parameter for a PI. To reconfigure a
parameter by using the other method, you must first remove the original configuration.
You must use the same configuration method for the poe max-power max-power and poe
priority { critical | high | low } commands.
• Signal pair mode—Signal pairs (on pins 1, 2, 3, and 6) of the twisted pair cable are used for
power transmission.
• Spare pair mode—Spare pairs (on pins 4, 5, 7, and 8) of the twisted pair cable are used for
power transmission.
Restrictions and guidelines
A PI can supply power to a PD only when the PI and PD use the same power transmission mode.
Procedure
1. Enter system view.
system-view
2. Enter PI view.
interface interface-type interface-number
3. (Optional.) Configure a description for the PD connected to the PI.
poe pd-description text
By default, no description is configured for the PD connected to the PI.
4. Enable PoE for the PI.
poe enable
By default, PoE is disabled on a PI.
By default, the PoE module supplies power to a PI immediately after the poe enable
command is executed. With PoE delay configured, the PoE module starts a delay timer when the
poe enable command is executed and supplies power to the PI only after the timer expires.
Restrictions and guidelines
The undo poe enable command takes effect immediately upon configuration and is not affected
by the PoE delay timer.
Procedure
1. Enter system view.
system-view
2. Enable PoE delay.
poe power-delay time
By default, PoE delay is disabled.
2. Enable nonstandard PD detection. Choose one option as needed.
Enable nonstandard PD detection for the PSE.
poe legacy enable pse pse-id
By default, nonstandard PD detection is disabled for a PSE.
Execute the following commands in sequence to enable nonstandard PD detection for a PI:
interface interface-type interface-number
poe legacy enable
By default, nonstandard PD detection is disabled for a PI.
Configuring the PI priority policy
About this task
The PI priority policy enables the PSE to perform priority-based power allocation to PIs when PSE
power overload occurs. The priority levels for PIs are critical, high, and low in descending order.
When PSE power overload occurs, the PSE supplies power to PDs as follows:
• If the PI priority policy is disabled, the PSE supplies power to PDs depending on whether you
have configured the maximum PSE power.
If you have configured the maximum PSE power, the PSE does not supply power to the
newly-added or existing PD that causes PSE power overload.
If you have not configured the maximum PSE power, the PoE self-protection mechanism is
triggered. The PSE stops supplying power to all PDs.
• If the PI priority policy is enabled, the PSE supplies power to PDs as follows:
If a PD being powered causes PSE power overload, the PSE stops supplying power to the
PD.
If a newly-added PD causes PSE power overload, the PSE supplies power to PDs in priority
descending order of the PIs to which they are connected. If the newly-added PD and a PD
being powered have the same priority, the PD being powered takes precedence. If multiple
PIs being powered have the same priority, the PI with a smaller ID takes precedence.
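The allocation order described above can be sketched as a short model. This is only an illustration, not device code; the PI IDs and power values are hypothetical, and the rule that an already-powered PD beats a newly-added PD of equal priority is not modeled.

```python
# Illustrative model of priority-based PoE power allocation.
# Hypothetical PIs and wattages; not device firmware.

PRIORITY_ORDER = {"critical": 0, "high": 1, "low": 2}

def allocate_power(pis, budget):
    """Grant power to PIs in priority order (critical > high > low).

    Ties are broken by smaller PI ID, matching the rule that among
    PIs of equal priority the one with the smaller ID takes precedence.
    Returns the list of PI IDs that receive power.
    """
    powered = []
    remaining = budget
    for pi in sorted(pis, key=lambda p: (PRIORITY_ORDER[p["priority"]], p["id"])):
        if pi["power"] <= remaining:
            powered.append(pi["id"])
            remaining -= pi["power"]
    return powered

pis = [
    {"id": 1, "priority": "critical", "power": 15},
    {"id": 2, "priority": "low", "power": 30},
    {"id": 3, "priority": "high", "power": 30},
]
# With a 50 W budget, the low-priority PI 2 is denied power.
print(allocate_power(pis, 50))  # [1, 3]
```

With a 100 W budget the same call powers all three PIs, still in priority order.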
Restrictions and guidelines
Before you configure a PI with critical priority, make sure the remaining power (the maximum
PSE power minus the sum of the maximum powers of the existing critical-priority PIs) is greater
than the maximum power of the PI.
Configuration for a PI whose power is preempted remains unchanged.
Procedure
1. Enter system view.
system-view
2. Enable the PI priority policy.
poe pd-policy priority
By default, the PI priority policy is disabled.
3. Enter PI view.
interface interface-type interface-number
4. (Optional.) Configure a priority for the PI.
poe priority { critical | high | low }
By default, the priority for a PI is low.
1. Enter system view.
system-view
2. Configure a power alarm threshold for a PSE.
poe utilization-threshold value pse pse-id
By default, the power alarm threshold for a PSE is 80%.
By default, a PoE profile is not applied to a PI.
Applying a PoE profile in PI view
1. Enter system view.
system-view
2. Enter PI view.
interface interface-type interface-number
3. Apply the PoE profile to the interface.
apply poe-profile { index index | name profile-name }
By default, a PoE profile is not applied to a PI.
3. Enable forced power supply.
poe force-power
By default, forced power supply is disabled.
Display and maintenance commands for PoE
Task Command
Display general PSE information. display poe device [ slot slot-number ]
Display the power supplying information for the specified PI. display poe interface [ interface-type interface-number ]
Display power information for PIs. display poe interface power [ interface-type interface-number ]
Display detailed PSE information. display poe pse [ pse-id ]
Display the power supplying information for all PIs on a PSE. display poe pse pse-id interface
Display power information for all PIs on a PSE. display poe pse pse-id interface power
(Network diagram: IP phones A, B, and C and APs A and B connect to the PSE through GigabitEthernet 1/0/1 through GigabitEthernet 1/0/5.)
Procedure
# Enable the PI priority policy.
<PSE> system-view
[PSE] poe pd-policy priority
# Enable PoE on GigabitEthernet 1/0/1, GigabitEthernet 1/0/2, and GigabitEthernet 1/0/3, and
configure their power supply priority as critical.
[PSE] interface gigabitethernet 1/0/1
[PSE-GigabitEthernet1/0/1] poe enable
[PSE-GigabitEthernet1/0/1] poe priority critical
[PSE-GigabitEthernet1/0/1] quit
[PSE] interface gigabitethernet 1/0/2
[PSE-GigabitEthernet1/0/2] poe enable
[PSE-GigabitEthernet1/0/2] poe priority critical
[PSE-GigabitEthernet1/0/2] quit
[PSE] interface gigabitethernet 1/0/3
[PSE-GigabitEthernet1/0/3] poe enable
[PSE-GigabitEthernet1/0/3] poe priority critical
[PSE-GigabitEthernet1/0/3] quit
# Enable PoE on GigabitEthernet 1/0/4 and GigabitEthernet 1/0/5, and set the maximum power of
GigabitEthernet 1/0/5 to 9000 milliwatts.
[PSE] interface gigabitethernet 1/0/4
[PSE-GigabitEthernet1/0/4] poe enable
[PSE-GigabitEthernet1/0/4] quit
[PSE] interface gigabitethernet 1/0/5
[PSE-GigabitEthernet1/0/5] poe enable
[PSE-GigabitEthernet1/0/5] poe max-power 9000
Troubleshooting PoE
Failure to set the priority of a PI to critical
Symptom
Power supply priority configuration for a PI failed.
Solution
To resolve the issue:
1. Identify whether the remaining guaranteed power of the PSE is lower than the maximum power
of the PI. If it is, increase the maximum PSE power or reduce the maximum power of the PI.
2. Identify whether the priority has been configured through other methods. If the priority has been
configured, remove the configuration.
3. If the issue persists, contact Hewlett Packard Enterprise Support.
Failure to apply a PoE profile to a PI
Symptom
PoE profile application for a PI failed.
Solution
To resolve the issue:
1. Identify whether some settings in the PoE profile have been configured. If they have been
configured, remove the configuration.
2. Identify whether the settings in the PoE profile meet the requirements of the PI. If they do not,
modify the settings in the PoE profile.
3. Identify whether another PoE profile is already applied to the PI. If it is, remove the application.
4. If the issue persists, contact Hewlett Packard Enterprise Support.
Configuring SNMP
About SNMP
Simple Network Management Protocol (SNMP) is used for a management station to access and
operate the devices on a network, regardless of their vendors, physical characteristics, and
interconnect technologies.
SNMP enables network administrators to read and set the variables on managed devices for state
monitoring, troubleshooting, statistics collection, and other management purposes.
SNMP framework
The SNMP framework contains the following elements:
• SNMP manager—Works on an NMS to monitor and manage the SNMP-capable devices in the
network. It can get and set values of MIB objects on an agent.
• SNMP agent—Works on a managed device to receive and handle requests from the NMS, and
sends notifications to the NMS when events, such as an interface state change, occur.
• Management Information Base (MIB)—Specifies the variables (for example, interface status
and CPU usage) maintained by the SNMP agent for the SNMP manager to read and set.
Figure 57 Relationship between NMS, agent, and MIB
(The figure shows the MIB as a tree of numbered object nodes that the NMS accesses through the agent.)
A MIB view represents a set of MIB objects (or MIB object hierarchies) with certain access privileges
and is identified by a view name. The MIB objects included in the MIB view are accessible while
those excluded from the MIB view are inaccessible.
A MIB view can have multiple view records each identified by a view-name oid-tree pair.
You control access to the MIB by assigning MIB views to SNMP groups or communities.
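The view-record lookup described above can be sketched as follows. This is a simplified illustration under the assumption that the most specific (longest) matching subtree record decides access; MIB sub-tree masks are omitted.

```python
# Simplified MIB view lookup: the most specific (longest) matching
# subtree record decides access. Subtree masks are omitted.

def oid_in_subtree(oid, subtree):
    """True if the OID (a tuple of ints) falls under the subtree prefix."""
    return oid[:len(subtree)] == subtree

def accessible(oid, records):
    """records: list of (subtree_tuple, 'included' | 'excluded')."""
    best = None
    for subtree, access in records:
        if oid_in_subtree(oid, subtree):
            if best is None or len(subtree) > len(best[0]):
                best = (subtree, access)
    return best is not None and best[1] == "included"

# A view that opens the iso subtree but hides snmpUsmMIB (1.3.6.1.6.3.15),
# in the spirit of the predefined ViewDefault view.
view = [
    ((1,), "included"),                     # iso
    ((1, 3, 6, 1, 6, 3, 15), "excluded"),   # snmpUsmMIB
]
print(accessible((1, 3, 6, 1, 2, 1, 1, 1), view))   # True  (sysDescr)
print(accessible((1, 3, 6, 1, 6, 3, 15, 1), view))  # False (under snmpUsmMIB)
```

An OID outside every record, such as one not under iso, matches nothing and is inaccessible.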
SNMP operations
SNMP provides the following basic operations:
• Get—NMS retrieves the value of an object node in an agent MIB.
• Set—NMS modifies the value of an object node in an agent MIB.
• Notification—SNMP notifications include traps and informs. The SNMP agent sends traps or
informs to report events to the NMS. The difference between these two types of notification is
that informs require acknowledgment but traps do not. Informs are more reliable but are also
resource-consuming. Traps are available in SNMPv1, SNMPv2c, and SNMPv3. Informs are
available only in SNMPv2c and SNMPv3.
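The trap/inform difference can be sketched as follows: a trap is sent once without acknowledgment, while an inform is retransmitted until the NMS acknowledges it or retries run out. The channel here is simulated; no real SNMP stack is involved.

```python
# Sketch of the reliability difference between traps and informs.
# The lossy channel is simulated; no real SNMP stack is involved.

def send_trap(channel):
    """One attempt, no acknowledgment: delivery is not guaranteed."""
    return channel()

def send_inform(channel, max_retries=3):
    """Retry until the NMS acknowledges; returns attempts used or None."""
    for attempt in range(1, max_retries + 1):
        if channel():          # True means the NMS received and acked
            return attempt
    return None                # informs can still be lost if all retries fail

def flaky_channel(failures):
    """Simulated channel that drops the first `failures` packets."""
    state = {"left": failures}
    def ch():
        if state["left"] > 0:
            state["left"] -= 1
            return False
        return True
    return ch

print(send_trap(flaky_channel(1)))    # False: the lone trap was lost
print(send_inform(flaky_channel(1)))  # 2: the inform succeeded on retry
```

The retransmissions are what make informs more reliable but also more resource-consuming than traps.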
Protocol versions
The device supports SNMPv1, SNMPv2c, and SNMPv3 in non-FIPS mode and supports only
SNMPv3 in FIPS mode. An NMS and an SNMP agent must use the same SNMP version to
communicate with each other.
• SNMPv1—Uses community names for authentication. To access an SNMP agent, an NMS
must use the same community name as set on the SNMP agent. If the community name used
by the NMS differs from the community name set on the agent, the NMS cannot establish an
SNMP session to access the agent or receive traps from the agent.
• SNMPv2c—Uses community names for authentication. SNMPv2c is compatible with SNMPv1,
but supports more operation types, data types, and error codes.
• SNMPv3—Uses a user-based security model (USM) to secure SNMP communication. You can
configure authentication and privacy mechanisms to authenticate and encrypt SNMP packets
for integrity, authenticity, and confidentiality.
FIPS compliance
The device supports the FIPS mode that complies with NIST FIPS 140-2 requirements. Support for
features, commands, and parameters might differ in FIPS mode and non-FIPS mode. For more
information about FIPS mode, see Security Configuration Guide.
SNMP tasks at a glance
To configure SNMP, perform the following tasks:
1. (Optional.) Setting the interval at which the SNMP module examines the system configuration
for changes
2. Enabling the SNMP agent
3. Enabling SNMP versions
4. Configuring SNMP basic parameters
(Optional.) Configuring SNMP common parameters
Configuring an SNMPv1 or SNMPv2c community
Configuring an SNMPv3 group and user
5. (Optional.) Configuring SNMP notifications
6. (Optional.) Configuring SNMP logging
7. (Optional.) Setting the interval at which the SNMP module examines the system configuration
for changes
To use SNMP notifications in IPv6, enable SNMPv2c or SNMPv3.
Procedure
1. Enter system view.
system-view
2. Enable SNMP versions.
In non-FIPS mode:
snmp-agent sys-info version { all | { v1 | v2c | v3 } * }
In FIPS mode:
snmp-agent sys-info version { all | v3 }
By default, SNMPv3 is enabled.
If you execute the command multiple times with different options, all the configurations take
effect, but only one SNMP version is used by the agent and NMS for communication.
By default, the MIB view ViewDefault is predefined. In this view, all MIB objects in the iso
subtree are accessible except those in the snmpUsmMIB, snmpVacmMIB, and snmpModules.18
subtrees.
Each view-name oid-tree pair represents a view record. If you specify the same record with
different MIB sub-tree masks multiple times, the most recent configuration takes effect.
6. Configure the system management information.
Configure the system contact.
snmp-agent sys-info contact sys-contact
By default, the system contact is Hewlett Packard Enterprise Company.
Configure the system location.
snmp-agent sys-info location sys-location
By default, no system location is configured.
7. Create an SNMP context.
snmp-agent context context-name
By default, no SNMP contexts exist.
8. Configure the maximum SNMP packet size (in bytes) that the SNMP agent can handle.
snmp-agent packet max-size byte-count
By default, an SNMP agent can process SNMP packets with a maximum size of 1500 bytes.
9. Set the DSCP value for SNMP responses.
snmp-agent packet response dscp dscp-value
By default, the DSCP value for SNMP responses is 0.
2. Create an SNMPv1/v2c community. Choose one option as needed.
In VACM mode:
snmp-agent community { read | write } [ simple | cipher ]
community-name [ mib-view view-name ] [ acl { ipv4-acl-number | name
ipv4-acl-name } | acl ipv6 { ipv6-acl-number | name ipv6-acl-name } ]
*
In RBAC mode:
snmp-agent community [ simple | cipher ] community-name user-role
role-name [ acl { ipv4-acl-number | name ipv4-acl-name } | acl ipv6
{ ipv6-acl-number | name ipv6-acl-name } ] *
3. (Optional.) Map the SNMP community name to an SNMP context.
snmp-agent community-map community-name context context-name
Table 7 Basic configuration requirements for different security models
Configuring an SNMPv3 group and user in FIPS mode
1. Enter system view.
system-view
2. Create an SNMPv3 group.
snmp-agent group v3 group-name { authentication | privacy }
[ notify-view view-name | read-view view-name | write-view view-name ]
* [ acl { ipv4-acl-number | name ipv4-acl-name } | acl ipv6
{ ipv6-acl-number | name ipv6-acl-name } ] *
3. (Optional.) Calculate the encrypted form for the key in plaintext form.
snmp-agent calculate-password plain-password mode { aes192sha |
aes256sha | sha } { local-engineid | specified-engineid engineid }
4. Create an SNMPv3 user. Choose one option as needed.
In VACM mode:
snmp-agent usm-user v3 user-name group-name [ remote { ipv4-address |
ipv6 ipv6-address } [ vpn-instance vpn-instance-name ] ] { cipher |
simple } authentication-mode sha auth-password [ privacy-mode
{ aes128 | aes192 | aes256 } priv-password ] [ acl { ipv4-acl-number |
name ipv4-acl-name } | acl ipv6 { ipv6-acl-number | name
ipv6-acl-name } ] *
In RBAC mode:
snmp-agent usm-user v3 user-name user-role role-name [ remote
{ ipv4-address | ipv6 ipv6-address } [ vpn-instance
vpn-instance-name ] ] { cipher | simple } authentication-mode sha
auth-password [ privacy-mode { aes128 | aes192 | aes256 }
priv-password ] [ acl { ipv4-acl-number | name ipv4-acl-name } | acl
ipv6 { ipv6-acl-number | name ipv6-acl-name } ] *
To send notifications to an SNMPv3 NMS, you must specify the remote keyword.
5. (Optional.) Assign a user role to the SNMPv3 user created in RBAC mode.
snmp-agent usm-user v3 user-name user-role role-name
By default, an SNMPv3 user has the user role assigned to it at its creation.
To generate linkUp or linkDown notifications when the link state of an interface changes, you must
perform the following tasks:
• Enable linkUp or linkDown notification globally by using the snmp-agent trap enable
standard [ linkdown | linkup ] * command.
• Enable linkUp or linkDown notification on the interface by using the enable snmp trap
updown command.
After you enable notifications for a module, whether the module generates notifications also depends
on the configuration of the module. For more information, see the configuration guide for each
module.
To use SNMP notifications in IPv6, enable SNMPv2c or SNMPv3.
Procedure
1. Enter system view.
system-view
2. Enable SNMP notifications.
snmp-agent trap enable [ configuration | protocol | standard
[ authentication | coldstart | linkdown | linkup | warmstart ] * |
system ]
By default, SNMP configuration notifications, standard notifications, and system notifications
are enabled. Whether other SNMP notifications are enabled varies by module.
For the device to send SNMP notifications for a protocol, first enable the protocol.
3. Enter interface view.
interface interface-type interface-number
4. Enable link state notifications.
enable snmp trap updown
By default, link state notifications are enabled.
Configuring the parameters for sending SNMP traps
1. Enter system view.
system-view
2. Configure a target host.
In non-FIPS mode:
snmp-agent target-host trap address udp-domain { ipv4-target-host |
ipv6 ipv6-target-host } [ udp-port port-number ] [ dscp dscp-value ]
[ vpn-instance vpn-instance-name ] params securityname
security-string [ v1 | v2c | v3 [ authentication | privacy ] ]
In FIPS mode:
snmp-agent target-host trap address udp-domain { ipv4-target-host |
ipv6 ipv6-target-host } [ udp-port port-number ] [ dscp dscp-value ]
[ vpn-instance vpn-instance-name ] params securityname
security-string v3 { authentication | privacy }
By default, no target host is configured.
3. (Optional.) Configure a source address for sending traps.
snmp-agent trap source interface-type { interface-number |
interface-number.subnumber }
By default, SNMP uses the IP address of the outgoing routed interface as the source IP
address.
4. (Optional.) Enable SNMP alive traps and set the sending interval.
snmp-agent trap periodical-interval interval
By default, SNMP alive traps are enabled and the sending interval is 60 seconds.
Configuring the parameters for sending SNMP informs
1. Enter system view.
system-view
2. Configure a target host.
In non-FIPS mode:
snmp-agent target-host inform address udp-domain { ipv4-target-host |
ipv6 ipv6-target-host } [ udp-port port-number ] [ vpn-instance
vpn-instance-name ] params securityname security-string { v2c | v3
[ authentication | privacy ] }
In FIPS mode:
snmp-agent target-host inform address udp-domain { ipv4-target-host |
ipv6 ipv6-target-host } [ udp-port port-number ] [ vpn-instance
vpn-instance-name ] params securityname security-string v3
{ authentication | privacy }
By default, no target host is configured.
Only SNMPv2c and SNMPv3 support inform packets.
3. (Optional.) Configure a source address for sending informs.
snmp-agent inform source interface-type { interface-number |
interface-number.subnumber }
By default, SNMP uses the IP address of the outgoing routed interface as the source IP
address.
Configuring common parameters for sending notifications
1. Enter system view.
system-view
2. (Optional.) Enable extended linkUp/linkDown notifications.
snmp-agent trap if-mib link extended
By default, the SNMP agent sends standard linkUp/linkDown notifications.
If the NMS does not support extended linkUp/linkDown notifications, do not use this command.
3. (Optional.) Set the notification queue size.
snmp-agent trap queue-size size
By default, the notification queue can hold 100 notification messages.
4. (Optional.) Set the notification lifetime.
snmp-agent trap life seconds
The default notification lifetime is 120 seconds.
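The queue-size and lifetime settings above can be sketched as a small model. This is only an illustration; the drop-oldest policy when the queue is full is an assumption made for the sketch, while the defaults mirror the documented 100 messages and 120 seconds.

```python
# Sketch of a bounded notification queue with per-message lifetime.
# Assumption for illustration: the oldest message is dropped when the
# queue is full. Defaults mirror the documented 100 messages / 120 s.

from collections import deque

class NotificationQueue:
    def __init__(self, size=100, lifetime=120):
        self.size = size
        self.lifetime = lifetime
        self.q = deque()          # entries of (enqueue_time, message)

    def expire(self, now):
        """Discard messages older than the lifetime."""
        while self.q and now - self.q[0][0] >= self.lifetime:
            self.q.popleft()

    def push(self, now, msg):
        self.expire(now)
        if len(self.q) == self.size:
            self.q.popleft()      # assumed policy: drop the oldest
        self.q.append((now, msg))

    def pending(self, now):
        self.expire(now)
        return [m for _, m in self.q]

q = NotificationQueue(size=2, lifetime=120)
q.push(0, "linkDown")
q.push(10, "linkUp")
q.push(20, "coldStart")   # queue full: "linkDown" is dropped
print(q.pending(20))      # ['linkUp', 'coldStart']
print(q.pending(131))     # ['coldStart'] ("linkUp" expired at t=130)
```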
1. Enter system view.
system-view
2. Set the interval at which the SNMP module examines the system configuration for changes.
snmp-agent configuration-examine interval interval
By default, the SNMP module examines the system configuration for changes at intervals of
600 seconds.
Task Command
Display SNMPv1 or SNMPv2c community information. (This command is not supported in FIPS mode.) display snmp-agent community [ read | write ]
SNMP configuration examples
Example: Configuring SNMPv1/SNMPv2c
The device does not support this configuration example in FIPS mode.
The configuration procedure is the same for SNMPv1 and SNMPv2c. This example uses SNMPv1.
Network configuration
As shown in Figure 59, the NMS (1.1.1.2/24) uses SNMPv1 to manage the SNMP agent (1.1.1.1/24),
and the agent automatically sends notifications to report events to the NMS.
Figure 59 Network diagram
(The agent at 1.1.1.1/24 connects to the NMS at 1.1.1.2/24.)
Procedure
1. Configure the SNMP agent:
# Assign IP address 1.1.1.1/24 to the agent and make sure the agent and the NMS can reach
each other. (Details not shown.)
# Specify SNMPv1, and create read-only community public and read and write community
private.
<Agent> system-view
[Agent] snmp-agent sys-info version v1
[Agent] snmp-agent community read public
[Agent] snmp-agent community write private
# Configure contact and physical location information for the agent.
[Agent] snmp-agent sys-info contact Mr.Wang-Tel:3306
[Agent] snmp-agent sys-info location telephone-closet,3rd-floor
# Enable SNMP notifications, specify the NMS at 1.1.1.2 as an SNMP trap destination, and use
public as the community name. (To make sure the NMS can receive traps, specify the same
SNMP version in the snmp-agent target-host command as is configured on the NMS.)
[Agent] snmp-agent trap enable
[Agent] snmp-agent target-host trap address udp-domain 1.1.1.2 params securityname
public v1
2. Configure the SNMP NMS:
Specify SNMPv1.
Create read-only community public, and create read and write community private.
Set the timeout timer and maximum number of retries as needed.
For information about configuring the NMS, see the NMS manual.
NOTE:
The SNMP settings on the agent and the NMS must match.
Protocol version: SNMPv1
Operation: Get
Request binding:
1: 1.3.6.1.2.1.2.2.1.4.135471
Response binding:
1: Oid=ifMtu.135471 Syntax=INT Value=1500
Get finished
# Use a wrong community name to get the value of a MIB node on the agent. You can see an
authentication failure trap on the NMS.
1.1.1.1/2934 V1 Trap = authenticationFailure
SNMP Version = V1
Community = public
Command = Trap
Enterprise = 1.3.6.1.4.1.43.1.16.4.3.50
GenericID = 4
SpecificID = 0
Time Stamp = 8:35:25.68
# Create SNMPv3 user RBACtest. Assign user role test to RBACtest. Set the authentication
algorithm to SHA-1, authentication key to 123456TESTauth&!, encryption algorithm to AES,
and encryption key to 123456TESTencr&!.
[Agent] snmp-agent usm-user v3 RBACtest user-role test simple authentication-mode sha
123456TESTauth&! privacy-mode aes128 123456TESTencr&!
# Configure contact and physical location information for the agent.
[Agent] snmp-agent sys-info contact Mr.Wang-Tel:3306
[Agent] snmp-agent sys-info location telephone-closet,3rd-floor
# Enable notifications on the agent. Specify the NMS at 1.1.1.2 as the notification destination,
and RBACtest as the username.
[Agent] snmp-agent trap enable
[Agent] snmp-agent target-host trap address udp-domain 1.1.1.2 params securityname
RBACtest v3 privacy
2. Configure the NMS:
Specify SNMPv3.
Create SNMPv3 user RBACtest.
Enable authentication and encryption. Set the authentication algorithm to SHA-1,
authentication key to 123456TESTauth&!, encryption algorithm to AES, and encryption key
to 123456TESTencr&!.
Set the timeout timer and maximum number of retries.
For information about configuring the NMS, see the NMS manual.
NOTE:
The SNMP settings on the agent and the NMS must match.
# Enable notifications on the agent. Specify the NMS at 1.1.1.2 as the trap destination, and
VACMtest as the username.
[Agent] snmp-agent trap enable
[Agent] snmp-agent target-host trap address udp-domain 1.1.1.2 params securityname
VACMtest v3 privacy
2. Configure the SNMP NMS:
Specify SNMPv3.
Create SNMPv3 user VACMtest.
Enable authentication and encryption. Set the authentication algorithm to SHA-1,
authentication key to 123456TESTauth&!, encryption algorithm to AES, and encryption key
to 123456TESTencr&!.
Set the timeout timer and maximum number of retries.
For information about configuring the NMS, see the NMS manual.
NOTE:
The SNMP settings on the agent and the NMS must match.
Configuring RMON
About RMON
Remote Network Monitoring (RMON) is an SNMP-based network management protocol. It enables
proactive remote monitoring and management of network devices.
RMON groups
Among standard RMON groups, the device implements the statistics group, history group, event
group, alarm group, probe configuration group, and user history group. The Comware system also
implements a private alarm group, which enhances the standard alarm group. The probe
configuration group and user history group are not configurable from the CLI. To configure these two
groups, you must access the MIB.
Statistics group
The statistics group samples traffic statistics for monitored Ethernet interfaces and stores the
statistics in the Ethernet statistics table (ethernetStatsTable). The statistics include:
• Number of collisions.
• CRC alignment errors.
• Number of undersize or oversize packets.
• Number of broadcasts.
• Number of multicasts.
• Number of bytes received.
• Number of packets received.
The statistics in the Ethernet statistics table are cumulative sums.
History group
The history group periodically samples traffic statistics on interfaces and saves the history samples
in the history table (etherHistoryTable). The statistics include:
• Bandwidth utilization.
• Number of error packets.
• Total number of packets.
The history table stores traffic statistics collected for each sampling interval.
Event group
The event group controls the generation and notifications of events triggered by the alarms defined
in the alarm group and the private alarm group. The following are RMON alarm event handling
methods:
• Log—Logs event information (including event time and description) in the event log table so the
management device can get the logs through SNMP.
• Trap—Sends an SNMP notification when the event occurs.
• Log-Trap—Logs event information in the event log table and sends an SNMP notification when
the event occurs.
• None—Takes no actions.
Alarm group
The RMON alarm group monitors alarm variables, such as the count of incoming packets
(etherStatsPkts) on an interface. After you create an alarm entry, the RMON agent samples the
value of the monitored alarm variable regularly. If the value of the monitored variable is greater than
or equal to the rising threshold, a rising alarm event is triggered. If the value of the monitored variable
is smaller than or equal to the falling threshold, a falling alarm event is triggered. The event group
defines the action to take on the alarm event.
If an alarm entry crosses a threshold multiple times in succession, the RMON agent generates an
alarm event only for the first crossing. For example, if the value of a sampled alarm variable crosses
the rising threshold multiple times before it crosses the falling threshold, only the first crossing
triggers a rising alarm event, as shown in Figure 61.
Figure 61 Rising and falling alarm events
Sample types for the alarm group and the private alarm group
The RMON agent supports the following sample types:
• absolute—RMON compares the value of the monitored variable with the rising and falling
thresholds at the end of the sampling interval.
• delta—RMON subtracts the value of the monitored variable at the previous sample from the
current value, and then compares the difference with the rising and falling thresholds.
You can create a history control entry successfully even if the specified bucket size exceeds the
available history table size. RMON sets the bucket size as close to the expected bucket size as
possible.
Procedure
1. Enter system view.
system-view
2. Enter Ethernet interface view.
interface interface-type interface-number
3. Create an RMON history control entry.
rmon history entry-number buckets number interval interval [ owner
text ]
By default, no RMON history control entries exist.
You can create multiple RMON history control entries for an Ethernet interface.
Prerequisites
To send notifications to the NMS when an alarm is triggered, configure the SNMP agent as described
in "Configuring SNMP" before configuring the RMON alarm function.
Procedure
1. Enter system view.
system-view
2. (Optional.) Create an RMON event entry.
rmon event entry-number [ description string ] { log | log-trap
security-string | none | trap security-string } [ owner text ]
By default, no RMON event entries exist.
3. Create an RMON alarm entry. Choose one option as needed.
Create an RMON alarm entry.
rmon alarm entry-number alarm-variable sampling-interval
{ absolute | delta } [ startup-alarm { falling | rising |
rising-falling } ] rising-threshold threshold-value1 event-entry1
falling-threshold threshold-value2 event-entry2 [ owner text ]
Create an RMON private alarm entry.
rmon prialarm entry-number prialarm-formula prialarm-des
sampling-interval { absolute | delta } [ startup-alarm { falling |
rising | rising-falling } ] rising-threshold threshold-value1
event-entry1 falling-threshold threshold-value2 event-entry2
entrytype { forever | cycle cycle-period } [ owner text ]
By default, no RMON alarm entries or RMON private alarm entries exist.
You can associate an alarm with an event that has not been created yet. The alarm will trigger
the event only after the event is created.
Task Command
Display RMON alarm entries. display rmon alarm [ entry-number ]
Display RMON event entries. display rmon event [ entry-number ]
Display log information for event entries. display rmon eventlog [ entry-number ]
RMON configuration examples
Example: Configuring the Ethernet statistics function
Network configuration
As shown in Figure 62, create an RMON Ethernet statistics entry on the device to gather cumulative
traffic statistics for GigabitEthernet 1/0/1.
Figure 62 Network diagram
(The device connects to the server through GigabitEthernet 1/0/1 and is managed by the NMS at 1.1.1.2.)
Procedure
# Create an RMON Ethernet statistics entry for GigabitEthernet 1/0/1.
<Sysname> system-view
[Sysname] interface gigabitethernet 1/0/1
[Sysname-GigabitEthernet1/0/1] rmon statistics 1 owner user1
# Get the traffic statistics from the NMS through SNMP. (Details not shown.)
Figure 63 Network diagram
(The device connects to the server through GigabitEthernet 1/0/1 and is managed by the NMS at 1.1.1.2.)
Procedure
# Create an RMON history control entry to sample traffic statistics every minute for GigabitEthernet
1/0/1. Retain a maximum of eight samples for the interface in the history statistics table.
<Sysname> system-view
[Sysname] interface gigabitethernet 1/0/1
[Sysname-GigabitEthernet1/0/1] rmon history 1 buckets 8 interval 60 owner user1
# Get the traffic statistics from the NMS through SNMP. (Details not shown.)
Figure 64 Network diagram
(The device connects to the server through GigabitEthernet 1/0/1 and is managed by the NMS at 1.1.1.2.)
Procedure
# Configure the SNMP agent (the device) with the same SNMP settings as the NMS at 1.1.1.2. This
example uses SNMPv1, read community public, and write community private.
<Sysname> system-view
[Sysname] snmp-agent
[Sysname] snmp-agent community read public
[Sysname] snmp-agent community write private
[Sysname] snmp-agent sys-info version v1
[Sysname] snmp-agent trap enable
[Sysname] snmp-agent trap log
[Sysname] snmp-agent target-host trap address udp-domain 1.1.1.2 params securityname
public
# Create an RMON event entry and an RMON alarm entry to send SNMP notifications when the
delta sample for 1.3.6.1.2.1.16.1.1.1.4.1 exceeds 100 or drops below 50.
[Sysname] rmon event 1 trap public owner user1
[Sysname] rmon alarm 1 1.3.6.1.2.1.16.1.1.1.4.1 5 delta rising-threshold 100 1
falling-threshold 50 1 owner user1
NOTE:
The string 1.3.6.1.2.1.16.1.1.1.4.1 is the object instance for GigabitEthernet 1/0/1. The digits before
the last digit (1.3.6.1.2.1.16.1.1.1.4) represent the object for total incoming traffic statistics. The last
digit (1) is the RMON Ethernet statistics entry index for GigabitEthernet 1/0/1.
Interface : GigabitEthernet1/0/1<ifIndex.3>
etherStatsOctets : 57329 , etherStatsPkts : 455
etherStatsBroadcastPkts : 53 , etherStatsMulticastPkts : 353
etherStatsUndersizePkts : 0 , etherStatsOversizePkts : 0
etherStatsFragments : 0 , etherStatsJabbers : 0
etherStatsCRCAlignErrors : 0 , etherStatsCollisions : 0
etherStatsDropEvents (insufficient resources): 0
Incoming packets by size :
64 : 7 , 65-127 : 413 , 128-255 : 35
256-511: 0 , 512-1023: 0 , 1024-1518: 0
Configuring the Event MIB
About the Event MIB
The Event Management Information Base (Event MIB) is an SNMPv3-based network management
protocol and is an enhancement to remote network monitoring (RMON). The Event MIB uses
Boolean tests, existence tests, and threshold tests to monitor MIB objects on a local or remote
system. It triggers the predefined notification or set action when a monitored object meets the trigger
condition.
Trigger
The Event MIB uses triggers to manage and associate the three elements of the Event MIB:
monitored object, trigger condition, and action.
Monitored objects
The Event MIB can monitor the following MIB objects:
• Table node.
• Conceptual row node.
• Table column node.
• Simple leaf node.
• Parent node of a leaf node.
To monitor a single MIB object, specify it by its OID or name. To monitor a set of MIB objects, specify
the common OID or name of the group and enable wildcard matching. For example, specify ifDescr.2
to monitor the description for the interface with index 2. Specify ifDescr and enable wildcard
matching to monitor the descriptions for all interfaces.
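The single-object and wildcard cases above amount to an exact OID match versus an OID prefix match. A minimal sketch of the matching rule (the helper is illustrative; the ifDescr OID is 1.3.6.1.2.1.2.2.1.2):

```python
def oid_matches(base_oid, instance_oid, wildcard=False):
    """Return True if instance_oid is monitored given base_oid.

    Without wildcarding, only the exact OID matches. With wildcarding,
    any instance under the base OID matches (a prefix match on
    sub-identifiers).
    """
    base = base_oid.split(".")
    inst = instance_oid.split(".")
    if not wildcard:
        return base == inst
    return inst[:len(base)] == base

# ifDescr.2 monitors the description of the interface with index 2 only.
assert oid_matches("1.3.6.1.2.1.2.2.1.2.2", "1.3.6.1.2.1.2.2.1.2.2")
assert not oid_matches("1.3.6.1.2.1.2.2.1.2", "1.3.6.1.2.1.2.2.1.2.7")
# ifDescr with wildcarding monitors the descriptions of all interfaces.
assert oid_matches("1.3.6.1.2.1.2.2.1.2", "1.3.6.1.2.1.2.2.1.2.7", wildcard=True)
```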
Trigger test
A trigger supports Boolean, existence, and threshold tests.
Boolean test
A Boolean test compares the value of the monitored object with the reference value and takes
actions according to the comparison result. The comparison types include unequal, equal, less,
lessorequal, greater, and greaterorequal. For example, if the comparison type is equal, an event
is triggered when the value of the monitored object equals the reference value. The event will not be
triggered again until the value becomes unequal and comes back to equal.
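The Boolean test is therefore edge-triggered: one event per transition into the true state. A minimal sketch of that behavior (Python used for illustration only):

```python
COMPARATORS = {
    "unequal": lambda v, ref: v != ref,
    "equal": lambda v, ref: v == ref,
    "less": lambda v, ref: v < ref,
    "lessorequal": lambda v, ref: v <= ref,
    "greater": lambda v, ref: v > ref,
    "greaterorequal": lambda v, ref: v >= ref,
}

def boolean_test(samples, comparison, reference):
    """Return one event per transition into the true comparison state.

    The event fires when the comparison first becomes true and is not
    repeated until it has become false and then true again.
    """
    events = []
    was_true = False
    for value in samples:
        is_true = COMPARATORS[comparison](value, reference)
        if is_true and not was_true:
            events.append(value)
        was_true = is_true
    return events

# With comparison type "equal" and reference 5, three consecutive 5s
# trigger a single event; a second event needs an intervening non-5.
assert boolean_test([5, 5, 5, 7, 5], "equal", 5) == [5, 5]
```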
Existence test
An existence test monitors and manages the absence, presence, and change of a MIB object, for
example, interface status. When a monitored object is specified, the system reads the value of the
monitored object regularly.
• If the test type is Absent, the system triggers an alarm event and takes the specified action
when the state of the monitored object changes to absent.
• If the test type is Present, the system triggers an alarm event and takes the specified action
when the state of the monitored object changes to present.
• If the test type is Changed, the system triggers an alarm event and takes the specified action
when the value of the monitored object changes.
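The three existence test types above can be sketched as transitions between consecutive samples, where an absent object yields no value (None models absence here; the helper is illustrative):

```python
def existence_events(samples, test_types):
    """Replay sampled values and report absent/present/changed events.

    None models "object not present in the MIB" at that sampling.
    """
    events = []
    previous = samples[0]
    for value in samples[1:]:
        if value is None and previous is not None and "absent" in test_types:
            events.append("absent")       # object disappeared
        elif value is not None and previous is None and "present" in test_types:
            events.append("present")      # object appeared
        elif (value is not None and previous is not None
              and value != previous and "changed" in test_types):
            events.append("changed")      # value changed between samplings
        previous = value
    return events

# An interface row that appears, changes state, then disappears.
samples = [None, "up", "down", None]
assert existence_events(samples, {"present", "absent", "changed"}) == [
    "present", "changed", "absent"]
```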
Threshold test
A threshold test regularly compares the value of the monitored object with the threshold values.
• A rising alarm event is triggered if the value of the monitored object is greater than or equal to
the rising threshold.
• A falling alarm event is triggered if the value of the monitored object is smaller than or equal to
the falling threshold.
• A rising alarm event is triggered if the difference between the current sampled value and the
previous sampled value is greater than or equal to the delta rising threshold.
• A falling alarm event is triggered if the difference between the current sampled value and the
previous sampled value is smaller than or equal to the delta falling threshold.
• A falling alarm event is triggered if the values of the monitored object, the rising threshold, and
the falling threshold are the same.
• A falling alarm event is triggered if the delta rising threshold, the delta falling threshold, and the
difference between the current sampled value and the previous sampled value is the same.
The alarm management module defines the set or notification action to take on alarm events.
If the value of the monitored object crosses a threshold multiple times in succession, the managed
device triggers an alarm event only for the first crossing. For example, if the value of a sampled
object crosses the rising threshold multiple times before it crosses the falling threshold, only the first
crossing triggers a rising alarm event, as shown in Figure 65.
Figure 65 Rising and falling alarm events (the sampled value plotted over time against the rising and falling thresholds)
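The suppression rule described above (only the first crossing triggers an alarm until the opposite threshold is crossed) can be sketched as follows; the helper is illustrative, not device code:

```python
def threshold_events(samples, rising, falling):
    """Generate rising/falling alarm events with alarm suppression.

    After a rising alarm, no further rising alarm is generated until
    the value has crossed the falling threshold, and vice versa.
    """
    events = []
    armed = "both"  # which alarm type may fire next
    for value in samples:
        if value >= rising and armed in ("both", "rising"):
            events.append(("rising", value))
            armed = "falling"
        elif value <= falling and armed in ("both", "falling"):
            events.append(("falling", value))
            armed = "rising"
    return events

# The value crosses the rising threshold (80) twice before dropping below
# the falling threshold (10): only the first crossing raises a rising alarm.
assert threshold_events([50, 85, 60, 90, 5], rising=80, falling=10) == [
    ("rising", 85), ("falling", 5)]
```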
Event actions
The Event MIB triggers one or both of the following actions when the trigger condition is met:
• Set action—Uses SNMP to set the value of the monitored object.
• Notification action—Uses SNMP to send a notification to the NMS. If an object list is specified
for the notification action, the notification will carry the specified objects in the object list.
Object list
An object list is a set of MIB objects. You can specify an object list in trigger view, trigger-test view
(including trigger-Boolean view, trigger existence view, and trigger threshold view), and
action-notification view. If a notification action is triggered, the device sends a notification carrying
the object list to the NMS.
If you specify object lists in two or all three of the views, the object lists are added to the triggered notifications in this sequence: trigger view, trigger-test view, and action-notification view.
Object owner
Trigger, event, and object list use an owner and name for unique identification. The owner must be
an SNMPv3 user that has been created on the device. If you specify a notification action for a trigger,
you must establish an SNMPv3 connection between the device and NMS by using the SNMPv3
username. For more information about SNMPv3 users, see "SNMP configuration".
• Make sure the SNMP agent and NMS are configured correctly and the SNMP agent can send
notifications to the NMS correctly.
Configuring an event
Creating an event
1. Enter system view.
system-view
2. Create an event and enter its view.
snmp mib event owner event-owner name event-name
3. (Optional.) Configure a description for the event.
description text
By default, an event does not have a description.
object list owner group-owner name group-name
By default, no object list is specified for the notification action.
If you do not specify an object list for the notification action or the specified object list does not
contain variables, no variables will be carried in the notification.
Configuring a trigger
Creating a trigger and configuring its basic parameters
1. Enter system view.
system-view
2. Create a trigger and enter its view.
snmp mib event trigger owner trigger-owner name trigger-name
The trigger owner must be an existing SNMPv3 user.
3. (Optional.) Configure a description for the trigger.
description text
By default, a trigger does not have a description.
4. Set a sampling interval for the trigger.
frequency interval
By default, the sampling interval is 600 seconds.
Make sure the sampling interval is greater than or equal to the Event MIB minimum sampling
interval.
5. Specify a sampling method.
sample { absolute | delta }
The default sampling method is absolute.
6. Specify an object to be sampled by its OID.
oid object-identifier
By default, the OID is 0.0, which means no object is specified for the trigger.
If you execute this command multiple times, the most recent configuration takes effect.
7. (Optional.) Enable OID wildcarding.
wildcard oid
By default, OID wildcarding is disabled.
8. (Optional.) Configure a context for the monitored object.
context context-name
By default, no context is configured for a monitored object.
9. (Optional.) Enable context wildcarding.
wildcard context
By default, context wildcarding is disabled.
10. (Optional.) Specify the object list to be added to the triggered notification.
object list owner group-owner name group-name
By default, no object list is specified for a trigger.
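The two sampling methods in step 5 differ in the value handed to the trigger test: absolute uses each reading as-is, while delta uses the difference between consecutive readings. A minimal sketch of the difference (illustrative, not device code):

```python
def to_samples(readings, method):
    """Convert raw periodic readings into the values the trigger tests.

    absolute: each reading is used as-is.
    delta: the difference between consecutive readings is used, so the
    first test value is available only after the second sampling.
    """
    if method == "absolute":
        return list(readings)
    if method == "delta":
        return [b - a for a, b in zip(readings, readings[1:])]
    raise ValueError("unknown sampling method: %s" % method)

# A counter read at each sampling interval.
counter = [100, 150, 230, 230]
assert to_samples(counter, "absolute") == [100, 150, 230, 230]
assert to_samples(counter, "delta") == [50, 80, 0]
```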
snmp mib event trigger owner trigger-owner name trigger-name
3. Specify an existence test for the trigger and enter trigger-existence view.
test existence
By default, no test is configured for a trigger.
4. Specify an event for the existence trigger test.
event owner event-owner name event-name
By default, no event is specified for an existence trigger test.
5. (Optional.) Specify the object list to be added to the notification triggered by the test.
object list owner group-owner name group-name
By default, no object list is specified for an existence trigger test.
6. Specify an existence trigger test type.
type { absent | changed | present }
The default existence trigger test types are present and absent.
7. Specify an existence trigger test type for the first sampling.
startup { absent | present }
By default, both the present and absent existence trigger test types are allowed for the first
sampling.
8. Specify the falling threshold and the falling alarm event triggered when the sampled value is
smaller than or equal to the threshold.
falling { event owner event-owner name event-name | value
integer-value }
By default, the falling threshold is 0, and no falling alarm event is specified.
9. Specify the rising threshold and the rising alarm event triggered when the sampled value is
greater than or equal to the threshold.
rising { event owner event-owner name event-name | value
integer-value }
By default, the rising threshold is 0, and no rising alarm event is specified.
Display and maintenance commands for Event
MIB
Execute display commands in any view.
Task: Display Event MIB configuration and statistics.
Command: display snmp mib event
Procedure
[Sysname] snmp-agent context contextnameA
# Enable SNMP notifications for the Event MIB module. Specify the NMS at 192.168.1.26 as
the target host for the notifications.
[Sysname] snmp-agent trap enable event-mib
[Sysname] snmp-agent target-host trap address udp-domain 192.168.1.26 params
securityname owner1 v3
2. Configure the Event MIB global sampling parameters.
# Set the Event MIB minimum sampling interval to 50 seconds.
[Sysname] snmp mib event sample minimum 50
# Set the maximum number of object instances that can be concurrently sampled to 100.
[Sysname] snmp mib event sample instance maximum 100
3. Create and configure a trigger:
# Create a trigger and enter its view. Specify its owner as owner1 and its name as triggerA.
[Sysname] snmp mib event trigger owner owner1 name triggerA
# Set the sampling interval to 60 seconds. Make sure the sampling interval is greater than or
equal to the Event MIB minimum sampling interval.
[Sysname-trigger-owner1-triggerA] frequency 60
# Specify object OID 1.3.6.1.2.1.2.2.1.1 as the monitored object. Enable OID wildcarding.
[Sysname-trigger-owner1-triggerA] oid 1.3.6.1.2.1.2.2.1.1
[Sysname-trigger-owner1-triggerA] wildcard oid
# Configure context contextnameA for the monitored object and enable context wildcarding.
[Sysname-trigger-owner1-triggerA] context contextnameA
[Sysname-trigger-owner1-triggerA] wildcard context
# Specify the existence trigger test for the trigger.
[Sysname-trigger-owner1-triggerA] test existence
[Sysname-trigger-owner1-triggerA-existence] quit
# Enable trigger sampling.
[Sysname-trigger-owner1-triggerA] trigger enable
[Sysname-trigger-owner1-triggerA] quit
# Display information about the trigger with owner owner1 and name triggerA.
[Sysname] display snmp mib event trigger owner owner1 name triggerA
Trigger entry triggerA owned by owner1:
TriggerComment : N/A
TriggerTest : existence
TriggerSampleType : absoluteValue
TriggerValueID : 1.3.6.1.2.1.2.2.1.1<ifIndex>
TriggerValueIDWildcard : true
TriggerTargetTag : N/A
TriggerContextName : contextnameA
TriggerContextNameWildcard : true
TriggerFrequency(in seconds): 60
TriggerObjOwner : N/A
TriggerObjName : N/A
TriggerEnabled : true
Existence entry:
ExiTest : present | absent
ExiStartUp : present | absent
ExiObjOwner : N/A
ExiObjName : N/A
ExiEvtOwner : N/A
ExiEvtName : N/A
Procedure
[Sysname] snmp-agent target-host trap address udp-domain 192.168.1.26 params
securityname owner1 v3
2. Configure the Event MIB global sampling parameters.
# Set the Event MIB minimum sampling interval to 50 seconds.
[Sysname] snmp mib event sample minimum 50
# Set the maximum number of object instances that can be concurrently sampled to 100.
[Sysname] snmp mib event sample instance maximum 100
3. Configure Event MIB object lists objectA, objectB, and objectC.
[Sysname] snmp mib event object list owner owner1 name objectA 1 oid
1.3.6.1.4.1.25506.2.6.1.1.1.1.6.11
[Sysname] snmp mib event object list owner owner1 name objectB 1 oid
1.3.6.1.4.1.25506.2.6.1.1.1.1.7.11
[Sysname] snmp mib event object list owner owner1 name objectC 1 oid
1.3.6.1.4.1.25506.2.6.1.1.1.1.8.11
4. Configure an event:
# Create an event and enter its view. Specify its owner as owner1 and its name as EventA.
[Sysname] snmp mib event owner owner1 name EventA
# Specify the notification action for the event, and specify OID 1.3.6.1.4.1.25506.2.6.2.0.5
(hh3cEntityExtMemUsageThresholdNotification) as the notification to send.
[Sysname-event-owner1-EventA] action notification
[Sysname-event-owner1-EventA-notification] oid 1.3.6.1.4.1.25506.2.6.2.0.5
# Specify the object list with owner owner1 and name objectC to be added to the notification
when the notification action is triggered.
[Sysname-event-owner1-EventA-notification] object list owner owner1 name objectC
[Sysname-event-owner1-EventA-notification] quit
# Enable the event.
[Sysname-event-owner1-EventA] event enable
[Sysname-event-owner1-EventA] quit
5. Configure a trigger:
# Create a trigger and enter its view. Specify its owner as owner1 and its name as triggerA.
[Sysname] snmp mib event trigger owner owner1 name triggerA
# Set the sampling interval to 60 seconds. Make sure the interval is greater than or equal to the
global minimum sampling interval.
[Sysname-trigger-owner1-triggerA] frequency 60
# Specify object OID 1.3.6.1.4.1.25506.2.6.1.1.1.1.9.11 as the monitored object.
[Sysname-trigger-owner1-triggerA] oid 1.3.6.1.4.1.25506.2.6.1.1.1.1.9.11
# Specify the object list with owner owner1 and name objectA to be added to the notification
when the notification action is triggered.
[Sysname-trigger-owner1-triggerA] object list owner owner1 name objectA
# Configure a Boolean trigger test. Set its comparison type to greater and its reference value
to 10. For the test, specify the event with owner owner1 and name EventA, and the object list
with owner owner1 and name objectB.
[Sysname-trigger-owner1-triggerA] test boolean
[Sysname-trigger-owner1-triggerA-boolean] comparison greater
[Sysname-trigger-owner1-triggerA-boolean] value 10
[Sysname-trigger-owner1-triggerA-boolean] event owner owner1 name EventA
[Sysname-trigger-owner1-triggerA-boolean] object list owner owner1 name objectB
[Sysname-trigger-owner1-triggerA-boolean] quit
# Enable trigger sampling.
[Sysname-trigger-owner1-triggerA] trigger enable
[Sysname-trigger-owner1-triggerA] quit
TriggerValueID : 1.3.6.1.4.1.25506.2.6.1.1.1.1.9.11<hh3cEntityExt
MemUsageThreshold.11>
TriggerValueIDWildcard : false
TriggerTargetTag : N/A
TriggerContextName : N/A
TriggerContextNameWildcard : false
TriggerFrequency(in seconds): 60
TriggerObjOwner : owner1
TriggerObjName : objectA
TriggerEnabled : true
Boolean entry:
BoolCmp : greater
BoolValue : 10
BoolStartUp : true
BoolObjOwner : owner1
BoolObjName : objectB
BoolEvtOwner : owner1
BoolEvtName : EventA
# When the value of the monitored object 1.3.6.1.4.1.25506.2.6.1.1.1.1.9.11 becomes greater than
10, the NMS receives an mteTriggerFired notification.
Procedure
[Sysname] snmp-agent target-host trap address udp-domain 192.168.1.26 params
securityname owner1 v3
[Sysname] snmp-agent trap enable
2. Configure the Event MIB global sampling parameters.
# Set the Event MIB minimum sampling interval to 50 seconds.
[Sysname] snmp mib event sample minimum 50
# Set the maximum number of object instances that can be concurrently sampled to 10.
[Sysname] snmp mib event sample instance maximum 10
3. Create and configure a trigger:
# Create a trigger and enter its view. Specify its owner as owner1 and its name as triggerA.
[Sysname] snmp mib event trigger owner owner1 name triggerA
# Set the sampling interval to 60 seconds. Make sure the interval is greater than or equal to the
Event MIB minimum sampling interval.
[Sysname-trigger-owner1-triggerA] frequency 60
# Specify object OID 1.3.6.1.4.1.25506.2.6.1.1.1.1.7.11 as the monitored object.
[Sysname-trigger-owner1-triggerA] oid 1.3.6.1.4.1.25506.2.6.1.1.1.1.7.11
# Configure a threshold trigger test. Set the rising threshold to 80 and the falling threshold
to 10 for the test.
[Sysname-trigger-owner1-triggerA] test threshold
[Sysname-trigger-owner1-triggerA-threshold] rising value 80
[Sysname-trigger-owner1-triggerA-threshold] falling value 10
[Sysname-trigger-owner1-triggerA-threshold] quit
# Enable trigger sampling.
[Sysname-trigger-owner1-triggerA] trigger enable
[Sysname-trigger-owner1-triggerA] quit
TriggerFrequency(in seconds): 60
TriggerObjOwner : N/A
TriggerObjName : N/A
TriggerEnabled : true
Threshold entry:
ThresStartUp : risingOrFalling
ThresRising : 80
ThresFalling : 10
ThresDeltaRising : 0
ThresDeltaFalling : 0
ThresObjOwner : N/A
ThresObjName : N/A
ThresRisEvtOwner : N/A
ThresRisEvtName : N/A
ThresFalEvtOwner : N/A
ThresFalEvtName : N/A
ThresDeltaRisEvtOwner : N/A
ThresDeltaRisEvtName : N/A
ThresDeltaFalEvtOwner : N/A
ThresDeltaFalEvtName : N/A
Configuring NETCONF
About NETCONF
Network Configuration Protocol (NETCONF) is an XML-based network management protocol. It
provides programmable mechanisms to manage and configure network devices. Through
NETCONF, you can configure device parameters, retrieve parameter values, and collect statistics.
For a network that has devices from multiple vendors, you can develop a NETCONF-based NMS to
configure and manage the devices in a simple and effective way.
NETCONF structure
NETCONF has the following layers: content layer, operations layer, RPC layer, and transport
protocol layer.
Table 9 NETCONF layers and XML layers

NETCONF layer: Content
XML layer: Configuration data, status data, and statistics
Description: Contains a set of managed objects, which can be configuration data, status data, and statistics. For information about the operable data, see the NETCONF XML API reference for the device.
For information about the NETCONF operations supported by the device and the operable data, see
the NETCONF XML API reference for the device.
The following example shows a NETCONF message for getting all parameters of all interfaces on
the device:
<?xml version="1.0" encoding="utf-8"?>
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get-bulk>
<filter type="subtree">
<top xmlns="http://www.hp.com/netconf/data:1.0">
<Ifmgr>
<Interfaces>
<Interface/>
</Interfaces>
</Ifmgr>
</top>
</filter>
</get-bulk>
</rpc>
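A request like the one above can also be assembled programmatically on the client side. A sketch using Python's standard xml.etree.ElementTree, with the element names and namespaces taken from the example above (the client-side tooling is an assumption, not part of the device):

```python
import xml.etree.ElementTree as ET

NETCONF_NS = "urn:ietf:params:xml:ns:netconf:base:1.0"
DATA_NS = "http://www.hp.com/netconf/data:1.0"

def build_get_bulk(message_id):
    """Build the <get-bulk> RPC for all parameters of all interfaces."""
    rpc = ET.Element("rpc", {"message-id": str(message_id), "xmlns": NETCONF_NS})
    get_bulk = ET.SubElement(rpc, "get-bulk")
    flt = ET.SubElement(get_bulk, "filter", {"type": "subtree"})
    top = ET.SubElement(flt, "top", {"xmlns": DATA_NS})
    ifmgr = ET.SubElement(top, "Ifmgr")
    interfaces = ET.SubElement(ifmgr, "Interfaces")
    ET.SubElement(interfaces, "Interface")  # empty element: all interfaces
    return ET.tostring(rpc, encoding="unicode")

xml_text = build_get_bulk(100)
assert 'message-id="100"' in xml_text
assert "<get-bulk>" in xml_text and "Ifmgr" in xml_text
```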
</get-bulk>
</rpc>
</env:Body>
</env:Envelope>
FIPS compliance
The device supports the FIPS mode that complies with NIST FIPS 140-2 requirements. Support for
features, commands, and parameters might differ in FIPS mode (see Security Configuration Guide)
and non-FIPS mode.
2. (Optional.) Retrieving device configuration information
Retrieving device configuration and state information
Retrieving non-default settings
Retrieving NETCONF information
Retrieving YANG file content
Retrieving NETCONF session information
3. (Optional.) Filtering data
• Table-based filtering
• Column-based filtering
4. (Optional.) Locking or unlocking the running configuration
a. Locking the running configuration
b. Unlocking the running configuration
5. (Optional.) Modifying the configuration
6. (Optional.) Managing configuration files
Saving the running configuration
Loading the configuration
Rolling back the configuration
7. (Optional.) Performing CLI operations through NETCONF
8. (Optional.) Subscribing to events
Subscribing to syslog events
Subscribing to events monitored by NETCONF
Subscribing to events reported by modules
Canceling an event subscription
9. (Optional.) Terminating NETCONF sessions
10. (Optional.) Returning to the CLI
• Common namespace—The common namespace is shared by all modules. In a packet that
uses the common namespace, the namespace is indicated in the <top> element, and the
modules are listed under the <top> element.
Example:
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get-bulk>
<filter type="subtree">
<top xmlns="http://www.hp.com/netconf/data:1.0">
<Ifmgr>
<Interfaces>
</Interfaces>
</Ifmgr>
</top>
</filter>
</get-bulk>
</rpc>
• Module-specific namespace—Each module has its own namespace. A packet that uses a
module-specific namespace does not have the <top> element. The namespace follows the
module name.
Example:
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get-bulk>
<filter type="subtree">
<Ifmgr xmlns="http://www.hp.com/netconf/data:1.0-Ifmgr">
<Interfaces>
</Interfaces>
</Ifmgr>
</filter>
</get-bulk>
</rpc>
Keyword: agent
Description: Specifies the following sessions:
• NETCONF over SSH sessions.
• NETCONF over Telnet sessions.
• NETCONF over console sessions.
By default, the idle timeout time is 0, and the sessions never time out.

Keyword: soap
Description: Specifies the following sessions:
• NETCONF over SOAP over HTTP sessions.
• NETCONF over SOAP over HTTPS sessions.
The default setting is 10 minutes.
In non-FIPS mode:
netconf soap { http | https } acl { ipv4-acl-number | name
ipv4-acl-name }
In FIPS mode:
netconf soap https acl { ipv4-acl-number | name ipv4-acl-name }
By default, no IPv4 ACL is applied to control NETCONF over SOAP access.
Only clients permitted by the IPv4 ACL can establish NETCONF over SOAP sessions.
5. Specify a mandatory authentication domain for NETCONF users.
netconf soap domain domain-name
By default, no mandatory authentication domain is specified for NETCONF users. For
information about authentication domains, see Security Configuration Guide.
6. Use the custom user interface to establish a NETCONF over SOAP session with the device.
For information about the custom user interface, see the user guide for the interface.
Procedure
To enter XML view, execute the following command in user view:
xml
If the XML view prompt appears, the NETCONF over Telnet session or NETCONF over console
session is established successfully.
Exchanging capabilities
About this task
After a NETCONF session is established, the device sends its capabilities to the client. You must use
a hello message to send the capabilities of the client to the device before you can perform any other
NETCONF operations.
Hello message from the device to the client
<?xml version="1.0" encoding="UTF-8"?><hello
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"><capabilities><capability>urn:ietf:pa
rams:netconf:base:1.1</capability><capability>urn:ietf:params:netconf:writable-runnin
g</capability><capability>urn:ietf:params:netconf:capability:notification:1.0</capabi
lity><capability>urn:ietf:params:netconf:capability:validate:1.1</capability><capabil
ity>urn:ietf:params:netconf:capability:interleave:1.0</capability><capability>urn:hp:
params:netconf:capability:hp-netconf-ext:1.0</capability></capabilities><session-id>1
</session-id></hello>]]>]]>
The <capabilities> element carries the capabilities supported by the device. The supported
capabilities vary by device model.
The <session-id> element carries the unique ID assigned to the NETCONF session.
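The ]]>]]> sequence at the end of the device's hello is the NETCONF 1.0 end-of-message delimiter: a receiver splits the character stream on it to recover complete messages. A minimal client-side sketch (illustrative only):

```python
def split_messages(stream):
    """Split a NETCONF 1.0 character stream into complete messages.

    Messages are terminated by the ]]>]]> delimiter; a trailing partial
    message (no delimiter seen yet) is returned separately as the
    remainder to be completed by later reads.
    """
    parts = stream.split("]]>]]>")
    complete, remainder = parts[:-1], parts[-1]
    return [m.strip() for m in complete], remainder

msgs, rest = split_messages("<hello>...</hello>]]>]]><rpc-reply/>]]>]]><rpc")
assert msgs == ["<hello>...</hello>", "<rpc-reply/>"]
assert rest == "<rpc"  # incomplete message kept for the next read
```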
Hello message from the client to the device
After receiving the hello message from the device, copy the following hello message to notify the
device of the capabilities supported by the client:
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<capabilities>
<capability>
capability-set
</capability>
</capabilities>
</hello>
Item: capability-set
Description: Specifies a set of capabilities supported by the client. Use the <capability> and </capability> tags to enclose each user-defined capability set.
client. If the process for a relevant module is not started yet, the operation returns the following
message:
<?xml version="1.0"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<data/>
</rpc-reply>
Item: getoperation
Description: Operation name, get or get-bulk.

Item: filter
Description: Specifies the filtering conditions, such as the module name, submodule name, table name, and column name.
• If you specify a module name, the operation retrieves the data for the specified module. If you do not specify a module name, the operation retrieves the data for all modules.
• If you specify a submodule name, the operation retrieves the data for the specified submodule. If you do not specify a submodule name, the operation retrieves the data for all submodules.
• If you specify a table name, the operation retrieves the data for the specified table. If you do not specify a table name, the operation retrieves the data for all tables.
• If you specify only the index column, the operation retrieves the data for all columns. If you specify the index column and any other columns, the operation retrieves the data for the index column and the specified columns.

Item: index
Description: Specifies the index. If you do not specify this item, the index value starts with 1 by default.

Item: count
Description: Specifies the data entry quantity. The count attribute complies with the following rules:
• The count attribute can be placed in the module node and table node. In other nodes, it cannot be resolved.
• When the count attribute is placed in the module node, a descendant node inherits this count attribute if the descendant node does not contain the count attribute.
• The <get-bulk> operation retrieves all the rest data entries starting from the data entry next to the one with the specified index if either of the following conditions occurs:
  - You do not specify the count attribute.
  - The number of matching data entries is less than the value of the count attribute.
The following <get-bulk> message example specifies the count and index attributes:
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
xmlns:xc="http://www.hp.com/netconf/base:1.0">
<get-bulk>
<filter type="subtree">
<top xmlns="http://www.hp.com/netconf/data:1.0"
xmlns:base="http://www.hp.com/netconf/base:1.0">
<Syslog>
<Logs xc:count="5">
<Log>
<Index>10</Index>
</Log>
</Logs>
</Syslog>
</top>
</filter>
</get-bulk>
</rpc>
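Under the index and count rules above, this message retrieves up to five log entries starting after the entry with index 10. A sketch of the selection rule (the sample table is illustrative):

```python
def get_bulk_select(entries, index=None, count=None):
    """Select table entries per the <get-bulk> index/count rules.

    Retrieval starts at the entry after the one with the given index
    (from the beginning of the table when no index is given) and stops
    after count entries, or at the end of the table when count is
    omitted or fewer entries remain.
    """
    keys = sorted(entries)
    if index is not None:
        keys = [k for k in keys if k > index]
    if count is not None:
        keys = keys[:count]
    return keys

logs = {i: "log %d" % i for i in range(1, 21)}  # 20 log entries
# count=5, index=10: the five entries after index 10.
assert get_bulk_select(logs, index=10, count=5) == [11, 12, 13, 14, 15]
# No count: all remaining entries after the index.
assert get_bulk_select(logs, index=18) == [19, 20]
```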
When retrieving interface information, the device cannot identify whether an integer value for the
<IfIndex> element represents an interface name or index. When retrieving VPN instance information,
the device cannot identify whether an integer value for the <vrfindex> element represents a VPN
name or index. To resolve the issue, you can use the valuetype attribute to specify the value type.
The valuetype attribute has the following values:
Value: name
Description: The element is carrying a name.

Value: index
Description: The element is carrying an index.

Value: auto
Description: Default value. The device uses the value of the element as a name for information matching. If no match is found, the device uses the value as an index for interface or VPN instance information matching.
The following example specifies an index-type value for the <IfIndex> element:
<rpc xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<getoperation>
<filter>
<top xmlns="http://www.hp.com/netconf/config:1.0"
xmlns:base="http://www.hp.com/netconf/base:1.0">
<VLAN>
<TrunkInterfaces>
<Interface>
<IfIndex base:valuetype="index">1</IfIndex>
</Interface>
</TrunkInterfaces>
</VLAN>
</top>
</filter >
</getoperation>
</rpc>
If the <get> or <get-bulk> operation succeeds, the device returns the retrieved data in the following
format:
<?xml version="1.0"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<data>
Device state and configuration data
</data>
</rpc-reply>
If the <get-config> or <get-bulk-config> operation succeeds, the device returns the retrieved data in
the following format:
<?xml version="1.0"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<data>
Data matching the specified filter
</data>
</rpc-reply>
If you do not specify a value for getType, the retrieval operation retrieves all NETCONF information.
The value for getType can be one of the following operations:
Operation Description
capabilities Retrieves device capabilities.
schemas Retrieves the list of the YANG file names from the device.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get-schema xmlns='urn:ietf:params:xml:ns:yang:ietf-netconf-monitoring'>
<identifier>syslog-data</identifier>
<version>2019-01-01</version>
<format>yang</format>
</get-schema>
</rpc>
If the <get-schema> operation succeeds, the device returns a response in the following format:
<?xml version="1.0"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<data>
Content of the specified YANG file
</data>
</rpc-reply>
If the <get-sessions> operation succeeds, the device returns a response in the following format:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get-sessions>
<Session>
<SessionID>Configuration session ID</SessionID>
<Line>Line information</Line>
<UserName>Name of the user creating the session</UserName>
<Since>Time when the session was created</Since>
<LockHeld>Whether the session holds a lock</LockHeld>
</Session>
</get-sessions>
</rpc-reply>
<capabilities>
<capability>urn:ietf:params:netconf:base:1.0</capability>
</capabilities>
</hello>
Example: Retrieving non-default configuration data
Network configuration
Retrieve all non-default configuration data.
Procedure
# Enter XML view.
<Sysname> xml
</Interface>
<Interface>
<IfIndex>1313</IfIndex>
<VlanType>2</VlanType>
</Interface>
</Interfaces>
</Ifmgr>
<Syslog>
<LogBuffer>
<BufferSize>120</BufferSize>
</LogBuffer>
</Syslog>
<System>
<Device>
<SysName>Sysname</SysName>
<TimeZone>
<Zone>+11:44</Zone>
<ZoneName>beijing</ZoneName>
</TimeZone>
</Device>
</System>
</top>
</data>
</rpc-reply>
<top xmlns="http://www.hp.com/netconf/config:1.0">
<Syslog/>
</top>
</filter>
</get-config>
</rpc>
# Copy the following message to the client to exchange capabilities with the device:
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<capabilities>
<capability>
urn:ietf:params:netconf:base:1.0
</capability>
</capabilities>
</hello>
# Copy the following message to the client to get the current NETCONF session information on the
device:
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get-sessions/>
</rpc>
<get-sessions>
<Session>
<SessionID>1</SessionID>
<Line>vty0</Line>
<UserName></UserName>
<Since>2019-01-07T00:24:57</Since>
<LockHeld>false</LockHeld>
</Session>
</get-sessions>
</rpc-reply>
Filtering data
About data filtering
You can define a filter to filter information when you perform a <get>, <get-bulk>, <get-config>, or
<get-bulk-config> operation. Data filtering includes the following types:
• Table-based filtering—Filters table information.
• Column-based filtering—Filters information for a single column.
Table-based filtering
About this task
The namespace is http://www.hp.com/netconf/base:1.0, and the attribute name is filter. For
information about support for table-based matching, see the NETCONF XML API references.
# Copy the following text to the client to retrieve data for the routes with IP address 1.1.1.0 and
mask length 24 or longer from the IPv4 routing table:
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
xmlns:hp="http://www.hp.com/netconf/base:1.0">
<get>
<filter type="subtree">
<top xmlns="http://www.hp.com/netconf/data:1.0">
<Route>
<Ipv4Routes>
<RouteEntry hp:filter="IP 1.1.1.0 MaskLen 24 longer"/>
</Ipv4Routes>
</Route>
</top>
</filter>
</get>
</rpc>
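Requests like the one above can also be generated programmatically. The following Python sketch uses only the standard library to build the same table-based filter; the helper name build_route_filter_rpc is illustrative, not part of any official NETCONF client API.

```python
import xml.etree.ElementTree as ET

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"
HP_BASE = "http://www.hp.com/netconf/base:1.0"
HP_DATA = "http://www.hp.com/netconf/data:1.0"

def build_route_filter_rpc(ip, mask_len):
    """Build a <get> RPC carrying a table-based route filter (illustrative helper)."""
    ET.register_namespace("hp", HP_BASE)
    rpc = ET.Element(f"{{{NC}}}rpc", {"message-id": "100"})
    get = ET.SubElement(rpc, f"{{{NC}}}get")
    flt = ET.SubElement(get, f"{{{NC}}}filter", {"type": "subtree"})
    top = ET.SubElement(flt, f"{{{HP_DATA}}}top")
    route = ET.SubElement(top, f"{{{HP_DATA}}}Route")
    v4 = ET.SubElement(route, f"{{{HP_DATA}}}Ipv4Routes")
    entry = ET.SubElement(v4, f"{{{HP_DATA}}}RouteEntry")
    # The hp:filter attribute carries the table-based match expression.
    entry.set(f"{{{HP_BASE}}}filter", f"IP {ip} MaskLen {mask_len} longer")
    return ET.tostring(rpc, encoding="unicode")

msg = build_route_filter_rpc("1.1.1.0", 24)
```

The serialized message can then be sent over any NETCONF transport the device supports.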
Column-based filtering
About this task
Column-based filtering includes full match filtering, regular expression match filtering, and
conditional match filtering. Full match filtering has the highest priority and conditional match filtering
has the lowest priority. When more than one filtering criterion is specified, the one with the highest
priority takes effect.
Full match filtering
You can specify an element value in an XML message to implement full match filtering. If multiple
element values are provided, the system returns the data that matches all the specified values.
# Copy the following text to the client to retrieve configuration data of all interfaces in UP state:
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get>
<filter type="subtree">
<top xmlns="http://www.hp.com/netconf/data:1.0">
<Ifmgr>
<Interfaces>
<Interface>
<AdminStatus>1</AdminStatus>
</Interface>
</Interfaces>
</Ifmgr>
</top>
</filter>
</get>
</rpc>
You can also implement full match filtering by specifying an attribute whose name is the same as a
column name of the table. The system returns only the configuration data that matches this
attribute. The following XML message is equivalent to the element-value-based full match filtering
above:
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get>
<filter type="subtree">
<top xmlns="http://www.hp.com/netconf/data:1.0"
     xmlns:data="http://www.hp.com/netconf/data:1.0">
<Ifmgr>
<Interfaces>
<Interface data:AdminStatus="1"/>
</Interfaces>
</Ifmgr>
</top>
</filter>
</get>
</rpc>
The above examples show that both element-value-based full match filtering and
attribute-name-based full match filtering can retrieve the same index and column information for all
interfaces in up state.
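The equivalence of the two forms can be illustrated client-side. This Python sketch (standard library only) builds both the element-value form and the attribute form and shows that they encode the same predicate, AdminStatus equal to 1:

```python
import xml.etree.ElementTree as ET

DATA = "http://www.hp.com/netconf/data:1.0"

# Element-value form: <Interface><AdminStatus>1</AdminStatus></Interface>
iface_elem = ET.Element(f"{{{DATA}}}Interface")
ET.SubElement(iface_elem, f"{{{DATA}}}AdminStatus").text = "1"

# Attribute form: <Interface data:AdminStatus="1"/>
iface_attr = ET.Element(f"{{{DATA}}}Interface")
iface_attr.set(f"{{{DATA}}}AdminStatus", "1")

# Both carry the same match value for the AdminStatus column.
value_form = iface_elem.find(f"{{{DATA}}}AdminStatus").text
attr_form = iface_attr.get(f"{{{DATA}}}AdminStatus")
```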
Regular expression match filtering
To implement complex character-based data filtering, you can add a regExp attribute for a specific
element.
The supported data types include integer, date and time, character string, IPv4 address, IPv4 mask,
IPv6 address, MAC address, OID, and time zone.
# Copy the following text to the client to retrieve the descriptions of interfaces whose descriptions
contain only uppercase letters A through Z:
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
xmlns:hp="http://www.hp.com/netconf/base:1.0">
<get-config>
<source>
<running/>
</source>
<filter type="subtree">
<top xmlns="http://www.hp.com/netconf/config:1.0">
<Ifmgr>
<Interfaces>
<Interface>
<Description hp:regExp="^[A-Z]*$"/>
</Interface>
</Interfaces>
</Ifmgr>
</top>
</filter>
</get-config>
</rpc>
Conditional match filtering
To implement conditional match filtering, add a match attribute for a specific element. The
following operators are supported:
Operation     Operator                   Remarks
Equal         match="equal:value"        Equal to the specified value. The supported data
                                         types include date, digit, character string, OID,
                                         and BOOL.
Not equal     match="notEqual:value"     Not equal to the specified value. The supported
                                         data types include date, digit, character string,
                                         OID, and BOOL.
Include       match="include:string"     Includes the specified string. The only supported
                                         data type is character string.
Not include   match="exclude:string"     Excludes the specified string. The only supported
                                         data type is character string.
Start with    match="startWith:string"   Starts with the specified string. The supported
                                         data types include character string and OID.
End with      match="endWith:string"     Ends with the specified string. The only supported
                                         data type is character string.
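Attaching a conditional match attribute to an element is mechanical, so it is easy to wrap in a helper. A minimal Python sketch (standard library only; the helper name set_match is ours, and the namespace is the HP base namespace the guide specifies for these attributes):

```python
import xml.etree.ElementTree as ET

HP_BASE = "http://www.hp.com/netconf/base:1.0"

def set_match(elem, operator, value):
    """Attach a conditional-match attribute such as match="notEqual:120"."""
    elem.set(f"{{{HP_BASE}}}match", f"{operator}:{value}")
    return elem

# Example: filter log buffers whose size is not 120.
buf = set_match(ET.Element("BufferSize"), "notEqual", "120")
```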
# Copy the following text to the client to retrieve extension information about entities whose CPU
usage is higher than 50%:
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
xmlns:hp="http://www.hp.com/netconf/base:1.0">
<get>
<filter type="subtree">
<top xmlns="http://www.hp.com/netconf/data:1.0">
<Device>
<ExtPhysicalEntities>
<Entity>
<CpuUsage hp:match="more:50"></CpuUsage>
</Entity>
</ExtPhysicalEntities>
</Device>
</top>
</filter>
</get>
</rpc>
</capabilities>
</hello>
# Retrieve all data whose Description column includes the string Gig in the Interfaces table under
the Ifmgr module.
<?xml version="1.0"?>
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
xmlns:hp="http://www.hp.com/netconf/base:1.0">
<get>
<filter type="subtree">
<top xmlns="http://www.hp.com/netconf/data:1.0">
<Ifmgr>
<Interfaces>
<Interface>
<Description hp:regExp="(Gig)+"/>
</Interface>
</Interfaces>
</Ifmgr>
</top>
</filter>
</get>
</rpc>
Example: Filtering data by conditional match
Network configuration
Retrieve data in the Name column with the ifindex value not less than 5000 in the Interfaces table
under the Ifmgr module.
Procedure
# Enter XML view.
<Sysname> xml
# Retrieve data in the Name column with the ifindex value not less than 5000 in the Interfaces table
under the Ifmgr module.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
xmlns:hp="http://www.hp.com/netconf/base:1.0">
<get>
<filter type="subtree">
<top xmlns="http://www.hp.com/netconf/data:1.0">
<Ifmgr>
<Interfaces>
<Interface>
<IfIndex hp:match="notLess:5000"/>
<Name/>
</Interface>
</Interfaces>
</Ifmgr>
</top>
</filter>
</get>
</rpc>
</Interface>
</Interfaces>
</Ifmgr>
</top>
</data>
</rpc-reply>
If the <lock> operation succeeds, the device returns a response in the following format:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>
<target>
<running/>
</target>
</unlock>
</rpc>
If the <unlock> operation succeeds, the device returns a response in the following format:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>
If another client sends a lock request, the device returns the following response:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<rpc-error>
<error-type>protocol</error-type>
<error-tag>lock-denied</error-tag>
<error-severity>error</error-severity>
<error-message xml:lang="en"> Lock failed because the NETCONF lock is held by another
session.</error-message>
<error-info>
<session-id>1</session-id>
</error-info>
</rpc-error>
</rpc-reply>
The output shows that the <lock> operation failed because the client with session ID 1 is holding
the lock.
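A client can detect this condition by parsing the <rpc-error> element of the reply. The following Python sketch (standard library only; the sample reply is abridged from the output above) extracts the error tag and the session that holds the lock:

```python
import xml.etree.ElementTree as ET

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"

reply = """<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<rpc-error>
  <error-type>protocol</error-type>
  <error-tag>lock-denied</error-tag>
  <error-severity>error</error-severity>
  <error-info><session-id>1</session-id></error-info>
</rpc-error>
</rpc-reply>"""

root = ET.fromstring(reply)
# Elements inherit the default NETCONF namespace, so qualify the search paths.
error_tag = root.findtext(f".//{{{NC}}}error-tag")
holder = root.findtext(f".//{{{NC}}}session-id")
```

If error_tag is lock-denied, the client can wait and retry, or ask an administrator to terminate the session identified by holder.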
Procedure
# Copy the following text to perform the <edit-config> operation:
<?xml version="1.0"?>
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<edit-config>
<target><running></running></target>
<error-option>
Specify the error option value
</error-option>
<config>
<top xmlns="http://www.hp.com/netconf/config:1.0">
Specify the module name, submodule name, table name, and column name
</top>
</config>
</edit-config>
</rpc>
The <error-option> element indicates the action to be taken in response to an error that occurs
during the operation. It has the following values:
Value Description
stop-on-error Stops the <edit-config> operation.
continue-on-error Continues the <edit-config> operation.
rollback-on-error     Rolls back the configuration to the configuration before the
                      <edit-config> operation was performed.
                      By default, an <edit-config> operation cannot be performed while the
                      device is rolling back the configuration. If the rollback time
                      exceeds the maximum time that the client can wait, the client
                      determines that the <edit-config> operation has failed and performs
                      the operation again. Because the previous rollback is not completed,
                      the operation triggers another rollback. If this process repeats
                      itself, CPU and memory resources will be exhausted and the device
                      will reboot.
                      To allow an <edit-config> operation to be performed during a
                      configuration rollback, perform an <action> operation to change the
                      value of the DisableEditConfigWhenRollback attribute to false.
If the <edit-config> operation succeeds, the device returns a response in the following format:
<?xml version="1.0"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>
You can also perform the <get> operation to verify that the current element value is the same as the
value specified through the <edit-config> operation.
# Change the log buffer size for the Syslog module to 512.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
xmlns:web="urn:ietf:params:xml:ns:netconf:base:1.0">
<edit-config>
<target>
<running/>
</target>
<config>
<top xmlns="http://www.hp.com/netconf/config:1.0" web:operation="merge">
<Syslog>
<LogBuffer>
<BufferSize>512</BufferSize>
</LogBuffer>
</Syslog>
</top>
</config>
</edit-config>
</rpc>
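The edit-config message above can be assembled programmatically. This Python sketch (standard library only; the helper name build_set_log_buffer is illustrative) builds the same merge request against the running datastore:

```python
import xml.etree.ElementTree as ET

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"
CFG = "http://www.hp.com/netconf/config:1.0"

def build_set_log_buffer(size):
    """Build an <edit-config> that merges a new Syslog log buffer size."""
    rpc = ET.Element(f"{{{NC}}}rpc", {"message-id": "100"})
    edit = ET.SubElement(rpc, f"{{{NC}}}edit-config")
    target = ET.SubElement(edit, f"{{{NC}}}target")
    ET.SubElement(target, f"{{{NC}}}running")
    config = ET.SubElement(edit, f"{{{NC}}}config")
    top = ET.SubElement(config, f"{{{CFG}}}top")
    # The operation attribute lives in the NETCONF base namespace.
    top.set(f"{{{NC}}}operation", "merge")
    syslog = ET.SubElement(top, f"{{{CFG}}}Syslog")
    logbuf = ET.SubElement(syslog, f"{{{CFG}}}LogBuffer")
    ET.SubElement(logbuf, f"{{{CFG}}}BufferSize").text = str(size)
    return ET.tostring(rpc, encoding="unicode")

msg = build_set_log_buffer(512)
```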
Procedure
# Copy the following text to the client:
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<save OverWrite="false" Binary-only="false">
<file>Configuration file name</file>
</save>
</rpc>
Item          Description
file          Specifies a .cfg configuration file by its name. The name must start with
              the storage medium name. If you specify the file column, a file name is
              required.
              If the Binary-only attribute is false, the device saves the running
              configuration to both the text and binary configuration files.
              • If the specified .cfg file does not exist, the device creates the binary
                and text configuration files to save the running configuration.
              • If you do not specify the file column, the device saves the running
                configuration to the text and binary next-startup configuration files.
OverWrite     Determines whether to overwrite the specified file if the file already
              exists. The following values are available:
              • true—Overwrite the file.
              • false—Do not overwrite the file. The running configuration cannot be
                saved, and the system displays an error message.
              The default value is true.
Binary-only   Determines whether to save the running configuration only to the binary
              configuration file. The following values are available:
              • true—Save the running configuration only to the binary configuration
                file.
                If file specifies a nonexistent file, the <save> operation fails.
                If you do not specify the file column, the device identifies whether the
                main next-startup configuration file is specified. If yes, the device
                saves the running configuration to the corresponding binary file. If
                not, the <save> operation fails.
              • false—Save the running configuration to both the text and binary
                configuration files. For more information, see the description for the
                file column in this table.
                Saving the running configuration to both the text and binary
                configuration files requires more time.
              The default value is false.
If the <save> operation succeeds, the device returns a response in the following format:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>
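Building a <save> request with the OverWrite and Binary-only attributes can be sketched as follows in Python (standard library only; the helper name build_save is illustrative, and the defaults mirror the tables above):

```python
import xml.etree.ElementTree as ET

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"

def build_save(filename=None, overwrite=True, binary_only=False):
    """Build a <save> RPC; omit filename to save to the next-startup files."""
    rpc = ET.Element(f"{{{NC}}}rpc", {"message-id": "100"})
    save = ET.SubElement(rpc, f"{{{NC}}}save", {
        "OverWrite": "true" if overwrite else "false",
        "Binary-only": "true" if binary_only else "false",
    })
    if filename is not None:
        # The file name must start with the storage medium name.
        ET.SubElement(save, f"{{{NC}}}file").text = filename
    return ET.tostring(rpc, encoding="unicode")

msg = build_save("flash:/startup.cfg", overwrite=False)
```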
<ok/>
</rpc-reply>
Procedure
# Copy the following text to the client to load a configuration file for the device:
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<load>
<file>Configuration file name</file>
</load>
</rpc>
The configuration file name must start with the storage medium name and end with the .cfg extension.
If the <load> operation succeeds, the device returns a response in the following format:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>
Rolling back the configuration based on a configuration file
# Copy the following text to the client to roll back the running configuration to the configuration in a
configuration file:
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<rollback>
<file>Specify the configuration file name</file>
</rollback>
</rpc>
If the <rollback> operation succeeds, the device returns a response in the following format:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>
Item              Description
confirm-timeout   Specifies the rollback idle timeout time in the range of 1 to 65535
                  seconds. The default is 600 seconds. This item is optional.
If the <save-point/begin> operation succeeds, the device returns a response in the following
format:
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<data>
<save-point>
<commit>
<commit-id>1</commit-id>
</commit>
</save-point>
</data>
</rpc-reply>
3. Modify the running configuration. For more information, see "Modifying the configuration."
4. Mark the rollback point.
The system supports a maximum of 50 rollback points. If the limit is reached, specify the force
attribute for the <save-point>/<commit> operation to overwrite the earliest rollback point.
# Copy the following text to the client to perform a <save-point>/<commit> operation:
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<save-point>
<commit>
<label>SUPPORT VLAN</label>
<comment>vlan 1 to 100 and interfaces.</comment>
</commit>
</save-point>
</rpc>
The <label> and <comment> elements are optional.
If the <save-point>/<commit> operation succeeds, the device returns a response in the
following format:
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<data>
<save-point>
<commit>
<commit-id>2</commit-id>
</commit>
</save-point>
</data>
</rpc-reply>
5. Retrieve the rollback point configuration records.
The following text shows the message format for a <save-point/get-commits> request:
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<save-point>
<get-commits>
<commit-id/>
<commit-index/>
<commit-label/>
</get-commits>
</save-point>
</rpc>
Specify the <commit-id/>, <commit-index/>, or <commit-label/> element to retrieve the
specified rollback point configuration records. If no element is specified, the operation retrieves
records for all rollback point settings.
# Copy the following text to the client to perform a <save-point>/<get-commits> operation:
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<save-point>
<get-commits>
<commit-label>SUPPORT VLAN</commit-label>
</get-commits>
</save-point>
</rpc>
If the <save-point/get-commits> operation succeeds, the device returns a response in the
following format:
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<data>
<save-point>
<commit-information>
<CommitID>2</CommitID>
<TimeStamp>Sun Jan 1 11:30:28 2019</TimeStamp>
<UserName>test</UserName>
<Label>SUPPORT VLAN</Label>
</commit-information>
</save-point>
</data>
</rpc-reply>
6. Retrieve the configuration data corresponding to a rollback point.
The following text shows the message format for a <save-point>/<get-commit-information>
request:
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<save-point>
<get-commit-information>
<commit-information>
<commit-id/>
<commit-index/>
<commit-label/>
</commit-information>
<compare-information>
<commit-id/>
<commit-index/>
<commit-label/>
</compare-information>
</get-commit-information>
</save-point>
</rpc>
Specify one of the following elements: <commit-id/>, <commit-index/>, and <commit-label/>.
The <compare-information> element is optional.
Item Description
commit-id Uniquely identifies a rollback point.
# Copy the following text to the client to perform a <save-point>/<get-commit-information>
operation:
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<save-point>
<get-commit-information>
<commit-information>
<commit-label>SUPPORT VLAN</commit-label>
</commit-information>
</get-commit-information>
</save-point>
</rpc>
If the <save-point/get-commit-information> operation succeeds, the device returns a response
in the following format:
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<data>
<save-point>
<commit-information>
<content>
…
interface vlan 1
…
</content>
</commit-information>
</save-point>
</data>
</rpc-reply>
7. Roll back the configuration based on a rollback point.
The configuration can also be automatically rolled back based on the most recently configured
rollback point when the NETCONF session idle timer expires.
# Copy the following text to the client to perform a <save-point>/<rollback> operation:
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<save-point>
<rollback>
<commit-id/>
<commit-index/>
<commit-label/>
</rollback>
</save-point>
</rpc>
Specify one of the following elements: <commit-id/>, <commit-index/>, and <commit-label/>. If
no element is specified, the operation rolls back configuration based on the most recently
configured rollback point.
Item Description
commit-id Uniquely identifies a rollback point.
If the <save-point/rollback> operation succeeds, the device returns a response in the following
format:
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok></ok>
</rpc-reply>
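Selecting a rollback point by label can be sketched client-side as follows (Python standard library only; the helper name build_rollback_by_label is illustrative):

```python
import xml.etree.ElementTree as ET

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"

def build_rollback_by_label(label):
    """Build a <save-point>/<rollback> RPC that targets a labeled rollback point."""
    rpc = ET.Element(f"{{{NC}}}rpc", {"message-id": "100"})
    sp = ET.SubElement(rpc, f"{{{NC}}}save-point")
    rb = ET.SubElement(sp, f"{{{NC}}}rollback")
    # Exactly one of commit-id, commit-index, or commit-label should be given.
    ET.SubElement(rb, f"{{{NC}}}commit-label").text = label
    return ET.tostring(rpc, encoding="unicode")

msg = build_rollback_by_label("SUPPORT VLAN")
```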
8. End the rollback configuration.
# Copy the following text to the client to perform a <save-point>/<end> operation:
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<save-point>
<end/>
</save-point>
</rpc>
If the <save-point/end> operation succeeds, the device returns a response in the following
format:
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>
9. Unlock the configuration. For more information, see "Locking or unlocking the running
configuration."
Procedure
# Copy the following text to the client to execute the commands:
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<CLI>
<Execution>
Commands
</Execution>
</CLI>
</rpc>
The <Execution> element can contain multiple commands, with one command on one line.
If the CLI operation succeeds, the device returns a response in the following format:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<CLI>
<Execution>
<![CDATA[Responses to the commands]]>
</Execution>
</CLI>
</rpc-reply>
# Copy the following text to the client to execute the display vlan command:
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<CLI>
<Execution>
display vlan
</Execution>
</CLI>
</rpc>
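Because the <Execution> element accepts multiple commands with one command per line, a client can join a command list with newlines. A Python sketch (standard library only; the helper name build_cli_execution is illustrative):

```python
import xml.etree.ElementTree as ET

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"

def build_cli_execution(commands):
    """Wrap user-view commands (one per line) in a <CLI>/<Execution> RPC."""
    rpc = ET.Element(f"{{{NC}}}rpc", {"message-id": "100"})
    cli = ET.SubElement(rpc, f"{{{NC}}}CLI")
    execution = ET.SubElement(cli, f"{{{NC}}}Execution")
    execution.text = "\n".join(commands)
    return ET.tostring(rpc, encoding="unicode")

msg = build_cli_execution(["display vlan", "display clock"])
```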
Subscribing to events
About event subscription
When an event takes place on the device, the device sends information about the event to
NETCONF clients that have subscribed to the event.
Item        Description
stream      Specifies the event stream. The name for the syslog event stream is
            NETCONF.
event       Specifies the event. For information about the events to which you can
            subscribe, see the system log message references for the device.
If the subscription succeeds, the device returns a response in the following format:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="100" xmlns:netconf="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>
If the subscription fails, the device returns an error message in the following format:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<rpc-error>
<error-type>error-type</error-type>
<error-tag>error-tag</error-tag>
<error-severity>error-severity</error-severity>
<error-message xml:lang="en">error-message</error-message>
</rpc-error>
</rpc-reply>
Item        Description
stream      Specifies the event stream. The name for the event stream is
            NETCONF_MONITOR_EXTENSION.
interval    Specifies the interval for NETCONF to obtain events that match the
            subscription condition. The value range is 1 to 4294967 seconds. The
            default value is 300 seconds.
If the subscription succeeds, the device returns a response in the following format:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="100" xmlns:netconf="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>
<startTime>start-time</startTime>
<stopTime>stop-time</stopTime>
</create-subscription>
</rpc>
Item        Description
stream      Specifies the event stream. Supported event streams vary by device model.
If the subscription succeeds, the device returns a response in the following format:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="100" xmlns:netconf="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>
Item Description
stream Specifies the event stream.
If the cancellation succeeds, the device returns a response in the following format:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="100">
<ok/>
</rpc-reply>
If the subscription to be canceled does not exist, the device returns an error message in the following
format:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<rpc-error>
<error-type>error-type</error-type>
<error-tag>error-tag</error-tag>
<error-severity>error-severity</error-severity>
<error-message xml:lang="en">The subscription stream to be canceled doesn't exist: Stream
name=XXX_STREAM.</error-message>
</rpc-error>
</rpc-reply>
# When another client (192.168.100.130) logs in to the device, the device sends a notification to the
client that has subscribed to all events:
<?xml version="1.0" encoding="UTF-8"?>
<notification xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
<eventTime>2019-01-04T12:30:52</eventTime>
<event xmlns="http://www.hp.com/netconf/event:1.0">
<Group>SHELL</Group>
<Code>SHELL_LOGIN</Code>
<Slot>1</Slot>
<Severity>Notification</Severity>
<context>VTY logged in from 192.168.100.130.</context>
</event>
</notification>
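A subscribed client receives such notifications asynchronously and can parse them with any XML library. This Python sketch (standard library only; the sample is abridged from the notification above) extracts the event time and event code:

```python
import xml.etree.ElementTree as ET

NOTIF = "urn:ietf:params:xml:ns:netconf:notification:1.0"
EVENT = "http://www.hp.com/netconf/event:1.0"

sample = """<notification xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
<eventTime>2019-01-04T12:30:52</eventTime>
<event xmlns="http://www.hp.com/netconf/event:1.0">
  <Group>SHELL</Group>
  <Code>SHELL_LOGIN</Code>
  <Severity>Notification</Severity>
  <context>VTY logged in from 192.168.100.130.</context>
</event>
</notification>"""

root = ET.fromstring(sample)
# eventTime is in the notification namespace; the event body is in the HP namespace.
event_time = root.findtext(f"{{{NOTIF}}}eventTime")
code = root.findtext(f".//{{{EVENT}}}Code")
```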
Procedure
# Copy the following message to the client to terminate a NETCONF session:
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<kill-session>
<session-id>
Specified session-ID
</session-id>
</kill-session>
</rpc>
If the <kill-session> operation succeeds, the device returns a response in the following format:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>
urn:ietf:params:netconf:base:1.0
</capability>
</capabilities>
</hello>
When the device receives the close-session request, it sends the following response and returns to
CLI's user view:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>
Supported NETCONF operations
This chapter describes NETCONF operations available with Comware 7.
action
Usage guidelines
This operation sends a non-configuration operation instruction to the device, for example, a reset
instruction.
XML example
# Clear statistics information for all interfaces.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<action>
<top xmlns="http://www.hp.com/netconf/action:1.0">
<Ifmgr>
<ClearAllIfStatistics>
<Clear>
</Clear>
</ClearAllIfStatistics>
</Ifmgr>
</top>
</action>
</rpc>
CLI
Usage guidelines
This operation executes CLI commands.
A request message encloses commands in the <CLI> element. A response message encloses the
command output in the <CLI> element.
You can use the following elements to execute commands:
• Execution—Executes commands in user view.
• Configuration—Executes commands in system view. To execute commands in a lower-level
view of the system view, use the <Configuration> element to enter the view first.
To use this element, include the exec-use-channel attribute and specify a value for the
attribute:
false—Executes commands without using a channel.
true—Executes commands by using a temporary channel. The channel is automatically
closed after the execution.
persist—Executes commands by using the persistent channel for the session.
To use the persistent channel, first perform an <Open-channel> operation to open the
persistent channel. If you do not do so, the system will automatically open the persistent
channel.
After using the persistent channel, perform a <Close-channel> operation to close the
channel and return to system view. If you do not perform a <Close-channel> operation, the
system stays in the view and will execute subsequent commands in the view.
You can also specify the error-when-rollback attribute in the <Configuration> element to
indicate whether CLI operations are allowed during an error-triggered configuration rollback.
This attribute takes effect only if the value of the <error-option> element in <edit-config>
operations is set to rollback-on-error. It has the following values:
true—Rejects CLI operation requests and returns error messages.
false (the default)—Allows CLI operations.
For CLI operations to be correctly performed, set the value of the error-when-rollback attribute
to true.
A NETCONF session supports only one persistent channel but supports multiple temporary
channels.
This operation does not support executing interactive CLI commands.
You cannot execute the quit command by using a channel to exit user view.
XML example
# Execute the vlan 3 command in system view without using a channel.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<CLI>
<Configuration exec-use-channel="false">vlan 3</Configuration>
</CLI>
</rpc>
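The <Configuration> element and its exec-use-channel attribute can be generated the same way. A Python sketch (standard library only; the helper name build_cli_configuration is illustrative, and the channel values are the ones listed above):

```python
import xml.etree.ElementTree as ET

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"

def build_cli_configuration(command, channel="false"):
    """Build a <CLI>/<Configuration> RPC.

    channel may be "false", "true", or "persist", per the guide.
    """
    rpc = ET.Element(f"{{{NC}}}rpc", {"message-id": "100"})
    cli = ET.SubElement(rpc, f"{{{NC}}}CLI")
    conf = ET.SubElement(cli, f"{{{NC}}}Configuration",
                         {"exec-use-channel": channel})
    conf.text = command
    return ET.tostring(rpc, encoding="unicode")

msg = build_cli_configuration("vlan 3")
```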
close-session
Usage guidelines
This operation terminates the current NETCONF session, unlocks the configuration, and releases
the resources (for example, memory) used by the session. After this operation, you exit the XML
view.
XML example
# Terminate the current NETCONF session.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<close-session/>
</rpc>
edit-config: create
Usage guidelines
This operation creates target configuration items.
To use the create attribute in an <edit-config> operation, you must specify the target configuration
item.
• If the table supports creating a target configuration item and the item does not exist, the
operation creates the item and configures the item.
• If the specified item already exists, a data-exist error message is returned.
XML example
# Set the buffer size to 120.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
xmlns:xc="urn:ietf:params:xml:ns:netconf:base:1.0">
<edit-config>
<target>
<running/>
</target>
<config>
<top xmlns="http://www.hp.com/netconf/config:1.0">
<Syslog xmlns="http://www.hp.com/netconf/config:1.0" xc:operation="create">
<LogBuffer>
<BufferSize>120</BufferSize>
</LogBuffer>
</Syslog>
</top>
</config>
</edit-config>
</rpc>
edit-config: delete
Usage guidelines
This operation deletes the specified configuration.
• If the specified target has only the table index, the operation removes all configuration of the
specified target, and the target itself.
• If the specified target has the table index and configuration data, the operation removes the
specified configuration data of this target.
• If the specified target does not exist, an error message is returned, showing that the target does
not exist.
XML example
The syntax is the same as the edit-config message with the create attribute. Change the operation
attribute from create to delete.
edit-config: merge
Usage guidelines
This operation commits target configuration items to the running configuration.
To use the merge attribute in an <edit-config> operation, you must specify the target configuration
item (on a specific level):
• If the specified item exists, the operation directly updates the setting for the item.
• If the specified item does not exist, the operation creates the item and configures the item.
• If the specified item does not exist and it cannot be created, an error message is returned.
XML example
The XML data format is the same as the edit-config message with the create attribute. Change the
operation attribute from create to merge.
edit-config: remove
Usage guidelines
This operation removes the specified configuration.
• If the specified target has only the table index, the operation removes all configuration of the
specified target, and the target itself.
• If the specified target has the table index and configuration data, the operation removes the
specified configuration data of this target.
• If the specified target does not exist, or the XML message does not specify any targets, a
success message is returned.
XML example
The syntax is the same as the edit-config message with the create attribute. Change the operation
attribute from create to remove.
edit-config: replace
Usage guidelines
This operation replaces the specified configuration.
• If the specified target exists, the operation replaces the configuration of the target with the
configuration carried in the message.
• If the specified target does not exist but is allowed to be created, the operation creates the
target and then applies the configuration.
• If the specified target does not exist and is not allowed to be created, the operation is not
conducted and an invalid-value error message is returned.
XML example
The syntax is the same as the edit-config message with the create attribute. Change the operation
attribute from create to replace.
edit-config: test-option
Usage guidelines
This operation determines whether to commit a configuration item in an <edit-config> operation.
The <test-option> element has one of the following values:
• test-then-set—Performs a syntax check, and commits an item if the item passes the check. If
the item fails the check, the item is not committed. This is the default test-option value.
• set—Commits the item without performing a syntax check.
• test-only—Performs only a syntax check. If the item passes the check, a success message is
returned. Otherwise, an error message is returned.
XML example
# Test the configuration for an interface.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<edit-config>
<target>
<running/>
</target>
<test-option>test-only</test-option>
<config xmlns:xc="urn:ietf:params:xml:ns:netconf:base:1.0">
<top xmlns="http://www.hp.com/netconf/config:1.0">
<Ifmgr xc:operation="merge">
<Interfaces>
<Interface>
<IfIndex>262</IfIndex>
<Description>222</Description>
<ConfigSpeed>2</ConfigSpeed>
<ConfigDuplex>1</ConfigDuplex>
</Interface>
</Interfaces>
</Ifmgr>
</top>
</config>
</edit-config>
</rpc>
edit-config: default-operation
Usage guidelines
This operation modifies the running configuration of the device by using the default operation
method.
NETCONF uses one of the following operation attributes to modify configuration: merge, create,
delete, and replace. If you do not specify an operation attribute for an edit-config message,
NETCONF uses the default operation method. The value specified for the <default-operation>
element takes effect only once. If you do not specify an operation attribute or the default operation
method for an <edit-config> message, merge always applies.
The <default-operation> element has the following values:
• merge—Default value for the <default-operation> element.
• replace—Value used when the operation attribute is not specified and the default operation
method is specified as replace.
• none—Value used when the operation attribute is not specified and the default operation
method is specified as none. If this value is specified, the <edit-config> operation is used only
for schema verification rather than issuing a configuration. If the schema verification
succeeds, a success message is returned. Otherwise, an error message is returned.
XML example
# Issue an empty operation for schema verification purposes.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<edit-config>
<target>
<running/>
</target>
<default-operation>none</default-operation>
<config xmlns:xc="urn:ietf:params:xml:ns:netconf:base:1.0">
<top xmlns="http://www.hp.com/netconf/config:1.0">
<Ifmgr>
<Interfaces>
<Interface>
<IfIndex>262</IfIndex>
<Description>222222</Description>
</Interface>
</Interfaces>
</Ifmgr>
</top>
</config>
</edit-config>
</rpc>
edit-config: error-option
Usage guidelines
This operation determines the action to take in case of a configuration error.
The <error-option> element has the following values:
• stop-on-error—Stops the operation and returns an error message. This is the default
error-option value.
• continue-on-error—Continues the operation and returns an error message.
• rollback-on-error—Rolls back the configuration.
XML example
# Issue the configuration for two interfaces, setting the <error-option> element to continue-on-error.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<edit-config>
<target>
<running/>
</target> <error-option>continue-on-error</error-option>
<config xmlns:xc="urn:ietf:params:xml:ns:netconf:base:1.0">
<top xmlns="http://www.hp.com/netconf/config:1.0">
<Ifmgr xc:operation="merge">
<Interfaces>
<Interface>
<IfIndex>262</IfIndex>
<Description>222</Description>
<ConfigSpeed>1024</ConfigSpeed>
<ConfigDuplex>1</ConfigDuplex>
</Interface>
<Interface>
<IfIndex>263</IfIndex>
<Description>333</Description>
<ConfigSpeed>1024</ConfigSpeed>
<ConfigDuplex>1</ConfigDuplex>
</Interface>
</Interfaces>
</Ifmgr>
</top>
</config>
</edit-config>
</rpc>
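The effect of the three <error-option> values on a batch of configuration items can be sketched as follows. This is a conceptual model only, not device code; the function and item names are illustrative:

```python
# Conceptual sketch of <error-option> semantics when applying a list of
# configuration items. 'apply' returns True on success, False on error.
def apply_items(items, error_option, apply):
    applied = []
    for item in items:
        if apply(item):
            applied.append(item)
        else:
            if error_option == "stop-on-error":
                break                 # stop at the failed item
            if error_option == "continue-on-error":
                continue              # skip the failed item, keep going
            if error_option == "rollback-on-error":
                return []             # discard everything applied so far
    return applied

ok = lambda item: item != "bad"       # stand-in for a per-item syntax/commit check
```

For example, with items ["a", "bad", "b"], stop-on-error applies only "a", continue-on-error applies "a" and "b", and rollback-on-error applies nothing.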
edit-config: incremental
Usage guidelines
This operation adds configuration data to a column without affecting the original data.
The incremental attribute applies to a list column such as the vlan permitlist column.
You can use the incremental attribute for <edit-config> operations with an attribute other than
replace.
Support for the incremental attribute varies by module. For more information, see NETCONF XML
API references.
XML example
# Add VLANs 1 through 10 to an untagged VLAN list that has untagged VLANs 12 through 15.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
xmlns:hp="http://www.hp.com/netconf/base:1.0">
<edit-config>
<target>
<running/>
</target>
<config xmlns:xc="urn:ietf:params:xml:ns:netconf:base:1.0">
<top xmlns="http://www.hp.com/netconf/config:1.0">
<VLAN xc:operation="merge">
<HybridInterfaces>
<Interface>
<IfIndex>262</IfIndex>
<UntaggedVlanList hp:incremental="true">1-10</UntaggedVlanList>
</Interface>
</HybridInterfaces>
</VLAN>
</top>
</config>
</edit-config>
</rpc>
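The incremental merge in the example above produces an untagged VLAN list of 1-10 plus the existing 12-15. A small sketch of that list arithmetic (the helper names are illustrative):

```python
# Conceptual sketch: merging an incremental VLAN range string ("1-10") into
# an existing untagged VLAN list ("12-15") without replacing it.
def parse_vlan_list(text):
    vlans = set()
    for part in text.split(","):
        if "-" in part:
            lo, hi = map(int, part.split("-"))
            vlans.update(range(lo, hi + 1))
        else:
            vlans.add(int(part))
    return vlans

def format_vlan_list(vlans):
    out, vlans, i = [], sorted(vlans), 0
    while i < len(vlans):
        j = i
        while j + 1 < len(vlans) and vlans[j + 1] == vlans[j] + 1:
            j += 1                      # extend the consecutive run
        out.append(str(vlans[i]) if i == j else f"{vlans[i]}-{vlans[j]}")
        i = j + 1
    return ",".join(out)

merged = format_vlan_list(parse_vlan_list("12-15") | parse_vlan_list("1-10"))
```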
get
Usage guidelines
This operation retrieves device configuration and state information.
XML example
# Retrieve device configuration and state information for the Syslog module.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
xmlns:xc="http://www.hp.com/netconf/base:1.0">
<get>
<filter type="subtree">
<top xmlns="http://www.hp.com/netconf/data:1.0">
<Syslog>
</Syslog>
</top>
</filter>
</get>
</rpc>
get-bulk
Usage guidelines
This operation retrieves a number of data entries (including device configuration and state
information) starting from the data entry next to the one with the specified index.
XML example
# Retrieve device configuration and state information for up to five interfaces.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get-bulk>
<filter type="subtree">
<top xmlns="http://www.hp.com/netconf/data:1.0">
<Ifmgr>
<Interfaces xc:count="5" xmlns:xc="http://www.hp.com/netconf/base:1.0">
<Interface/>
</Interfaces>
</Ifmgr>
</top>
</filter>
</get-bulk>
</rpc>
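The paging behavior described above can be sketched client-side: each request names a starting index and a count, and the reply begins at the entry after that index. This is a conceptual model only; the data and names are illustrative:

```python
# Conceptual sketch of get-bulk paging: return up to 'count' entries whose
# index is greater than 'start_index' (i.e., starting from the entry next
# to the one with the specified index).
def get_bulk(entries, start_index, count):
    """entries: dict mapping index -> data, e.g. interface records."""
    selected = [i for i in sorted(entries) if i > start_index][:count]
    return [(i, entries[i]) for i in selected]

ifaces = {1: "GE1/0/1", 2: "GE1/0/2", 3: "GE1/0/3", 4: "GE1/0/4"}
page = get_bulk(ifaces, 1, 2)   # entries after index 1, at most 2
```

Repeating the call with the last returned index as the new start index walks the whole table.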
get-bulk-config
Usage guidelines
This operation retrieves a number of non-default configuration data entries starting from the data
entry next to the one with the specified index.
XML example
# Retrieve non-default configuration for all interfaces.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get-bulk-config>
<source>
<running/>
</source>
<filter type="subtree">
<top xmlns="http://www.hp.com/netconf/config:1.0">
<Ifmgr>
</Ifmgr>
</top>
</filter>
</get-bulk-config>
</rpc>
get-config
Usage guidelines
This operation retrieves non-default configuration data. If no non-default configuration data exists,
the device returns a response with empty data.
XML example
# Retrieve non-default configuration data for the interface table.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
xmlns:xc="http://www.hp.com/netconf/base:1.0">
<get-config>
<source>
<running/>
</source>
<filter type="subtree">
<top xmlns="http://www.hp.com/netconf/config:1.0">
<Ifmgr>
<Interfaces>
<Interface/>
</Interfaces>
</Ifmgr>
</top>
</filter>
</get-config>
</rpc>
get-sessions
Usage guidelines
This operation retrieves information about all NETCONF sessions in the system. You cannot specify
a session ID to retrieve information about a specific NETCONF session.
XML example
# Retrieve information about all NETCONF sessions in the system.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get-sessions/>
</rpc>
kill-session
Usage guidelines
This operation terminates the NETCONF session for another user. This operation cannot terminate
the NETCONF session for the current user.
XML example
# Terminate the NETCONF session with session ID 1.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<kill-session>
<session-id>1</session-id>
</kill-session>
</rpc>
load
Usage guidelines
This operation loads the configuration. After the device finishes a <load> operation, the configuration
in the specified file is merged into the running configuration of the device.
XML example
# Merge the configuration in file a1.cfg to the running configuration of the device.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<load>
<file>a1.cfg</file>
</load>
</rpc>
lock
Usage guidelines
This operation locks the configuration. After you lock the configuration, other users cannot perform
<edit-config> operations. Other NETCONF operations are allowed.
After a user locks the configuration, other users cannot use NETCONF or any other configuration
methods such as CLI and SNMP to configure the device.
XML example
# Lock the configuration.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<lock>
<target>
<running/>
</target>
</lock>
</rpc>
rollback
Usage guidelines
This operation rolls back the configuration. To do so, you must specify the configuration file in the
<file> element. After the device finishes the <rollback> operation, the current device configuration is
totally replaced with the configuration in the specified configuration file.
XML example
# Roll back the running configuration to the configuration in file 1A.cfg.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<rollback>
<file>1A.cfg</file>
</rollback>
</rpc>
save
Usage guidelines
This operation saves the running configuration. You can use the <file> element to specify a file for
saving the configuration. If the message does not include the <file> element, the running
configuration is automatically saved to the main next-startup configuration file.
The OverWrite attribute determines whether the running configuration overwrites the original
configuration file when the specified file already exists.
The Binary-only attribute determines whether to save the running configuration only to the binary
configuration file.
XML example
# Save the running configuration to file test.cfg.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<save OverWrite="false" Binary-only="true">
<file>test.cfg</file>
</save>
</rpc>
unlock
Usage guidelines
This operation unlocks the configuration, so other users can configure the device.
Terminating a NETCONF session automatically unlocks the configuration.
XML example
# Unlock the configuration.
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<unlock>
<target>
<running/>
</target>
</unlock>
</rpc>
Configuring CWMP
About CWMP
CPE WAN Management Protocol (CWMP), also called "TR-069," is a DSL Forum technical
specification for remote management of network devices.
The protocol was initially designed to provide remote autoconfiguration through a server for large
numbers of dispersed end-user devices in a network. CWMP can be used on different types of
networks, including Ethernet.
The following are methods available for the ACS to issue configuration to the CPE:
• Transfers the configuration file to the CPE, and specifies the file as the next-startup
configuration file. At a reboot, the CPE starts up with the ACS-specified configuration file.
• Runs the configuration in the CPE's RAM. The configuration takes effect immediately on the
CPE. For the running configuration to survive a reboot, you must save the configuration on the
CPE.
CPE software management
The ACS can manage CPE software upgrade.
When the ACS finds a software version update, the ACS notifies the CPE to download the software
image file from a specific location. The location can be the URL of the ACS or an independent file
server.
If the CPE successfully downloads the software image file and the file is validated, the CPE notifies
the ACS of a successful download. If the CPE fails to download the software image file or the file is
invalidated, the CPE notifies the ACS of an unsuccessful download.
Data backup
The ACS can require the CPE to upload a configuration file or log file to a specific location. The
destination location can be the ACS or a file server.
CPE status and performance monitoring
The ACS can monitor the status and performance of CPEs. Table 12 shows the available CPE status
and performance objects for the ACS to monitor.
Table 12 CPE status and performance objects available for the ACS to monitor
• PeriodicInformInterval: Interval for periodic connection from the CPE to the ACS for
configuration and software update.
• PeriodicInformTime: Scheduled time for connection from the CPE to the ACS for configuration
and software update.
• ConnectionRequestURL (CPE URL): N/A.
• ConnectionRequestUsername (CPE username) and ConnectionRequestPassword (CPE
password): CPE username and password for authentication from the ACS to the CPE.
CWMP connection establishment
Steps 1 through 5 in Figure 70 show the procedure for establishing a connection between the
CPE and the ACS.
1. After obtaining the basic ACS parameters, the CPE initiates a TCP connection to the ACS.
2. If HTTPS is used, the CPE and the ACS initialize SSL for a secure HTTP connection.
3. The CPE sends an Inform message in HTTPS to initiate a CWMP session.
4. After the CPE passes authentication, the ACS returns an Inform response to establish the
session.
5. After sending all requests, the CPE sends an empty HTTP post message.
Figure 70 CWMP connection establishment
Figure 71 Main and backup ACS switchover
Enabling CWMP from the CLI
1. Enter system view.
system-view
2. Enter CWMP view.
cwmp
3. Enable CWMP.
cwmp enable
By default, CWMP is disabled.
NOTE:
The ACS URL, username, and password must be in hexadecimal format and separated by spaces.
[Sysname] dhcp server ip-pool 0
[Sysname-dhcp-pool-0] option 43 hex 0127687474703A2F2F3136392E3235342E37362E33313A373534372F61637320313233342035363738
cwmp acs default password { cipher | simple } string
By default, no password has been configured for authentication to the default ACS URL.
cwmp cpe username username
By default, no username has been configured for authentication to the CPE.
4. (Optional.) Configure the password for authentication to the CPE.
cwmp cpe password { cipher | simple } string
By default, no password has been configured for authentication to the CPE.
The password setting is optional. You can specify only a username for authentication.
Configuring autoconnect parameters
About this task
You can configure the CPE to connect to the ACS periodically, or at a scheduled time for
configuration or software update.
The CPE retries a connection automatically when one of the following events occurs:
• The CPE fails to connect to the ACS. The CPE considers a connection attempt as having failed
when the close-wait timer expires. This timer starts when the CPE sends an Inform request. If
the CPE fails to receive a response before the timer expires, the CPE resends the Inform
request.
• The connection is disconnected before the session on the connection is completed.
To protect system resources, limit the number of retries that the CPE can make to connect to the
ACS.
Configuring the periodic Inform feature
1. Enter system view.
system-view
2. Enter CWMP view.
cwmp
3. Enable the periodic Inform feature.
cwmp cpe inform interval enable
By default, this function is disabled.
4. Set the Inform interval.
cwmp cpe inform interval interval
By default, the CPE sends an Inform message to start a session every 600 seconds.
Scheduling a connection initiation
1. Enter system view.
system-view
2. Enter CWMP view.
cwmp
3. Schedule a connection initiation.
cwmp cpe inform time time
By default, no connection initiation has been scheduled.
Setting the maximum number of connection retries
1. Enter system view.
system-view
2. Enter CWMP view.
cwmp
3. Set the maximum number of connection retries.
cwmp cpe connect retry retries
By default, the CPE retries a failed connection until the connection is established.
Setting the close-wait timer
About this task
The close-wait timer specifies the following:
• The maximum amount of time the CPE waits for the response to a session request. The CPE
determines that its session attempt has failed when the timer expires.
• The amount of time the connection to the ACS can be idle before it is terminated. The CPE
terminates the connection to the ACS if no traffic is sent or received before the timer expires.
Procedure
1. Enter system view.
system-view
2. Enter CWMP view.
cwmp
3. Set the close-wait timer.
cwmp cpe wait timeout seconds
By default, the close-wait timer is 30 seconds.
• Display CWMP configuration: display cwmp configuration
• Display the current status of CWMP: display cwmp status
Figure 72 Network diagram (ACS: 10.185.10.41, DHCP server: 10.185.10.52, DNS server:
10.185.10.60)
Table 15 shows the ACS attributes for the CPEs to connect to the ACS.
Table 15 ACS attributes
Item Setting
Preferred ACS URL http://10.185.10.41:9090
ACS username admin
ACS password 12345
a. Launch a Web browser on the ACS configuration terminal.
b. In the address bar of the Web browser, enter the ACS URL and port number. This example
uses http://10.185.10.41:8080/imc.
c. On the login page, enter the ACS login username and password, and then click Login.
2. Create a CPE group for each equipment room:
a. Select Service > BIMS > CPE Group from the top navigation bar.
The CPE Group page appears.
Figure 73 CPE Group page
b. Click Add.
c. Enter a group name, and then click OK.
Figure 74 Adding a CPE group
d. Repeat the previous two steps to create a CPE group for CPEs in Room B.
3. Add CPEs to the CPE group for each equipment room:
a. Select Service > BIMS > Resource Management > Add CPE from the top navigation bar.
b. On the Add CPE page, configure the following parameters:
− Authentication Type—Select ACS UserName.
− CPE Name—Enter a CPE name.
− ACS Username—Enter admin.
− ACS Password Generated—Select Manual Input.
− ACS Password—Enter a password for ACS authentication.
− ACS Confirm Password—Re-enter the password.
− CPE Model—Select the CPE model.
− CPE Group—Select the CPE group.
Figure 75 Adding a CPE
c. Click OK.
d. Verify that the CPE has been added successfully from the All CPEs page.
Figure 76 Viewing CPEs
e. Repeat the previous steps to add CPE 2 and CPE 3 to the CPE group for Room A, and add
CPEs in Room B to the CPE group for Room B.
4. Configure a configuration template for each equipment room:
a. Select Service > BIMS > Configuration Management > Configuration Templates from
the top navigation bar.
Figure 77 Configuration Templates page
b. Click Import.
c. Select a source configuration file, select Configuration Segment as the template type, and
then click OK.
The created configuration template will be displayed in the Configuration Template list
after a successful file import.
IMPORTANT:
If the first command in the configuration template file is system-view, make sure no
characters exist in front of the command.
Figure 79 Configuration Template list
b. Click Import.
c. Select a source file, and then click OK.
Figure 81 Importing CPE software
d. Repeat the previous steps to add software library entries for CPEs of different models.
6. Create an auto-deployment task for each equipment room:
a. Select Service > BIMS > Configuration Management > Deployment Guide from the top
navigation bar.
Figure 82 Deployment Guide
Figure 84 Operation result
e. Repeat the previous steps to add a deployment task for CPEs in Room B.
Configuring the DHCP server
In this example, an HPE device is operating as the DHCP server.
1. Configure an IP address pool to assign IP addresses and DNS server address to the CPEs.
This example uses subnet 10.185.10.0/24 for IP address assignment.
# Enable DHCP.
<DHCP_server> system-view
[DHCP_server] dhcp enable
# Enable DHCP server on VLAN-interface 1.
[DHCP_server] interface vlan-interface 1
[DHCP_server-Vlan-interface1] dhcp select server
[DHCP_server-Vlan-interface1] quit
# Exclude the DNS server address 10.185.10.60 and the ACS IP address 10.185.10.41 from
dynamic allocation.
[DHCP_server] dhcp server forbidden-ip 10.185.10.41
[DHCP_server] dhcp server forbidden-ip 10.185.10.60
# Create DHCP address pool 0.
[DHCP_server] dhcp server ip-pool 0
# Assign subnet 10.185.10.0/24 to the address pool, and specify the DNS server address
10.185.10.60 in the address pool.
[DHCP_server-dhcp-pool-0] network 10.185.10.0 mask 255.255.255.0
[DHCP_server-dhcp-pool-0] dns-list 10.185.10.60
2. Configure DHCP Option 43 to contain the ACS URL, username, and password in hexadecimal
format.
[DHCP_server-dhcp-pool-0] option 43 hex 013B687474703A2F2F6163732E64617461626173653A393039302F616373207669636B79203132333435
[CPE1] ip address dhcp-alloc
Configuring EAA
About EAA
Embedded Automation Architecture (EAA) is a monitoring framework that enables you to self-define
monitored events and actions to take in response to an event. It allows you to create monitor policies
by using the CLI or Tcl scripts.
EAA framework
EAA framework includes a set of event sources, a set of event monitors, a real-time event manager
(RTM), and a set of user-defined monitor policies, as shown in Figure 85.
Figure 85 EAA framework
(Figure content: the event sources CLI, Syslog, SNMP, SNMP-notification, Hotplug, Interface,
Process, and Track feed the event monitors (EM), which notify the RTM; the RTM runs the EAA
monitor policies.)
Event sources
Event sources are software or hardware modules that trigger events (see Figure 85).
For example, the CLI module triggers an event when you enter a command. The Syslog module (the
information center) triggers an event when it receives a log message.
Event monitors
EAA creates one event monitor to monitor the system for the event specified in each monitor policy.
An event monitor notifies the RTM to run the monitor policy when the monitored event occurs.
RTM
RTM manages the creation, state machine, and execution of monitor policies.
EAA monitor policies
A monitor policy specifies the event to monitor and actions to take when the event occurs.
You can configure EAA monitor policies by using the CLI or Tcl.
A monitor policy contains the following elements:
• One event.
• A minimum of one action.
• A minimum of one user role.
• One running time setting.
For more information about these elements, see "Elements in a monitor policy."
SNMP: Each SNMP event is associated with two user-defined thresholds: start and restart. An
SNMP event occurs when the monitored MIB variable's value crosses the start threshold in the
following situations:
• The monitored variable's value crosses the start threshold for the first time.
• The monitored variable's value crosses the start threshold each time after it crosses the
restart threshold.
SNMP-Notification: An SNMP-Notification event occurs when the monitored MIB variable's value in
an SNMP notification matches the specified condition. For example, the broadcast traffic rate on
an Ethernet interface reaches or exceeds 30%.
Track: A track event occurs when the state of the track entry changes from Positive to Negative
or from Negative to Positive. If you specify multiple track entries for a policy, EAA
triggers the policy only when the state of all the track entries changes from Positive
(Negative) to Negative (Positive).
If you set a suppress time for a policy, the timer starts when the policy is triggered. The
system does not process the messages that report the track entry state change from
Positive (Negative) to Negative (Positive) until the timer times out.
Action
You can create a series of order-dependent actions to take in response to the event specified in the
monitor policy.
The following are available actions:
• Executing a command.
• Sending a log.
• Enabling an active/standby switchover.
• Executing a reboot without saving the running configuration.
User role
For EAA to execute an action in a monitor policy, you must assign the policy the user role that has
access to the action-specific commands and resources. If EAA lacks access to an action-specific
command or resource, EAA does not perform the action and all the subsequent actions.
For example, a monitor policy has four actions numbered from 1 to 4. The policy has user roles that
are required for performing actions 1, 3, and 4. However, it does not have the user role required for
performing action 2. When the policy is triggered, EAA executes only action 1.
For more information about user roles, see RBAC in Fundamentals Configuration Guide.
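The stop-at-first-denied rule in the example above can be sketched as follows. This is a conceptual model only; the names are illustrative:

```python
# Conceptual sketch: EAA runs actions in numeric order and stops at the
# first action whose commands the policy's user roles cannot access;
# that action and all subsequent actions are skipped.
def run_actions(actions, permitted):
    executed = []
    for number in sorted(actions):
        if number not in permitted:
            break
        executed.append(number)
    return executed

# Policy with actions 1-4 but user roles covering only actions 1, 3, and 4.
result = run_actions([1, 2, 3, 4], permitted={1, 3, 4})
```

As in the example, only action 1 executes because action 2 lacks the required user role.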
Runtime
The runtime limits the amount of time that the monitor policy runs its actions from the time it is
triggered. This setting prevents a policy from running its actions permanently to occupy resources.
the value of _slot is 1. When a member device in slot 2 joins or leaves the IRF fabric, the value
of _slot is 2.
Table 18 shows all system-defined variables.
Table 18 System-defined EAA environment variables by event type
Hotplug _slot: ID of the member device that joins or leaves the IRF fabric
User-defined variables
You can use user-defined variables for all types of events.
User-defined variable names can contain digits, letters, and the underscore sign (_), except that
the underscore sign cannot be the leading character.
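The naming rule can be sketched as a regular expression check (the helper name is illustrative; letters, digits, and underscores are accepted, with no leading underscore):

```python
# Sketch: validate a user-defined EAA environment variable name.
# Allowed: digits, letters, and underscores; the first character must not
# be an underscore (leading underscores mark system-defined variables).
import re

def valid_variable_name(name):
    return re.fullmatch(r"[A-Za-z0-9][A-Za-z0-9_]*", name) is not None
```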
Configuring a monitor policy
Restrictions and guidelines
Make sure the actions in different policies do not conflict. Policy execution result will be unpredictable
if policies that conflict in actions are running concurrently.
You can assign the same policy name to a CLI-defined policy and a Tcl-defined policy. However, you
cannot assign the same name to policies that are the same type.
A monitor policy supports only one event and runtime. If you configure multiple events for a policy,
the most recent one takes effect.
A monitor policy supports a maximum of 64 valid user roles. User roles added after this limit is
reached do not take effect.
Configure an SNMP-Notification event.
event snmp-notification oid oid oid-val oid-val op op [ drop ]
Configure a Syslog event.
event syslog priority priority msg msg occurs times period period
Configure a track event.
event track track-list state { negative | positive } [ suppress-time
suppress-time ]
By default, a monitor policy does not contain an event.
If you configure multiple events for a policy, the most recent one takes effect.
5. Configure the actions to take when the event occurs.
Choose the following tasks as needed:
Configure a CLI action.
action number cli command-line
Configure a reboot action.
action number reboot [ slot slot-number [ subslot subslot-number ] ]
Configure an active/standby switchover action.
action number switchover
Configure a logging action.
action number syslog priority priority facility local-number msg
msg-body
By default, a monitor policy does not contain any actions.
6. (Optional.) Assign a user role to the policy.
user-role role-name
By default, a monitor policy contains user roles that its creator had at the time of policy creation.
An EAA policy cannot have both the security-audit user role and any other user roles.
Any previously assigned user roles are automatically removed when you assign the
security-audit user role to the policy. The previously assigned security-audit user
role is automatically removed when you assign any other user roles to the policy.
7. (Optional.) Configure the policy action runtime.
running-time time
The default policy action runtime is 20 seconds.
If you configure multiple action runtimes for a policy, the most recent one takes effect.
8. Enable the policy.
commit
By default, CLI-defined policies are not enabled.
A CLI-defined policy can take effect only after you perform this step.
::platformtools::rtm::event_register event-type arg1 arg2 arg3 …
user-role role-name1 | [ user-role role-name2 | [ … ] ] [ running-time
running-time ]
The arg1 arg2 arg3 … arguments represent event matching rules. If an argument value
contains spaces, use double quotation marks ("") to enclose the value. For example, "a b c."
The configuration requirements for the event-type, user-role, and running-time
arguments are the same as those for a CLI-defined monitor policy.
• The other lines
From the second line, the Tcl script defines the actions to be executed when the monitor policy
is triggered. You can use multiple lines to define multiple actions. The system executes these
actions in sequence. The following actions are available:
Standard Tcl commands.
EAA-specific Tcl actions:
− switchover ( ::platformtools::rtm::action switchover )
− syslog (::platformtools::rtm::action syslog priority priority
facility local-number msg msg-body). For more information about these
arguments, see EAA commands in Network Management and Monitoring Command
Reference.
Commands supported by the device.
Restrictions and guidelines
To revise the Tcl script of a policy, you must suspend all monitor policies first, and then resume the
policies after you finish revising the script. The system cannot execute a Tcl-defined policy if you edit
its Tcl script without first suspending these policies.
Procedure
1. Download the Tcl script file to the device by using FTP or TFTP.
For more information about using FTP and TFTP, see Fundamentals Configuration Guide.
2. Create and enable a Tcl monitor policy.
a. Enter system view.
system-view
b. Create a Tcl-defined policy and bind it to the Tcl script file.
rtm tcl-policy policy-name tcl-filename
By default, no Tcl policies exist.
Make sure the script file is saved on all IRF member devices. This practice ensures that the
policy can run correctly after a master/subordinate switchover occurs or the member device
where the script file resides leaves the IRF.
system-view
2. Suspend monitor policies.
rtm scheduler suspend
• Display the running configuration of all CLI-defined monitor policies: display
current-configuration
• Display user-defined EAA environment variables: display rtm environment [ var-name ]
Procedure
# Edit a Tcl script file (rtm_tcl_test.tcl, in this example) for EAA to send the message "rtm_tcl_test is
running" when a command that contains the display this string is executed.
::platformtools::rtm::event_register cli sync mode execute pattern display this
user-role network-admin
::platformtools::rtm::action syslog priority 1 facility local4 msg rtm_tcl_test is
running
# Download the Tcl script file from the TFTP server at 1.2.1.1.
<Sysname> tftp 1.2.1.1 get rtm_tcl_test.tcl
# Create Tcl-defined policy test and bind it to the Tcl script file.
<Sysname> system-view
[Sysname] rtm tcl-policy test rtm_tcl_test.tcl
[Sysname] quit
# Enable the information center to output log messages to the current monitoring terminal.
<Sysname> terminal monitor
The current terminal is enabled to display logs.
<Sysname> system-view
[Sysname] info-center enable
Information center is enabled.
[Sysname] quit
# Execute the display this command. Verify that the system displays an "rtm_tcl_test is running"
message and a message that the policy is being executed successfully.
[Sysname] display this
%Aug 1 09:50:04:634 2019 Sysname RTM/1/RTM_ACTION: rtm_tcl_test is running
%Aug 1 09:50:04:636 2019 Sysname RTM/6/RTM_POLICY: TCL policy test is running
successfully.
#
return
# Add a CLI event that occurs when a question mark (?) is entered at any command line that contains
letters and digits.
[Sysname-rtm-test] event cli async mode help pattern [a-zA-Z0-9]
# Add an action that sends the message "hello world" with a priority of 4 from the logging facility
local3 when the event occurs.
[Sysname-rtm-test] action 0 syslog priority 4 facility local3 msg "hello world"
# Add an action that enters system view when the event occurs.
[Sysname-rtm-test] action 2 cli system-view
# Set the policy action runtime to 2000 seconds.
[Sysname-rtm-test] running-time 2000
# Enable the information center to output log messages to the current monitoring terminal.
[Sysname-rtm-test] return
<Sysname> terminal monitor
The current terminal is enabled to display logs.
<Sysname> system-view
[Sysname] info-center enable
Information center is enabled.
[Sysname] quit
# Enter a question mark (?) at a command line that contains a letter d. Verify that the system displays
a "hello world" message and a message that the policy is being executed successfully on the
terminal screen.
<Sysname> d?
debugging
delete
diagnostic-logfile
dir
display
Figure 87 Network diagram (Device A and Device B connect to an IP network through their
GigabitEthernet 1/0/1 interfaces; Device C is at 10.2.1.2, Device D at 10.3.1.2, and Device E at
10.3.2.2)
Procedure
# Create track entry 1 and associate it with the link state of GigabitEthernet 1/0/1.
<Device A> system-view
[Device A] track 1 interface gigabitethernet 1/0/1
# Configure a CLI-defined EAA monitor policy so that the system automatically disables session
establishment with Device D and Device E when GigabitEthernet 1/0/1 is down.
[Device A] rtm cli-policy test
[Device A-rtm-test] event track 1 state negative
[Device A-rtm-test] action 0 cli system-view
[Device A-rtm-test] action 1 cli bgp 100
[Device A-rtm-test] action 2 cli peer 10.3.1.2 ignore
[Device A-rtm-test] action 3 cli peer 10.3.2.2 ignore
[Device A-rtm-test] user-role network-admin
[Device A-rtm-test] commit
[Device A-rtm-test] quit
# Execute the display bgp peer ipv4 command on Device A to display BGP peer information.
If no BGP peer information is displayed, Device A does not have any BGP peers.
# Add a CLI event that occurs when a command line that contains loopback0 is executed.
[Sysname-rtm-test] event cli async mode execute pattern loopback0
# Add an action that enters system view when the event occurs.
[Sysname-rtm-test] action 0 cli system-view
# Add an action that creates the interface Loopback 0 and enters loopback interface view.
[Sysname-rtm-test] action 1 cli interface loopback 0
# Add an action that assigns the IP address 1.1.1.1 to Loopback 0. The loopback0IP variable is
used in the action for IP address assignment.
[Sysname-rtm-test] action 2 cli ip address $loopback0IP 24
# Add an action that sends the matching loopback0 command with a priority of 0 from the logging
facility local7 when the event occurs.
[Sysname-rtm-test] action 3 syslog priority 0 facility local7 msg $_cmd
# Enable the policy.
[Sysname-rtm-test] commit
[Sysname-rtm-test] return
<Sysname>
# Execute the loopback0 command. Verify that the system displays a "loopback0" message and a
message that the policy is being executed successfully on the terminal screen.
[Sysname] interface loopback0
[Sysname-LoopBack0]%Aug 1 09:46:10:592 2019 Sysname RTM/7/RTM_ACTION: interface
loopback0
%Aug 1 09:46:10:613 2019 Sysname RTM/6/RTM_POLICY: CLI policy test is running
successfully.
# Verify that Loopback 0 has been created and its IP address is 1.1.1.1.
[Sysname-LoopBack0] display interface loopback brief
Brief information on interfaces in route mode:
Link: ADM - administratively down; Stby - standby
Protocol: (s) - spoofing
Interface Link Protocol Primary IP Description
Loop0 UP UP(s) 1.1.1.1
[Sysname-LoopBack0]
Monitoring and maintaining processes
About monitoring and maintaining processes
The system software of the device is a full-featured, modular, and scalable network operating system based on the Linux kernel. System software features run as the following types of independent processes:
• User process—Runs in user space. Most system software features run user processes. Each
process runs in an independent space so the failure of a process does not affect other
processes. The system automatically monitors user processes. The system supports
preemptive multithreading. A process can run multiple threads to support multiple activities.
Whether a process supports multithreading depends on the software implementation.
• Kernel thread—Runs in kernel space. A kernel thread executes kernel code. It has a higher
security level than a user process. If a kernel thread fails, the system breaks down. You can
monitor the running status of kernel threads.
Starting a third-party process
Restrictions and guidelines
If you execute the third-part-process start command multiple times but specify the same
process name, whether the command can be executed successfully depends on the process. You
can use the display current-configuration | include third-part-process
command to view the result.
Procedure
1. Enter system view.
system-view
2. Start a third-party process.
third-part-process start name process-name [ arg args ]
Task Command
Display memory usage.
  display memory [ summary ] [ slot slot-number [ cpu cpu-number ] ]
Display process state information.
  display process [ all | job job-id | name process-name ] [ slot slot-number [ cpu cpu-number ] ]
Display CPU usage for all processes.
  display process cpu [ slot slot-number [ cpu cpu-number ] ]
Monitor process running state.
  monitor process [ dumbtty ] [ iteration number ] [ slot slot-number [ cpu cpu-number ] ]
Monitor thread running state.
  monitor thread [ dumbtty ] [ iteration number ] [ slot slot-number [ cpu cpu-number ] ]
For more information about the display memory command, see Fundamentals Command
Reference.
Display and maintenance commands for user processes
Execute display commands in any view and other commands in user view.
Task Command
Display context information for process exceptions.
  display exception context [ count value ] [ slot slot-number [ cpu cpu-number ] ]
Display the core dump file directory.
  display exception filepath [ slot slot-number [ cpu cpu-number ] ]
Display log information for all user processes.
  display process log [ slot slot-number [ cpu cpu-number ] ]
Display memory usage for all user processes.
  display process memory [ slot slot-number [ cpu cpu-number ] ]
Display heap memory usage for a user process.
  display process memory heap job job-id [ verbose ] [ slot slot-number [ cpu cpu-number ] ]
Display memory content starting from a specified memory block for a user process.
  display process memory heap job job-id address starting-address length memory-length [ slot slot-number [ cpu cpu-number ] ]
Display the addresses of memory blocks with a specified size used by a user process.
  display process memory heap job job-id size memory-size [ offset offset-size ] [ slot slot-number [ cpu cpu-number ] ]
Clear context information for process exceptions.
  reset exception context [ slot slot-number [ cpu cpu-number ] ]
monitor kernel deadloop enable [ slot slot-number [ cpu cpu-number
[ core core-number&<1-64> ] ] ]
By default, kernel thread deadloop detection is enabled.
3. (Optional.) Set the threshold for identifying a kernel thread deadloop.
monitor kernel deadloop time time [ slot slot-number [ cpu
cpu-number ] ]
By default, the threshold for identifying a kernel thread deadloop is 20 seconds.
4. (Optional.) Exclude a kernel thread from kernel thread deadloop detection.
monitor kernel deadloop exclude-thread tid [ slot slot-number [ cpu
cpu-number ] ]
When enabled, kernel thread deadloop detection monitors all kernel threads by default.
5. (Optional.) Specify the action to be taken in response to a kernel thread deadloop.
monitor kernel deadloop action { reboot | record-only } [ slot
slot-number [ cpu cpu-number ] ]
The default action is reboot.
Task Command
Display kernel thread deadloop detection configuration.
  display kernel deadloop configuration [ slot slot-number [ cpu cpu-number ] ]
Display kernel thread deadloop information.
  display kernel deadloop show-number [ offset ] [ verbose ] [ slot slot-number [ cpu cpu-number ] ]
Display kernel thread exception information.
  display kernel exception show-number [ offset ] [ verbose ] [ slot slot-number [ cpu cpu-number ] ]
Display kernel thread reboot information.
  display kernel reboot show-number [ offset ] [ verbose ] [ slot slot-number [ cpu cpu-number ] ]
Display kernel thread starvation detection configuration.
  display kernel starvation configuration [ slot slot-number [ cpu cpu-number ] ]
Display kernel thread starvation information.
  display kernel starvation show-number [ offset ] [ verbose ] [ slot slot-number [ cpu cpu-number ] ]
Clear kernel thread deadloop information.
  reset kernel deadloop [ slot slot-number [ cpu cpu-number ] ]
Clear kernel thread exception information.
  reset kernel exception [ slot slot-number [ cpu cpu-number ] ]
Clear kernel thread reboot information.
  reset kernel reboot [ slot slot-number [ cpu cpu-number ] ]
Clear kernel thread starvation information.
  reset kernel starvation [ slot slot-number [ cpu cpu-number ] ]
Configuring port mirroring
About port mirroring
Port mirroring copies the packets passing through a port to a port that connects to a data monitoring
device for packet analysis.
Terminology
The following terms are used in port mirroring configuration.
Mirroring source
The mirroring sources can be one or more monitored ports called source ports.
Packets passing through mirroring sources are copied to a port connecting to a data monitoring
device for packet analysis. The copies are called mirrored packets.
Source device
The device where the mirroring sources reside is called a source device.
Mirroring destination
The mirroring destination connects to a data monitoring device and is the destination port (also
known as the monitor port) of mirrored packets. Mirrored packets are sent out of the monitor port to
the data monitoring device.
A monitor port might receive multiple copies of a packet when it monitors multiple mirroring sources.
For example, two copies of a packet are received on Port A when the following conditions exist:
• Port A is monitoring bidirectional traffic of Port B and Port C on the same device.
• The packet travels from Port B to Port C.
Destination device
The device where the monitor port resides is called the destination device.
Mirroring direction
The mirroring direction specifies the direction of the traffic that is copied on a mirroring source.
• Inbound—Copies packets received.
• Outbound—Copies packets sent.
• Bidirectional—Copies packets received and sent.
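The duplicate-copy scenario described above can be sketched in Python (a conceptual model only; the function and port names are made up, not device code):

```python
# Conceptual sketch: why a monitor port can receive two copies of one packet
# when it mirrors bidirectional traffic of two ports on the same device.
def mirror_copies(path, sources, direction="both"):
    """Count copies delivered to the monitor port for one packet.

    path      -- (ingress_port, egress_port) the packet travels through
    sources   -- mirroring source ports
    direction -- "inbound", "outbound", or "both"
    """
    ingress, egress = path
    copies = 0
    for port in sources:
        if port == ingress and direction in ("inbound", "both"):
            copies += 1          # copy taken when the packet is received
        if port == egress and direction in ("outbound", "both"):
            copies += 1          # copy taken when the packet is sent
    return copies

# A packet from Port B to Port C, with both ports mirrored bidirectionally,
# reaches the monitor port twice.
print(mirror_copies(("PortB", "PortC"), ["PortB", "PortC"]))  # 2
```

With only inbound mirroring on Port B, the same packet would be copied once.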
Mirroring group
Port mirroring is implemented through mirroring groups. Mirroring groups can be classified into local
mirroring groups, remote source groups, and remote destination groups.
Reflector port, egress port, and remote probe VLAN
Reflector ports, remote probe VLANs, and egress ports are used for Layer 2 remote port mirroring.
The remote probe VLAN is a dedicated VLAN for transmitting mirrored packets to the destination
device. Both the reflector port and egress port reside on a source device and send mirrored packets
to the remote probe VLAN.
On port mirroring devices, all ports except source, destination, reflector, and egress ports are called
common ports.
Port mirroring classification
Port mirroring can be classified into local port mirroring and remote port mirroring.
• Local port mirroring—Also known as Switch Port Analyzer (SPAN). In local port mirroring, the
source device is directly connected to a data monitoring device. The source device also acts as
the destination device and forwards mirrored packets directly to the data monitoring device.
• Remote port mirroring—The source device is not directly connected to a data monitoring
device. The source device sends mirrored packets to the destination device, which forwards the
packets to the data monitoring device.
Remote port mirroring is also known as Layer 2 remote port mirroring because the source
device and destination device are on the same Layer 2 network.
Figure 88 Local port mirroring (the host connects to Port A on the device; Port B connects to the data monitoring device)
As shown in Figure 88, the source port (Port A) and the monitor port (Port B) reside on the same
device. Packets received on Port A are copied to Port B. Port B then forwards the packets to the data
monitoring device for analysis.
4. Upon receiving the mirrored packets, the destination device determines whether the ID of the
mirrored packets is the same as the remote probe VLAN ID. If the two VLAN IDs match, the
destination device forwards the mirrored packets to the data monitoring device through the
monitor port.
Figure 89 Layer 2 remote port mirroring implementation through the reflector port method (mirrored packets travel from the source device through the remote probe VLAN and the intermediate device to the destination device, which forwards them to the data monitoring device)
Figure 90 Layer 2 remote port mirroring implementation through the egress port method (mirrored packets leave the source device through the egress port and travel in the remote probe VLAN to the destination device)
1. Configuring mirroring sources
Choose one of the following tasks:
Configuring source ports
2. Configuring the monitor port
Procedure
• Configure the monitor port in system view:
a. Enter system view.
system-view
b. Configure the monitor port for a local mirroring group.
mirroring-group group-id monitor-port interface-type
interface-number
By default, no monitor port is configured for a local mirroring group.
• Configure the monitor port in interface view:
a. Enter system view.
system-view
b. Enter interface view.
interface interface-type interface-number
c. Configure the port as the monitor port for a mirroring group.
mirroring-group group-id monitor-port
By default, a port does not act as the monitor port for any local mirroring groups.
Procedure
1. Enter system view.
system-view
2. Create a remote source group.
mirroring-group group-id remote-source
3. Configure mirroring sources for the remote source group. Choose one option as needed:
Configure mirroring ports in system view.
mirroring-group group-id mirroring-port interface-list { both |
inbound | outbound }
Execute the following commands in sequence to enter interface view and then configure the
interface as a source port.
interface interface-type interface-number
mirroring-group group-id mirroring-port { both | inbound |
outbound }
quit
4. Configure the reflector port for the remote source group.
mirroring-group group-id reflector-port interface-type interface-number
By default, no reflector port is configured for a remote source group.
5. Create a VLAN and enter its view.
vlan vlan-id
6. Assign the ports that connect to the data monitoring devices to the VLAN.
port interface-list
By default, a VLAN does not contain any ports.
7. Return to system view.
quit
8. Specify the VLAN as the remote probe VLAN for the remote source group.
mirroring-group group-id remote-probe vlan vlan-id
By default, no remote probe VLAN is configured for a remote source group.
To monitor the bidirectional traffic of a source port, disable MAC address learning for the remote
probe VLAN on the source, intermediate, and destination devices. For more information about MAC
address learning, see Layer 2—LAN Switching Configuration Guide.
The member port of a Layer 2 or Layer 3 aggregate interface cannot be configured as the monitor
port for Layer 2 remote port mirroring.
2. Configuring mirroring sources
3. Configuring the egress port
4. Configuring the remote probe VLAN
Configuring the remote probe VLAN
Restrictions and guidelines
This task is required on both the source and destination devices.
Only an existing static VLAN can be configured as a remote probe VLAN.
When a VLAN is configured as a remote probe VLAN, use the remote probe VLAN for port mirroring
exclusively.
Configure the same remote probe VLAN for the remote source group and the remote destination
group.
The device supports only one remote probe VLAN.
Procedure
1. Enter system view.
system-view
2. Configure the remote probe VLAN for the remote source or destination group.
mirroring-group group-id remote-probe vlan vlan-id
By default, no remote probe VLAN is configured for a remote source or destination group.
2. Create a remote source group.
mirroring-group group-id remote-source
A remote source group supports only one reflector port.
Configuring the reflector port in system view
1. Enter system view.
system-view
2. Configure the reflector port for a remote source group.
mirroring-group group-id reflector-port interface-type
interface-number
By default, no reflector port is configured for a remote source group.
Configuring the reflector port in interface view
1. Enter system view.
system-view
2. Enter interface view.
interface interface-type interface-number
3. Configure the port as the reflector port for a remote source group.
mirroring-group group-id reflector-port
By default, a port does not act as the reflector port for any remote source groups.
For more information about the port trunk permit vlan and port hybrid vlan
commands, see Layer 2—LAN Switching Command Reference.
Configuring the egress port in interface view
1. Enter system view.
system-view
2. Enter interface view.
interface interface-type interface-number
3. Configure the port as the egress port for a remote source group.
mirroring-group group-id monitor-egress
By default, a port does not act as the egress port for any remote source groups.
Task Command
Display mirroring group information.
  display mirroring-group { group-id | all | local | remote-destination | remote-source }
Network diagram (the Marketing Department and the Technical Department connect to source ports GigabitEthernet 1/0/1 and GigabitEthernet 1/0/2; the server connects to monitor port GigabitEthernet 1/0/3)
Procedure
# Create local mirroring group 1.
<Device> system-view
[Device] mirroring-group 1 local
# Configure GigabitEthernet 1/0/1 and GigabitEthernet 1/0/2 as source ports for local mirroring group
1.
[Device] mirroring-group 1 mirroring-port gigabitethernet 1/0/1 gigabitethernet 1/0/2
both
# Configure GigabitEthernet 1/0/3 as the monitor port for local mirroring group 1.
[Device] mirroring-group 1 monitor-port gigabitethernet 1/0/3
# Disable the spanning tree feature on the monitor port (GigabitEthernet 1/0/3).
[Device] interface gigabitethernet 1/0/3
[Device-GigabitEthernet1/0/3] undo stp enable
[Device-GigabitEthernet1/0/3] quit
Figure 92 Network diagram (Departments A, B, and C connect to the device through GigabitEthernet 1/0/1, GigabitEthernet 1/0/2, and GigabitEthernet 1/0/3; Server A and Server B connect through GigabitEthernet 1/0/4 and GigabitEthernet 1/0/5)
Procedure
# Create remote source group 1.
<Device> system-view
[Device] mirroring-group 1 remote-source
# Configure GigabitEthernet 1/0/1 through GigabitEthernet 1/0/3 as source ports of remote source
group 1.
[Device] mirroring-group 1 mirroring-port GigabitEthernet1/0/1 to GigabitEthernet1/0/3
both
# Configure an unused port (GigabitEthernet 1/0/6 in this example) as the reflector port of remote
source group 1.
[Device] mirroring-group 1 reflector-port GigabitEthernet1/0/6
This operation may delete all settings made on the interface. Continue? [Y/N]:y
# Create VLAN 10 and assign the ports connecting to the data monitoring devices to the VLAN.
[Device] vlan 10
[Device-vlan10] port GigabitEthernet1/0/4 to GigabitEthernet1/0/5
[Device-vlan10] quit
Figure 93 Network diagram (source device Device A, intermediate device Device B, and destination device Device C are connected through trunk ports in remote probe VLAN 2; the Marketing Department connects to Device A, and the server connects to Device C)
Procedure
1. Configure Device C (the destination device):
# Configure GigabitEthernet 1/0/1 as a trunk port, and assign the port to VLAN 2.
<DeviceC> system-view
[DeviceC] interface gigabitethernet 1/0/1
[DeviceC-GigabitEthernet1/0/1] port link-type trunk
[DeviceC-GigabitEthernet1/0/1] port trunk permit vlan 2
[DeviceC-GigabitEthernet1/0/1] quit
# Create a remote destination group.
[DeviceC] mirroring-group 2 remote-destination
# Create VLAN 2.
[DeviceC] vlan 2
# Disable MAC address learning for VLAN 2.
[DeviceC-vlan2] undo mac-address mac-learning enable
[DeviceC-vlan2] quit
# Configure VLAN 2 as the remote probe VLAN for the mirroring group.
[DeviceC] mirroring-group 2 remote-probe vlan 2
# Configure GigabitEthernet 1/0/2 as the monitor port for the mirroring group.
[DeviceC] interface gigabitethernet 1/0/2
[DeviceC-GigabitEthernet1/0/2] mirroring-group 2 monitor-port
# Disable the spanning tree feature on GigabitEthernet 1/0/2.
[DeviceC-GigabitEthernet1/0/2] undo stp enable
# Assign GigabitEthernet 1/0/2 to VLAN 2.
[DeviceC-GigabitEthernet1/0/2] port access vlan 2
[DeviceC-GigabitEthernet1/0/2] quit
2. Configure Device B (the intermediate device):
# Create VLAN 2.
<DeviceB> system-view
[DeviceB] vlan 2
# Disable MAC address learning for VLAN 2.
[DeviceB-vlan2] undo mac-address mac-learning enable
[DeviceB-vlan2] quit
# Configure GigabitEthernet 1/0/1 as a trunk port, and assign the port to VLAN 2.
[DeviceB] interface gigabitethernet 1/0/1
[DeviceB-GigabitEthernet1/0/1] port link-type trunk
[DeviceB-GigabitEthernet1/0/1] port trunk permit vlan 2
[DeviceB-GigabitEthernet1/0/1] quit
# Configure GigabitEthernet 1/0/2 as a trunk port, and assign the port to VLAN 2.
[DeviceB] interface gigabitethernet 1/0/2
[DeviceB-GigabitEthernet1/0/2] port link-type trunk
[DeviceB-GigabitEthernet1/0/2] port trunk permit vlan 2
[DeviceB-GigabitEthernet1/0/2] quit
3. Configure Device A (the source device):
# Create a remote source group.
<DeviceA> system-view
[DeviceA] mirroring-group 1 remote-source
# Create VLAN 2.
[DeviceA] vlan 2
# Disable MAC address learning for VLAN 2.
[DeviceA-vlan2] undo mac-address mac-learning enable
[DeviceA-vlan2] quit
# Configure VLAN 2 as the remote probe VLAN for the mirroring group.
[DeviceA] mirroring-group 1 remote-probe vlan 2
# Configure GigabitEthernet 1/0/1 as a source port for the mirroring group.
[DeviceA] mirroring-group 1 mirroring-port gigabitethernet 1/0/1 both
# Configure GigabitEthernet 1/0/3 as the reflector port for the mirroring group.
[DeviceA] mirroring-group 1 reflector-port gigabitethernet 1/0/3
This operation may delete all settings made on the interface. Continue? [Y/N]: y
# Configure GigabitEthernet 1/0/2 as a trunk port, and assign the port to VLAN 2.
[DeviceA] interface gigabitethernet 1/0/2
[DeviceA-GigabitEthernet1/0/2] port link-type trunk
[DeviceA-GigabitEthernet1/0/2] port trunk permit vlan 2
[DeviceA-GigabitEthernet1/0/2] quit
Example: Configuring Layer 2 remote port mirroring (RSPAN
with egress port)
Network configuration
On the Layer 2 network shown in Figure 94, configure Layer 2 remote port mirroring to enable the
server to monitor the bidirectional traffic of the Marketing Department.
Figure 94 Network diagram (source device Device A, intermediate device Device B, and destination device Device C are connected through trunk ports in remote probe VLAN 2; the Marketing Department connects to Device A, and the server connects to Device C)
Procedure
1. Configure Device C (the destination device):
# Configure GigabitEthernet 1/0/1 as a trunk port, and assign the port to VLAN 2.
<DeviceC> system-view
[DeviceC] interface gigabitethernet 1/0/1
[DeviceC-GigabitEthernet1/0/1] port link-type trunk
[DeviceC-GigabitEthernet1/0/1] port trunk permit vlan 2
[DeviceC-GigabitEthernet1/0/1] quit
# Create a remote destination group.
[DeviceC] mirroring-group 2 remote-destination
# Create VLAN 2.
[DeviceC] vlan 2
# Disable MAC address learning for VLAN 2.
[DeviceC-vlan2] undo mac-address mac-learning enable
[DeviceC-vlan2] quit
# Configure VLAN 2 as the remote probe VLAN for the mirroring group.
[DeviceC] mirroring-group 2 remote-probe vlan 2
# Configure GigabitEthernet 1/0/2 as the monitor port for the mirroring group.
[DeviceC] interface gigabitethernet 1/0/2
[DeviceC-GigabitEthernet1/0/2] mirroring-group 2 monitor-port
# Disable the spanning tree feature on GigabitEthernet 1/0/2.
[DeviceC-GigabitEthernet1/0/2] undo stp enable
# Assign GigabitEthernet 1/0/2 to VLAN 2 as an access port.
[DeviceC-GigabitEthernet1/0/2] port access vlan 2
[DeviceC-GigabitEthernet1/0/2] quit
2. Configure Device B (the intermediate device):
# Create VLAN 2.
<DeviceB> system-view
[DeviceB] vlan 2
# Disable MAC address learning for VLAN 2.
[DeviceB-vlan2] undo mac-address mac-learning enable
[DeviceB-vlan2] quit
# Configure GigabitEthernet 1/0/1 as a trunk port, and assign the port to VLAN 2.
[DeviceB] interface gigabitethernet 1/0/1
[DeviceB-GigabitEthernet1/0/1] port link-type trunk
[DeviceB-GigabitEthernet1/0/1] port trunk permit vlan 2
[DeviceB-GigabitEthernet1/0/1] quit
# Configure GigabitEthernet 1/0/2 as a trunk port, and assign the port to VLAN 2.
[DeviceB] interface gigabitethernet 1/0/2
[DeviceB-GigabitEthernet1/0/2] port link-type trunk
[DeviceB-GigabitEthernet1/0/2] port trunk permit vlan 2
[DeviceB-GigabitEthernet1/0/2] quit
3. Configure Device A (the source device):
# Create a remote source group.
<DeviceA> system-view
[DeviceA] mirroring-group 1 remote-source
# Create VLAN 2.
[DeviceA] vlan 2
# Disable MAC address learning for VLAN 2.
[DeviceA-vlan2] undo mac-address mac-learning enable
[DeviceA-vlan2] quit
# Configure VLAN 2 as the remote probe VLAN of the mirroring group.
[DeviceA] mirroring-group 1 remote-probe vlan 2
# Configure GigabitEthernet 1/0/1 as a source port for the mirroring group.
[DeviceA] mirroring-group 1 mirroring-port gigabitethernet 1/0/1 both
# Configure GigabitEthernet 1/0/2 as the egress port for the mirroring group.
[DeviceA] mirroring-group 1 monitor-egress gigabitethernet 1/0/2
# Configure GigabitEthernet 1/0/2 as a trunk port, and assign the port to VLAN 2.
[DeviceA] interface gigabitethernet 1/0/2
[DeviceA-GigabitEthernet1/0/2] port link-type trunk
[DeviceA-GigabitEthernet1/0/2] port trunk permit vlan 2
# Disable the spanning tree feature on the port.
[DeviceA-GigabitEthernet1/0/2] undo stp enable
[DeviceA-GigabitEthernet1/0/2] quit
# Verify the mirroring group configuration on Device A.
[DeviceA] display mirroring-group all
Mirroring group 1:
Type: Remote source
Status: Active
Mirroring port:
GigabitEthernet1/0/1 Both
Monitor egress port: GigabitEthernet1/0/2
Remote probe VLAN: 2
Configuring flow mirroring
About flow mirroring
Flow mirroring copies packets matching a traffic class to a destination for packet analysis and monitoring. It is implemented through QoS.
To implement flow mirroring through QoS, perform the following tasks:
• Define traffic classes and configure match criteria to classify packets to be mirrored. Flow
mirroring allows you to flexibly classify packets to be analyzed by defining match criteria.
• Configure traffic behaviors to mirror the matching packets to the specified destination.
You can configure an action to mirror the matching packets to one of the following destinations:
• Interface—The matching packets are copied to an interface and then forwarded to a data
monitoring device for analysis.
• CPU—The matching packets are copied to the CPU of an IRF member device. The CPU
analyzes the packets or delivers them to upper layers.
For more information about QoS policies, traffic classes, and traffic behaviors, see ACL and QoS
Configuration Guide.
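Conceptually, the class-plus-behavior pipeline works like the following Python sketch (all names here are hypothetical; on a real device, classification and mirroring happen in hardware):

```python
# Conceptual sketch of QoS-based flow mirroring: packets matching a traffic
# class trigger a behavior that copies them, while the original is forwarded.
class QosPolicy:
    def __init__(self):
        self.bindings = []                    # (match_fn, behavior_fn) pairs

    def classifier_behavior(self, match_fn, behavior_fn):
        self.bindings.append((match_fn, behavior_fn))

    def process(self, pkt):
        for match, behavior in self.bindings:
            if match(pkt):
                behavior(pkt)                 # e.g., mirror a copy
        return pkt                            # original packet still forwarded

mirrored = []
policy = QosPolicy()
policy.classifier_behavior(
    match_fn=lambda p: p["src"].startswith("192.168.2."),   # the "class"
    behavior_fn=lambda p: mirrored.append(dict(p)),         # the "behavior"
)
policy.process({"src": "192.168.2.10", "dst": "8.8.8.8"})
policy.process({"src": "10.0.0.1", "dst": "8.8.8.8"})
print(len(mirrored))  # 1 — only the matching packet was copied
```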
Figure 95 Flow mirroring SPAN (packets from Host A are copied to the flow mirroring destination interface, which connects to the monitoring device)
To implement RSPAN, forward the mirrored packet to a remote Layer 2 interface based on the VLAN
of the mirrored packet or through QoS traffic redirecting.
Figure 96 ERSPAN in encapsulation parameter mode (packets from Host A are mirrored from the source device across the IP network to the monitoring device through the flow mirroring destination interface)
traffic classifier classifier-name [ operator { and | or } ]
3. Configure match criteria.
if-match match-criteria
By default, no match criterion is configured in a traffic class.
4. (Optional.) Display traffic class information.
display traffic classifier
This command is available in any view.
Applying a QoS policy
Applying a QoS policy to an interface
Restrictions and guidelines
You can apply a QoS policy to an interface to mirror the traffic of the interface.
A policy can be applied to multiple interfaces.
A QoS policy cannot be applied in the outbound direction of an interface. In the inbound direction of
an interface, only one QoS policy can be applied.
The device does not support mirroring outbound traffic of an aggregate interface.
Procedure
1. Enter system view.
system-view
2. Enter interface view.
interface interface-type interface-number
3. Apply a policy to the interface.
qos apply policy policy-name inbound
4. (Optional.) Display the QoS policy applied to the interface.
display qos policy interface
This command is available in any view.
Procedure
1. Enter system view.
system-view
2. Apply a QoS policy globally.
qos apply policy policy-name global inbound
3. (Optional.) Display global QoS policies.
display qos policy global
This command is available in any view.
Network diagram (the device connects to the Internet through GigabitEthernet 1/0/1; GigabitEthernet 1/0/2 through GigabitEthernet 1/0/4 connect to the departments and the data monitoring server)
Procedure
# Create working hour range work, in which working hours are from 8:00 to 18:00 on weekdays.
<Device> system-view
[Device] time-range work 8:00 to 18:00 working-day
# Create IPv4 advanced ACL 3000 to allow packets from the Technical Department to access the
Internet and the Marketing Department during working hours.
[Device] acl advanced 3000
[Device-acl-ipv4-adv-3000] rule permit tcp source 192.168.2.0 0.0.0.255 destination-port
eq www
[Device-acl-ipv4-adv-3000] rule permit ip source 192.168.2.0 0.0.0.255 destination
192.168.1.0 0.0.0.255 time-range work
[Device-acl-ipv4-adv-3000] quit
# Create traffic class tech_c, and configure the match criterion as ACL 3000.
[Device] traffic classifier tech_c
[Device-classifier-tech_c] if-match acl 3000
[Device-classifier-tech_c] quit
# Create traffic behavior tech_b, and configure the action of mirroring traffic to GigabitEthernet 1/0/3.
[Device] traffic behavior tech_b
[Device-behavior-tech_b] mirror-to interface gigabitethernet 1/0/3
[Device-behavior-tech_b] quit
# Create QoS policy tech_p, and associate traffic class tech_c with traffic behavior tech_b in the
QoS policy.
[Device] qos policy tech_p
[Device-qospolicy-tech_p] classifier tech_c behavior tech_b
[Device-qospolicy-tech_p] quit
Configuring sFlow
About sFlow
sFlow is a traffic monitoring technology.
As shown in Figure 98, the sFlow system involves an sFlow agent embedded in a device and a
remote sFlow collector. The sFlow agent collects interface counter information and packet
information and encapsulates the sampled information in sFlow packets. When the sFlow packet
buffer is full, or the aging timer (fixed to 1 second) expires, the sFlow agent performs the following
actions:
• Encapsulates the sFlow packets in the UDP datagrams.
• Sends the UDP datagrams to the specified sFlow collector.
The sFlow collector analyzes the information and displays the results. One sFlow collector can
monitor multiple sFlow agents.
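The two flush conditions (buffer full, or the fixed 1-second aging timer expiring) can be modeled with a short Python sketch (hypothetical names and buffer size; not the device implementation):

```python
import time

# Conceptual model of the sFlow agent's send logic: buffered records are
# emitted as one UDP datagram when the buffer fills OR the aging timer expires.
class SflowAgent:
    def __init__(self, send, buffer_limit=1400, timeout=1.0):
        self.send = send                  # callback that emits one datagram
        self.buffer_limit = buffer_limit  # bytes per datagram (assumed value)
        self.timeout = timeout            # aging timer, fixed to 1 second
        self.buf = bytearray()
        self.last_flush = time.monotonic()

    def add_record(self, record):
        self.buf += record
        if len(self.buf) >= self.buffer_limit:
            self.flush("buffer full")

    def tick(self):                       # called periodically by a timer
        if self.buf and time.monotonic() - self.last_flush >= self.timeout:
            self.flush("timer expired")

    def flush(self, reason):
        self.send(bytes(self.buf), reason)
        self.buf.clear()
        self.last_flush = time.monotonic()

sent = []
agent = SflowAgent(send=lambda data, reason: sent.append(reason),
                   buffer_limit=10, timeout=0.0)
agent.add_record(b"0123456789AB")   # 12 bytes >= limit -> immediate flush
agent.add_record(b"x")
agent.tick()                        # timer (0 s here for demo) has expired
print(sent)  # ['buffer full', 'timer expired']
```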
sFlow provides the following sampling mechanisms:
• Flow sampling—Obtains packet information.
• Counter sampling—Obtains interface counter information.
sFlow can use flow sampling and counter sampling at the same time.
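Random flow sampling at an average rate of 1 out of N packets can be sketched as follows (a conceptual Python model; the class name and the 128-byte header copy length are assumptions, not the device implementation):

```python
import random

# Conceptual sketch of sFlow flow sampling: each packet is sampled
# independently with probability 1/rate, so traffic bursts cannot
# systematically evade sampling, and only packet headers are copied.
class FlowSampler:
    def __init__(self, rate, seed=None):
        self.rate = rate                     # average: 1 sample per `rate` packets
        self.rng = random.Random(seed)
        self.samples = []

    def packet(self, pkt):
        if self.rng.randrange(self.rate) == 0:
            self.samples.append(pkt[:128])   # copy only leading bytes (header)

sampler = FlowSampler(rate=4, seed=42)
for _ in range(10000):
    sampler.packet(b"x" * 1500)
# Expect roughly 10000/4 = 2500 samples.
print(len(sampler.samples))
```

Counter sampling, by contrast, simply reads interface counters at a fixed interval and needs no per-packet decision.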
Figure 98 sFlow system (the sFlow agent in the device performs flow sampling and counter sampling, encapsulates the results in sFlow datagrams with Ethernet, IP, and UDP headers, and sends them to the sFlow collector)
Procedure
1. Enter system view.
system-view
2. Configure an IP address for the sFlow agent.
sflow agent { ip ipv4-address | ipv6 ipv6-address }
By default, no IP address is configured for the sFlow agent.
3. Configure the sFlow collector information.
sflow collector collector-id [ vpn-instance vpn-instance-name ] { ip
ipv4-address | ipv6 ipv6-address } [ port port-number | datagram-size
size | time-out seconds | description string ] *
By default, no sFlow collector information is configured.
4. Specify the source IP address of sFlow packets.
sflow source { ip ipv4-address | ipv6 ipv6-address } *
By default, the source IP address is determined by routing.
By default, no sFlow instance or sFlow collector is specified for flow sampling.
Task Command
Display sFlow configuration.
  display sflow
Figure 99 Network diagram (Host A at 1.1.1.2/16 connects to GigabitEthernet 1/0/1 in VLAN-interface 2 at 1.1.1.1/16; the server at 2.2.2.2/16 connects to GigabitEthernet 1/0/2 in VLAN-interface 3 at 2.2.2.1/16; the sFlow collector at 3.3.3.2/16 connects to GigabitEthernet 1/0/3 in VLAN-interface 4 at 3.3.3.1/16)
Procedure
1. Configure VLANs, assign Ethernet interfaces to VLANs, and configure IP addresses and
subnet masks for VLAN interfaces, as shown in Figure 99. (Details not shown.)
2. Configure the sFlow agent and configure information about the sFlow collector:
# Configure the IP address for the sFlow agent.
<Device> system-view
[Device] sflow agent ip 3.3.3.1
# Configure information about the sFlow collector. Specify the sFlow collector ID as 1, IP
address as 3.3.3.2, port number as 6343 (default), and description as netserver.
[Device] sflow collector 1 ip 3.3.3.2 description netserver
3. Configure counter sampling:
# Enable counter sampling and set the counter sampling interval to 120 seconds on
GigabitEthernet 1/0/1.
[Device] interface gigabitethernet 1/0/1
[Device-GigabitEthernet1/0/1] sflow counter interval 120
# Specify sFlow collector 1 for counter sampling.
[Device-GigabitEthernet1/0/1] sflow counter collector 1
4. Configure flow sampling:
# Enable flow sampling. Set the sampling mode to random and the sampling rate to 32767.
[Device-GigabitEthernet1/0/1] sflow sampling-mode random
[Device-GigabitEthernet1/0/1] sflow sampling-rate 32767
# Specify sFlow collector 1 for flow sampling.
[Device-GigabitEthernet1/0/1] sflow flow collector 1
ID IP Port Aging Size VPN-instance Description
1 3.3.3.2 6343 N/A 1400 netserver
Port counter sampling information:
Interface Instance CID Interval(s)
GE1/0/1 1 1 120
Port flow sampling information:
Interface Instance FID MaxHLen Rate Mode Status
GE1/0/1 1 1 128 32767 Random Active
Troubleshooting sFlow
The remote sFlow collector cannot receive sFlow packets
Symptom
The remote sFlow collector cannot receive sFlow packets.
Analysis
The possible reasons include:
• The sFlow collector is not specified.
• sFlow is not configured on the interface.
• The IP address of the sFlow collector specified on the sFlow agent is different from that of the
remote sFlow collector.
• No IP address is configured for the Layer 3 interface that sends sFlow packets.
• An IP address is configured for the Layer 3 interface that sends sFlow packets. However, the
UDP datagrams with this source IP address cannot reach the sFlow collector.
• The physical link between the device and the sFlow collector fails.
• The sFlow collector is bound to a non-existent VPN.
• The length of an sFlow packet is less than the sum of the following two values:
The length of the sFlow packet header.
The number of bytes that flow sampling can copy per packet.
Solution
To resolve the problem:
1. Use the display sflow command to verify that sFlow is correctly configured.
2. Verify that a correct IP address is configured for the device to communicate with the sFlow
collector.
3. Verify that the physical link between the device and the sFlow collector is up.
4. Verify that the VPN bound to the sFlow collector already exists.
5. Verify that the length of an sFlow packet is greater than the sum of the following two values:
The length of the sFlow packet header.
The number of bytes (as a best practice, use the default setting) that flow sampling can copy
per packet.
Configuring the information center
About the information center
The information center on the device receives logs generated by source modules and outputs logs to
different destinations according to log output rules. Based on the logs, you can monitor device
performance and troubleshoot network problems.
Figure 100 Information center diagram
(Diagram: logs flow from source modules through the information center to the output destinations.)
Log types
Logs are classified into the following types:
• Standard system logs—Record common system information. Unless otherwise specified, the
term "logs" in this document refers to standard system logs.
• Diagnostic logs—Record debugging messages.
• Security logs—Record security information, such as authentication and authorization
information.
• Hidden logs—Record log information not displayed on the terminal, such as input commands.
• Trace logs—Record system tracing and debugging messages, which can be viewed only after
the devkit package is installed.
Log levels
Logs are classified into eight severity levels from 0 through 7 in descending order. The information
center outputs logs with a severity level that is higher than or equal to the specified level. For
example, if you specify a severity level of 6 (informational), logs that have a severity level from 0 to 6
are output.
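The severity comparison described above can be sketched as follows (illustrative Python, not device behavior; a lower numeric value means a higher severity):

```python
# A log is output when its severity value is numerically less than or equal
# to the configured level (lower value = higher severity).
def should_output(log_level: int, configured_level: int) -> bool:
    return log_level <= configured_level

# With a configured level of 6 (informational), levels 0-6 are output
# and level 7 (debugging) is not.
print([lvl for lvl in range(8) if should_output(lvl, 6)])  # [0, 1, 2, 3, 4, 5, 6]
```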
Table 19 Log levels
Severity value Level Description
6 Informational Informational message. For example, a command or a ping operation is executed.
7 Debugging Debugging message.
Log destinations
The system outputs logs to the following destinations: console, monitor terminal, log buffer, log host,
and log file. Log output destinations are independent and you can configure them after enabling the
information center. One log can be sent to multiple destinations.
Default output rules for hidden logs
Hidden logs can be output to the log host, the log buffer, and the log file. Table 23 shows the default
output rules for hidden logs.
Table 23 Default output rules for hidden logs
Output destination Format
Log host:
• Non-customized format:
<PRI>Timestamp Sysname %%vvModule/Level/Mnemonic: Location; Content
Example:
<190>Nov 24 16:22:21 2019 Sysname %%10SHELL/5/SHELL_LOGIN: -DevIP=1.1.1.1; VTY logged in from 192.168.1.26
• CMCC format:
<PRI>Timestamp Sysname %vvModule/Level/Mnemonic: location; Content
Example:
<189>Oct 9 14:59:04 2019 Sysname %10SHELL/5/SHELL_LOGIN: VTY logged in from 192.168.1.21
• SGCC format:
<PRI> Timestamp Sysname Devtype Content
Example:
<189> 2019-03-18 15:39:45 Sysname FW 0 2 admin 10.1.1.1
• Unicom format:
<PRI>Timestamp Hostip vvModule/Level/Serial_number: Content
Example:
<189>Oct 13 16:48:08 2019 10.1.1.1 10SHELL/5/210231a64jx073000020: VTY logged in from 192.168.1.21
Field Description
Prefix (information type): A log sent to a destination other than the log host has an identifier in front of the timestamp:
• A percent sign (%) indicates a log with a level equal to or higher than informational.
• An asterisk (*) indicates a debugging log or a trace log.
• A caret (^) indicates a diagnostic log.
PRI (priority): A log destined for the log host has a priority identifier in front of the timestamp. The priority is calculated by using this formula: facility*8+level, where:
• facility is the facility name. Facility names local0 through local7 correspond to values 16 through 23. The facility name can be configured using the info-center loghost command. It is used to identify log sources on the log host, and to query and filter the logs from specific log sources.
• level is in the range of 0 to 7. See Table 19 for more information about severity levels.
Timestamp: Records the time when the log was generated. Logs sent to the log host and those sent to the other destinations have different timestamp precisions, and their timestamp formats are configured with different commands. For more information, see Table 27 and Table 28.
Sysname (host name or host IP address): The host name or IP address of the device that generated the log. You can use the sysname command to modify the name of the device.
Devtype: Device type. This field exists only in logs that are sent to the log host in SGCC format.
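The PRI calculation above can be checked with a short sketch (the function name is hypothetical, not a device or library API):

```python
def syslog_pri(facility: int, level: int) -> int:
    """Compute the syslog priority: facility*8 + level."""
    if not 0 <= level <= 7:
        raise ValueError("level must be in the range 0 to 7")
    return facility * 8 + level

# local4 corresponds to facility value 20 (local0 = 16), so a level-5 log
# from local4 carries <165>.
print(syslog_pri(20, 5))  # 165
# The <189> examples above decompose into facility 23 (local7) and level 5.
print(divmod(189, 8))     # (23, 5)
```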
Timestamp parameters Description
date: Current date and time.
• For logs output to a log host, the timestamp can be in the format of MMM DD hh:mm:ss YYYY (accurate to seconds) or MMM DD hh:mm:ss.ms YYYY (accurate to milliseconds).
• For logs output to other destinations, the timestamp is in the format of MMM DD hh:mm:ss:ms YYYY.
All logs support this parameter.
Example:
%May 30 05:36:29:579 2019 Sysname FTPD/5/FTPD_LOGIN: User ftp (192.168.1.23) has logged in successfully.
May 30 05:36:29:579 2019 is a timestamp in the date format in logs sent to the console.
iso: Timestamp format stipulated in ISO 8601, accurate to seconds (default) or milliseconds.
Only logs that are sent to a log host support this parameter.
Example:
<189>2019-05-30T06:42:44 Sysname %%10FTPD/5/FTPD_LOGIN: User ftp (192.168.1.23) has logged in successfully.
2019-05-30T06:42:44 is a timestamp in the iso format accurate to seconds. A timestamp accurate to milliseconds is like 2019-05-30T06:42:44.708.
none: No timestamp is included.
All logs support this parameter.
Example:
% Sysname FTPD/5/FTPD_LOGIN: User ftp (192.168.1.23) has logged in successfully.
no-year-date: Current date and time without year or millisecond information, in the format of MMM DD hh:mm:ss.
Only logs that are sent to a log host support this parameter.
Example:
<189>May 30 06:44:22 Sysname %%10FTPD/5/FTPD_LOGIN: User ftp (192.168.1.23) has logged in successfully.
May 30 06:44:22 is a timestamp in the no-year-date format.
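The iso timestamp shapes shown above match what Python's standard isoformat() produces, which can serve as a quick cross-check (illustrative only):

```python
from datetime import datetime

# Second-accurate ISO 8601 timestamp, as in <189>2019-05-30T06:42:44 ...
ts = datetime(2019, 5, 30, 6, 42, 44)
print(ts.isoformat())  # 2019-05-30T06:42:44

# Millisecond-accurate variant, as in 2019-05-30T06:42:44.708
ts_ms = datetime(2019, 5, 30, 6, 42, 44, 708000)
print(ts_ms.isoformat(timespec="milliseconds"))  # 2019-05-30T06:42:44.708
```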
FIPS compliance
The device supports the FIPS mode that complies with NIST FIPS 140-2 requirements. Support for
features, commands, and parameters might differ in FIPS mode and non-FIPS mode. For more
information about FIPS mode, see Security Configuration Guide.
Outputting logs to the console
Outputting logs to the monitor terminal
Outputting logs to log hosts
Outputting logs to the log buffer
Saving logs to the log file
3. (Optional.) Setting the minimum storage period for logs
4. (Optional.) Enabling synchronous information output
5. (Optional.) Configuring log suppression
Choose the following tasks as needed:
Enabling duplicate log suppression
Configuring log suppression for a module
Disabling an interface from generating link up or link down logs
6. (Optional.) Enabling SNMP notifications for system logs
Configuring log suppression for a module
3. Saving diagnostic logs to the diagnostic log file
terminal monitor
By default, log output to the console is enabled.
6. Enable output of debugging messages to the console.
terminal debugging
By default, output of debugging messages to the console is disabled.
This command enables output of debugging-level log messages to the console.
7. Set the lowest severity level of logs that can be output to the console.
terminal logging level severity
The default setting is 6 (informational).
Outputting logs to log hosts
Restrictions and guidelines
The device supports the following methods (in descending order of priority) for outputting logs of a
module to designated log hosts:
• Fast log output.
For information about the modules that support fast log output and how to configure fast log
output, see "Configuring fast log output."
• Flow log.
For information about the modules that support flow log output and how to configure flow log
output, see "Configuring flow log."
• Information center.
If you configure multiple log output methods for a module, only the method with the highest priority
takes effect.
Procedure
1. Enter system view.
system-view
2. (Optional.) Configure a log output filter or a log output rule. Choose one option as needed:
Configure a log output filter.
info-center filter filter-name { module-name | default } { deny |
level severity }
You can create multiple log output filters. When specifying a log host, you can apply a log
output filter to the log host to control log output.
Configure a log output rule for the log host output destination.
info-center source { module-name | default } loghost { deny | level
severity }
For information about the default log output rules for the log host output destination, see
"Default output rules for logs."
The system chooses the settings to control log output to a log host in the following order:
a. Log output filter applied to the log host.
b. Log output rules configured for the log host output destination by using the info-center
source command.
c. Default log output rules (see "Default output rules for logs").
3. (Optional.) Specify a source IP address for logs sent to log hosts.
info-center loghost source interface-type interface-number
By default, the source IP address of logs sent to log hosts is the primary IP address of their
outgoing interfaces.
4. (Optional.) Specify the format in which logs are output to log hosts.
info-center format { cmcc | sgcc | unicom }
By default, logs are output to log hosts in non-customized format.
5. (Optional.) Configure the timestamp format.
info-center timestamp loghost { date [ with-milliseconds ] | iso
[ with-milliseconds | with-timezone ] * | no-year-date | none }
The default timestamp format is date.
6. Specify a log host and configure related parameters.
info-center loghost [ vpn-instance vpn-instance-name ] { hostname |
ipv4-address | ipv6 ipv6-address } [ port port-number ] [ dscp
dscp-value ] [ facility local-number ] [ filter filter-name ]
By default, no log hosts or related parameters are specified.
The value for the port-number argument must be the same as the value configured on the
log host. Otherwise, the log host cannot receive logs.
TIP:
Clean up the storage space of the device regularly to ensure sufficient storage space for the log file
feature.
Procedure
1. Enter system view.
system-view
2. (Optional.) Configure an output rule for sending logs to the log file.
info-center source { module-name | default } logfile { deny | level
severity }
For information about the default output rules, see "Default output rules for logs."
3. Enable the log file feature.
info-center logfile enable
By default, the log file feature is enabled.
4. (Optional.) Enable log file overwrite-protection.
info-center logfile overwrite-protection [ all-port-powerdown ]
By default, log file overwrite-protection is disabled.
Log file overwrite-protection is supported only in FIPS mode.
5. (Optional.) Set the maximum log file size.
info-center logfile size-quota size
The default maximum log file size is 10 MB.
6. (Optional.) Set the log file usage alarm threshold.
info-center logfile alarm-threshold usage
The default alarm threshold for log file usage ratio is 80%. When the usage ratio of the log file
reaches 80%, the system outputs a message to inform the user.
7. (Optional.) Specify the log file directory.
info-center logfile directory dir-name
The default log file directory is flash:/logfile.
The device uses the default log file directory after an IRF reboot or a master/subordinate
switchover.
8. Save logs in the log file buffer to the log file. Choose one option as needed:
Configure the automatic log file saving interval.
info-center logfile frequency freq-sec
The default saving interval is 86400 seconds.
Manually save logs in the log file buffer to the log file.
logfile save
This command is available in any view.
By default, the log minimum storage period is not set.
Configuring log suppression for a module
About this task
This feature suppresses output of logs. You can use this feature to filter out the logs that you are not
concerned with.
Perform this task to configure a log suppression rule to suppress output of all logs or logs with a
specific mnemonic value for a module.
Procedure
1. Enter system view.
system-view
2. Configure a log suppression rule for a module.
info-center logging suppress module module-name mnemonic { all |
mnemonic-value }
By default, the device does not suppress output of any logs from any modules.
To view the traps in the log trap buffer, access the MIB corresponding to the log trap buffer.
Procedure
1. Enter system view.
system-view
2. Enable SNMP notifications for system logs.
snmp-agent trap enable syslog
By default, the device does not send SNMP notifications for system logs.
3. Set the maximum number of traps that can be stored in the log trap buffer.
info-center syslog trap buffersize buffersize
By default, the log trap buffer can store a maximum of 1024 traps.
By default, the maximum security log file size is 10 MB.
5. (Optional.) Set the alarm threshold of the security log file usage.
info-center security-logfile alarm-threshold usage
By default, the alarm threshold of the security log file usage ratio is 80%. When the usage of the
security log file reaches 80%, the system will send a message.
4. (Optional.) Specify the diagnostic log file directory.
info-center diagnostic-logfile directory dir-name
The default diagnostic log file directory is flash:/diagfile.
The device uses the default diagnostic log file directory after an IRF reboot or a
master/subordinate switchover.
5. Save diagnostic logs in the diagnostic log file buffer to the diagnostic log file. Choose one option
as needed:
Configure the automatic diagnostic log file saving interval.
info-center diagnostic-logfile frequency freq-sec
The default saving interval is 86400 seconds.
Manually save diagnostic logs to the diagnostic log file.
diagnostic-logfile save
This command is available in any view.
Task Command
Display the diagnostic log file configuration. display diagnostic-logfile summary
Task Command
Display the content of the log file buffer. display logfile buffer
(Network diagram: a PC connected to the device.)
Procedure
# Enable the information center.
<Device> system-view
[Device] info-center enable
# Disable log output to the console.
[Device] info-center source default console deny
To avoid output of unnecessary information, disable all modules from outputting log information to
the specified destination (console in this example) before you configure the output rule.
# Configure an output rule to output to the console FTP logs that have a minimum severity level of
warning.
[Device] info-center source ftp console level warning
[Device] quit
Now, if the FTP module generates logs, the information center automatically sends the logs to the
console, and the console displays the logs.
Example: Outputting logs to a UNIX log host
Network configuration
Configure the device to output to the UNIX log host FTP logs that have a minimum severity level of
informational.
Figure 102 Network diagram
(Device 1.1.0.1/16 connected over the Internet to the log host 1.2.0.1/16.)
Procedure
1. Make sure the device and the log host can reach each other. (Details not shown.)
2. Configure the device:
# Enable the information center.
<Device> system-view
[Device] info-center enable
# Specify log host 1.2.0.1/16 with local4 as the logging facility.
[Device] info-center loghost 1.2.0.1 facility local4
# Disable log output to the log host.
[Device] info-center source default loghost deny
To avoid output of unnecessary information, disable all modules from outputting logs to the
specified destination (loghost in this example) before you configure an output rule.
# Configure an output rule to output to the log host FTP logs that have a minimum severity level
of informational.
[Device] info-center source ftp loghost level informational
3. Configure the log host:
The log host configuration procedure varies by the vendor of the UNIX operating system. The
following shows an example:
a. Log in to the log host as a root user.
b. Create a subdirectory named Device in directory /var/log/, and then create file info.log in
the Device directory to save logs from the device.
# mkdir /var/log/Device
# touch /var/log/Device/info.log
c. Edit file syslog.conf in directory /etc/ and add the following contents:
# Device configuration messages
local4.info /var/log/Device/info.log
In this configuration, local4 is the name of the logging facility that the log host uses to
receive logs. The value info indicates the informational severity level. The UNIX system
records the log information that has a minimum severity level of informational to file
/var/log/Device/info.log.
NOTE:
Follow these guidelines while editing file /etc/syslog.conf:
• Comments must be on a separate line and must begin with a pound sign (#).
• No redundant spaces are allowed after the file name.
• The logging facility name and the severity level specified in the /etc/syslog.conf file must
be identical to those configured on the device by using the info-center loghost and
info-center source commands. Otherwise, the log information might not be output
to the log host correctly.
d. Display the process ID of syslogd, kill the syslogd process, and then restart syslogd by
using the -r option to validate the configuration.
# ps -ae | grep syslogd
147
# kill -HUP 147
# syslogd -r &
Now, the device can output FTP logs to the log host, which stores the logs to the specified file.
(Network diagram: Device 1.1.0.1/16 connected over the Internet to the log host 1.2.0.1/16.)
Procedure
1. Make sure the device and the log host can reach each other. (Details not shown.)
2. Configure the device:
# Enable the information center.
<Device> system-view
[Device] info-center enable
# Specify log host 1.2.0.1/16 with local5 as the logging facility.
[Device] info-center loghost 1.2.0.1 facility local5
# Disable log output to the log host.
[Device] info-center source default loghost deny
To avoid outputting unnecessary information, disable all modules from outputting log
information to the specified destination (loghost in this example) before you configure an
output rule.
# Configure an output rule to output to the log host FTP logs that have a minimum severity
level of informational.
[Device] info-center source ftp loghost level informational
3. Configure the log host:
The log host configuration procedure varies by the vendor of the Linux operating system. The
following shows an example:
a. Log in to the log host as a root user.
b. Create a subdirectory named Device in directory /var/log/, and create file info.log in the
Device directory to save logs from the device.
# mkdir /var/log/Device
# touch /var/log/Device/info.log
c. Edit file syslog.conf in directory /etc/ and add the following contents:
# Device configuration messages
local5.info /var/log/Device/info.log
In this configuration, local5 is the name of the logging facility that the log host uses to
receive logs. The value info indicates the informational severity level. The Linux system
will store the log information with a severity level equal to or higher than informational to
file /var/log/Device/info.log.
NOTE:
Follow these guidelines while editing file /etc/syslog.conf:
• Comments must be on a separate line and must begin with a pound sign (#).
• No redundant spaces are allowed after the file name.
• The logging facility name and the severity level specified in the /etc/syslog.conf file must
be identical to those configured on the device by using the info-center loghost and
info-center source commands. Otherwise, the log information might not be output
to the log host correctly.
d. Display the process ID of syslogd, kill the syslogd process, and then restart syslogd by
using the -r option to validate the configuration.
Make sure the syslogd process is started with the -r option on the Linux log host.
# ps -ae | grep syslogd
147
# kill -9 147
# syslogd -r &
Now, the device can output FTP logs to the log host, which stores the logs to the specified file.
Configuring the packet capture
About packet capture
The packet capture feature captures incoming packets on the device. The feature displays the
captured packets on the terminal in real time, and allows you to save the captured packets to a .pcap
file for future analysis. Packet capture can read both .pcap and .pcapng files.
Variables
A capture filter variable must be modified by one or more qualifiers.
The broadcast, multicast, and all protocol qualifiers cannot modify variables. The other qualifiers
must be followed by variables.
Table 30 Variable types for capture filter rules
IPv6 network segment: Represented by an IPv6 address with a prefix length. The dst net 1::/64 expression matches traffic sent to the IPv6 network 1::/64.
Nonalphanumeric symbol Alphanumeric symbol Description
! not Reverses the result of a condition. Use this operator to capture traffic that matches the opposite value of a condition. For example, to capture non-HTTP traffic, use not port 80.
&& and Joins two conditions. Use this operator to capture traffic that matches both conditions. For example, to capture non-HTTP traffic that is sent to or from 1.1.1.1, use host 1.1.1.1 and not port 80.
|| or Joins two conditions. Use this operator to capture traffic that matches either of the conditions. For example, to capture traffic that is sent to or from 1.1.1.1 or 2.2.2.2, use host 1.1.1.1 or host 2.2.2.2.
Arithmetic operators
Table 32 Arithmetic operators for capture filter rules
Nonalphanumeric symbol Description
+ Adds two values.
& Returns the result of the bitwise AND operation on two integral values in binary form.
| Returns the result of the bitwise OR operation on two integral values in binary form.
<< Performs the bitwise left shift operation on the operand to the left of the operator. The right-hand operand specifies the number of bits to shift.
>> Performs the bitwise right shift operation on the operand to the left of the operator. The right-hand operand specifies the number of bits to shift.
[] Specifies a byte offset relative to a protocol layer. This offset indicates the byte where the matching begins. You must enclose the offset value in the brackets and specify a protocol qualifier. For example, ip[6] matches the seventh byte of payload in IPv4 packets (the byte that is six bytes away from the beginning of the IPv4 payload).
Relational operators
Table 33 Relational operators for capture filter rules
Nonalphanumeric symbol Description
= Equal to. For example, ip[6]=0x1c matches an IPv4 packet if its seventh byte of payload is equal to 0x1c.
> Greater than. For example, len>100 matches a packet if its length is greater than 100 bytes.
< Less than. For example, len<100 matches a packet if its length is less than 100 bytes.
>= Greater than or equal to. For example, len>=100 matches a packet if its length is greater than or equal to 100 bytes.
<= Less than or equal to. For example, len<=100 matches a packet if its length is less than or equal to 100 bytes.
Capture filter rule expressions
Logical expression
Use this type of expression to capture packets that match the result of logical operations.
Logical expressions contain keywords and logical operators. For example:
• not port 23 and not port 22—Captures packets whose port number is neither 23 nor 22.
• port 23 or icmp—Captures packets with port number 23, and also ICMP packets.
In a logical expression, a qualifier can modify more than one variable connected by its nearest logical
operator. For example, to capture packets sourced from IPv4 address 192.168.56.1 or IPv4 network
192.168.27, use either of the following expressions:
• src 192.168.56.1 or 192.168.27.
• src 192.168.56.1 or src 192.168.27.
The expr relop expr expression
Use this type of expression to capture packets that match the result of arithmetic operations.
This expression contains keywords, arithmetic operators (expr), and relational operators (relop). For
example, len+100>=200 captures packets that are greater than or equal to 100 bytes.
The proto [ expr:size ] expression
Use this type of expression to capture packets that match the result of arithmetic operations on a
number of bytes relative to a protocol layer.
This type of expression contains the following elements:
• proto—Specifies a protocol layer.
• []—Performs arithmetic operations on a number of bytes relative to the protocol layer.
• expr—Specifies the arithmetic expression whose result is the byte offset relative to the
protocol layer. The operation begins at this offset.
• size—Specifies the number of bytes to operate on, starting at the offset. The size is set to
1 byte if you do not specify it.
For example, ip[0]&0xf !=5 captures an IP packet if the result of ANDing the first byte with 0x0f is
not 5.
To match a field, you can specify a field name for expr:size. For example,
icmp[icmptype]=0x08 captures ICMP packets that contain a value of 0x08 in the Type field.
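The arithmetic in the ip[0]&0xf example can be verified directly: the low nibble of the first IPv4 header byte is the IHL field (header length in 32-bit words), so any value other than 5 indicates a header that carries options. An illustrative check in Python:

```python
# First IPv4 header byte: high nibble = version, low nibble = IHL.
standard = 0x45   # version 4, IHL 5 (20-byte header, no options)
with_opts = 0x46  # version 4, IHL 6 (24-byte header with options)

print(standard & 0x0F)   # 5 -> not matched by ip[0]&0xf != 5
print(with_opts & 0x0F)  # 6 -> matched
```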
The vlan vlan_id expression
Use this type of expression to capture 802.1Q tagged VLAN traffic.
This type of expression contains the vlan vlan_id keywords and logical operators. The vlan_id
variable is an integer that specifies a VLAN ID. For example, vlan 1 and ip captures IPv4 packets in
VLAN 1.
To capture packets of a VLAN, set a capture filter as follows:
• To capture tagged packets that are permitted on the interface, you must use the vlan
vlan_id expression prior to any other expressions. For example, use the vlan 3 and src
192.168.1.10 and dst 192.168.1.1 expression to capture packets of VLAN 3 that are sent from
192.168.1.10 to 192.168.1.1.
• After receiving an untagged packet, the device adds a VLAN tag to the packet header. To
capture the packet, add "vlan xx" to the capture filter expression. For Layer 3 packets, the xx
represents the default VLAN ID of the outgoing interface. For Layer 2 packets, the xx
represents the default VLAN ID of the incoming interface.
Building a display filter rule
A display filter rule only identifies the packets to display. It does not affect which packets to save in a
file.
Variables
A packet field qualifier requires a variable.
Table 35 Variable types for display filter rules
Variable type Description
MAC address (6 bytes): Uses colons (:), dots (.), or hyphens (-) to separate the segments of the MAC address. For example, to display packets that contain a destination MAC address of ffff.ffff.ffff, use one of the following expressions:
• eth.dst==ff:ff:ff:ff:ff:ff
• eth.dst==ff-ff-ff-ff-ff-ff
• eth.dst==ffff.ffff.ffff
String: Character string. For example, to display HTTP packets that contain the string HTTP/1.1 for the request version field, use http.request.version=="HTTP/1.1".
Nonalphanumeric symbol Alphanumeric symbol Description
[] (No alphanumeric symbol is available.) Used with protocol qualifiers. For more information, see "The proto[…] expression."
Relational operators
Table 37 Relational operators for display filter rules
Nonalphanumeric symbol Alphanumeric symbol Description
== eq Equal to. For example, ip.src==10.0.0.5 displays packets with the source IP address 10.0.0.5.
!= ne Not equal to. For example, ip.src!=10.0.0.5 displays packets whose source IP address is not 10.0.0.5.
> gt Greater than. For example, frame.len>100 displays frames with a length greater than 100 bytes.
< lt Less than. For example, frame.len<100 displays frames with a length less than 100 bytes.
>= ge Greater than or equal to. For example, frame.len ge 0x100 displays frames with a length greater than or equal to 256 bytes.
<= le Less than or equal to. For example, frame.len le 0x100 displays frames with a length less than or equal to 256 bytes.
[n:m]—Matches a total of m bytes after an offset of n bytes from the beginning of the
specified protocol layer or field. To match only 1 byte, you can use either the [n] or the [n:1]
format. For example, eth.src[0:3]==00:00:83 matches an Ethernet frame if the first three
bytes of its source MAC address are 0x00, 0x00, and 0x83. The eth.src[2] == 83
expression matches an Ethernet frame if the third byte of its source MAC address is 0x83.
[n-m]—Matches a total of (m-n+1) bytes, starting from the (n+1)th byte relative to the
beginning of the specified protocol layer or packet field. For example, eth.src[1-2]==00:83
matches an Ethernet frame if the second and third bytes of its source MAC address are
0x00 and 0x83, respectively.
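The slice forms above can be mirrored with Python byte slices to see exactly which bytes each form selects (illustrative only; note that the display filter's [n-m] is inclusive, while Python's [n:m] is not):

```python
# Source MAC address used in the examples above.
src_mac = bytes([0x00, 0x00, 0x83, 0x1A, 0x2B, 0x3C])

# eth.src[0:3]==00:00:83 -> 3 bytes starting at offset 0
print(src_mac[0:3].hex(":"))  # 00:00:83
# eth.src[2]==83 -> the single byte at offset 2 (the third byte)
print(f"{src_mac[2]:02x}")    # 83
# eth.src[1-2]==00:83 -> bytes 2 and 3 (m-n+1 = 2 bytes)
print(src_mac[1:3].hex(":"))  # 00:83
```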
limit-captured-frames limit | limit-frame-size bytes | autostop duration
seconds ] * [ raw | { brief | verbose } ] *
Figure 104 Network diagram
(Hosts 192.168.1.10/24 and 192.168.1.11/24 in VLAN 3 connect to GE1/0/1 on the device, 192.168.1.1/24.)
Procedure
1. Install the packet capture feature.
# Display the device version information.
<Device> display version
HPE Comware Software, Version 7.1.070, Demo 01
Copyright (c) 2004-2019 Hewlett Packard Enterprise Development LP. All rights
reserved.
HPE 5520 24G 4SFP+ HI Switch (R8M25A) uptime is 0 weeks, 0 days, 5 hours, 33 minutes
Last reboot reason : Cold reboot
Boot image: flash:/boot-01.bin
Boot image version: 7.1.070, Demo 01
Compiled Oct 20 2019 16:00:00
System image: flash:/system-01.bin
System image version: 7.1.070, Demo 01
Compiled Oct 20 2019 16:00:00
...
# Prepare a packet capture feature image that is compatible with the current boot and system
images.
# Download the packet capture feature image to the device. In this example, the image is stored
on the TFTP server at 192.168.1.1.
<Device> tftp 192.168.1.1 get packet-capture-01.bin
Press CTRL+C to abort.
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 11.3M 0 11.3M 0 0 155k 0 --:--:-- 0:01:14 --:--:-- 194k
Writing file...Done.
# Install the packet capture feature image on all IRF member devices and commit the software
change. In this example, there are two IRF member devices.
<Device> install activate feature flash:/packet-capture-01.bin slot 1
Verifying the file flash:/packet-capture-01.bin on slot 1....Done.
Identifying the upgrade methods....Done.
Upgrade summary according to following table:
flash:/packet-capture-01.bin
Running Version New Version
None Demo 01
flash:/packet-capture-01.bin
Running Version New Version
None Demo 01
# Apply the QoS policy to the incoming direction of GigabitEthernet 1/0/1.
[Device] interface gigabitethernet 1/0/1
[Device-GigabitEthernet1/0/1] qos apply policy user1 inbound
[Device-GigabitEthernet1/0/1] quit
[Device] quit
3. Enable packet capture.
# Capture incoming traffic on GigabitEthernet 1/0/1. Set the maximum number of captured
packets to 10. Save the captured packets to the flash:/a.pcap file.
<Device> packet-capture interface gigabitethernet 1/0/1 capture-filter "vlan 3 and
src 192.168.1.10 or 192.168.1.11 and dst 192.168.1.1" limit-captured-frames 10 write
flash:/a.pcap
Capturing on 'GigabitEthernet1/0/1'
10
Configuring VCF fabric
About VCF fabric
Based on OpenStack Networking (Neutron), the Virtual Converged Framework (VCF) solution
provides virtual network services from Layer 2 to Layer 7 for cloud tenants. This solution breaks the
boundaries between the network, cloud management, and terminal platforms and transforms the IT
infrastructure to a converged framework to accommodate all applications. It also implements
automated topology discovery and automated deployment of underlay networks and overlay
networks to reduce the administrators' workload and speed up network deployment and upgrade.
(Figure: VCF fabric topology with a VXLAN/VLAN overlay connecting vSwitches and VMs.)
• Border node—Located at the border of a VCF fabric to provide access to the external network.
OSPF runs on the Layer 3 networks between the spine and aggregate nodes and between the
aggregate and leaf nodes. VXLAN is used to set up the Layer 2 overlay network.
Figure 106 Topology for a Layer 3 data center VCF fabric
(Spine, aggregate, and border nodes connected over VXLAN.)
Figure 107 VCF fabric topology for a campus network
Neutron overview
Neutron concepts and components
Neutron is a component in OpenStack architecture. It provides networking services for VMs,
manages virtual network resources (including networks, subnets, DHCP, virtual routers), and creates
an isolated virtual network for each tenant. Neutron provides a unified network resource model,
based on which VCF fabric is implemented.
The following are basic concepts in Neutron:
• Network—A virtual object that can be created. It provides an independent network for each
tenant in a multitenant environment. A network is equivalent to a switch with virtual ports which
can be dynamically created and deleted.
• Subnet—An address pool that contains a group of IP addresses. Two different subnets
communicate with each other through a router.
• Port—A connection port. A router or a VM connects to a network through a port.
• Router—A virtual router that can be created and deleted. It performs routing selection and data
forwarding.
Neutron has the following components:
• Neutron server—Includes the daemon process neutron-server and multiple plug-ins
(neutron-*-plugin). The Neutron server provides APIs and forwards the API calls to the
configured plug-in. The plug-in maintains configuration data and relationships between routers,
networks, subnets, and ports in the Neutron database.
• Plugin agent (neutron-*-agent)—Processes data packets on virtual networks. The choice of
plug-in agents depends on Neutron plug-ins. A plug-in agent interacts with the Neutron server
and the configured Neutron plug-in through a message queue.
• DHCP agent (neutron-dhcp-agent)—Provides DHCP services for tenant networks.
• L3 agent (neutron-l3-agent)—Provides Layer 3 forwarding services to enable inter-tenant
communication and external network access.
Neutron deployment
Neutron needs to be deployed on servers and network devices.
Table 38 shows Neutron deployment on a server.
Table 38 Neutron deployment on a server
Figure 108 Example of Neutron deployment for centralized gateway deployment
If multiple spine nodes exist in a VCF fabric, the master spine node collects the topology for the
entire network.
• Automated underlay network deployment.
Automated underlay network deployment sets up a Layer 3 underlay network (a physical Layer
3 network) for users. It is implemented by automatically executing configurations (such as IRF
configuration and Layer 3 reachability configurations) in user-defined template files.
• Automated overlay network deployment.
Automated overlay network deployment sets up an on-demand and application-oriented
overlay network (a virtual network built on top of the underlay network). It is implemented by
automatically obtaining the overlay network configuration (including VXLAN and EVPN
configuration) from the Neutron server.
Template file
A template file contains the following contents:
• System-predefined variables—The variable names cannot be edited, and the variable values
are set by the VCF topology discovery feature.
• User-defined variables—The variable names and values are defined by the user. These
variables include the username and password used to establish a connection with the
RabbitMQ server, network type, and so on. The following are examples of user-defined
variables:
#USERDEF
_underlayIPRange = 10.100.0.0/16
_master_spine_mac = 1122-3344-5566
_backup_spine_mac = aabb-ccdd-eeff
_username = aaa
_password = aaa
_rbacUserRole = network-admin
_neutron_username = openstack
_neutron_password = 12345678
_neutron_ip = 172.16.1.136
_loghost_ip = 172.16.1.136
_network_type = centralized-vxlan
……
• Static configurations—Static configurations are independent from the VCF fabric topology
and can be directly executed. The following are examples of static configurations:
#STATICCFG
#
clock timezone beijing add 08:00:00
#
lldp global enable
#
stp global enable
#
• Dynamic configurations—Dynamic configurations are dependent on the VCF fabric topology.
The device first obtains the topology information through LLDP and then executes dynamic
configurations. The following are examples of dynamic configurations:
#
interface $$_underlayIntfDown
port link-mode route
ip address unnumbered interface LoopBack0
ospf 1 area 0.0.0.0
ospf network-type p2p
lldp management-address arp-learning
lldp tlv-enable basic-tlv management-address-tlv interface LoopBack0
#
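The section markers above (#USERDEF, #STATICCFG, and so on) divide a template file into blocks that the device processes differently. A minimal Python sketch of how the #USERDEF variable block could be read, purely as an illustration of the file layout and not the device's actual parser:

```python
# Minimal sketch of reading the #USERDEF section of a VCF template file.
# Illustrative only; the device's own parser is not documented here.

def parse_userdef(text):
    """Return a dict of user-defined variables from the #USERDEF section."""
    variables = {}
    in_userdef = False
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("#USERDEF"):
            in_userdef = True          # entering the user-defined block
            continue
        if line.startswith("#"):
            in_userdef = False         # any other section marker ends it
            continue
        if in_userdef and "=" in line:
            name, _, value = line.partition("=")
            variables[name.strip()] = value.strip()
    return variables

template = """#USERDEF
_underlayIPRange = 10.100.0.0/16
_username = aaa
#STATICCFG
lldp global enable
"""
print(parse_userdef(template))
# {'_underlayIPRange': '10.100.0.0/16', '_username': 'aaa'}
```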
Do not perform link migration when devices in the VCF fabric are in the process of coming online or
powering down after the automated VCF fabric deployment finishes. A violation might cause
link-related configurations to fail to update.
The version format of a template file for automated VCF fabric deployment is x.y. Only the x part is
examined during a version compatibility check. For successful automated deployment, make sure x
in the version of the template file to be used is not greater than x in the supported version. To display
the supported version of the template file for automated VCF fabric deployment, use the display
vcf-fabric underlay template-version command.
If the template file does not include IRF configurations, the device does not save the configurations
after executing all configurations in the template file. To save the configurations, use the save
command.
Two devices with the same role can automatically set up an IRF fabric only when the IRF physical
interfaces on the devices are connected.
Two IRF member devices in an IRF fabric use the following rules to elect the IRF master during
automated VCF fabric deployment:
• If the uptime of both devices is shorter than two hours, the device with the higher bridge MAC
address becomes the IRF master.
• If the uptime of one device is equal to or longer than two hours, that device becomes the IRF
master.
• If the uptimes of both devices are equal to or longer than two hours, the IRF fabric cannot be set
up. You must manually reboot one of the member devices. The rebooted device becomes the
IRF subordinate.
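The three election rules above can be sketched in a few lines of Python. This is an illustrative model of the stated rules only, not device code; the string comparison of MAC addresses assumes the hyphenated notation used in this guide:

```python
# Sketch of the IRF master election rules for automated VCF deployment.
# Illustrative only: returns "A", "B", or None (fabric cannot form and one
# member must be manually rebooted).

def elect_irf_master(uptime_a_hours, mac_a, uptime_b_hours, mac_b):
    a_long = uptime_a_hours >= 2
    b_long = uptime_b_hours >= 2
    if a_long and b_long:
        return None        # both up two hours or longer: no automatic setup
    if a_long:
        return "A"         # only A has run two hours or longer
    if b_long:
        return "B"
    # Both uptimes shorter than two hours: higher bridge MAC address wins.
    return "A" if mac_a > mac_b else "B"

print(elect_irf_master(1, "1122-3344-5566", 1.5, "aabb-ccdd-eeff"))  # B
print(elect_irf_master(3, "1122-3344-5566", 1.5, "aabb-ccdd-eeff"))  # A
```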
If the IRF member ID of a device is not 1, the IRF master might reboot during automatic IRF fabric
setup.
Procedure
1. Finish the underlay network planning (such as IP address assignment, reliability design, and
routing deployment) based on user requirements.
2. Configure the DHCP server.
Configure the IP address of the device, the IP address of the TFTP server, and names of
template files saved on the TFTP server. For more information, see the user manual of the
DHCP server.
3. Configure the TFTP server.
Create template files and save the template files to the TFTP server.
4. (Optional.) Configure the NTP server.
5. Connect the device to the VCF fabric and start the device.
After startup, the device uses a management Ethernet interface or VLAN-interface 1 to connect
to the fabric management network. Then, it downloads the template file corresponding to its
device role and parses the template file to complete automated VCF fabric deployment.
6. (Optional.) Save the deployed configuration.
If the template file does not include IRF configurations, the device will not save the
configurations after executing all configurations in the template file. To save the configurations,
use the save command. For more information about this command, see configuration file
management commands in Fundamentals Command Reference.
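Steps 2 and 3 above defer the DHCP and TFTP server details to their own manuals. As a hedged illustration only, an ISC dhcpd-style scope that hands booting fabric devices an address, the TFTP server address, and a role-specific template file name might look like this (the addresses and file name are assumptions, not vendor guidance):

```
# Illustrative ISC dhcpd.conf fragment; addresses and filenames are examples.
subnet 10.100.0.0 netmask 255.255.0.0 {
    range 10.100.0.10 10.100.0.200;    # addresses leased to fabric devices
    next-server 10.100.0.2;            # TFTP server that stores the templates
    filename "leaf.template";          # template file name for this scope
}
```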
2. Enable LLDP globally.
lldp global enable
By default, LLDP is disabled globally.
You must enable LLDP globally before you enable VCF fabric topology discovery, because the
device needs LLDP to collect topology data of directly-connected devices.
3. Enable VCF fabric topology discovery.
vcf-fabric topology enable
By default, VCF fabric topology discovery is disabled.
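Taken together, steps 2 and 3 look like this at the CLI (the Sysname prompt is illustrative):

```
<Sysname> system-view
[Sysname] lldp global enable
[Sysname] vcf-fabric topology enable
```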
Configuring the device as a master spine node
About this task
If multiple spine nodes exist on a VCF fabric, you must configure a device as the master spine node
to collect the topology for the entire VCF fabric network.
Procedure
1. Enter system view.
system-view
2. Configure the device as a master spine node.
vcf-fabric spine-role master
By default, the device is not a master spine node.
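At the CLI, the two steps above amount to (prompt illustrative):

```
<Sysname> system-view
[Sysname] vcf-fabric spine-role master
```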
Perform this task if all devices in the VCF fabric complete automated deployment and new devices
are to be added to the VCF fabric.
Procedure
1. Enter system view.
system-view
2. Pause automated underlay network deployment.
vcf-fabric underlay pause
By default, automated underlay network deployment is not paused.
Prerequisites for automated overlay network deployment
Before you configure automated overlay network deployment, you must complete the following
tasks:
1. Install OpenStack Neutron components and plugins on the controller node in the VCF fabric.
2. Install OpenStack Nova components, openvswitch, and neutron-ovs-agent on compute nodes
in the VCF fabric.
3. Make sure LLDP and automated VCF fabric topology discovery are enabled.
rabbit user username
By default, the device uses username guest to establish a connection with a RabbitMQ server.
7. Configure the password for the device to establish a connection with a RabbitMQ server.
rabbit password { cipher | plain } string
By default, the device uses plaintext password guest to establish a connection with a
RabbitMQ server.
8. Specify a virtual host to provide RabbitMQ services.
rabbit virtual-host hostname
By default, the virtual host / provides RabbitMQ services for the device.
9. Specify the username and password for the device to deploy configurations through RESTful.
restful user username password { cipher | plain } password
By default, no username or password is configured for the device to deploy configurations
through RESTful.
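As a sketch, the RabbitMQ and RESTful steps above might be entered as follows, reusing the example Neutron credentials from the template file shown earlier in this chapter; the values and the Neutron view prompt are illustrative:

```
[Sysname] neutron
[Sysname-neutron] rabbit user openstack
[Sysname-neutron] rabbit password plain 12345678
[Sysname-neutron] rabbit virtual-host openstack
[Sysname-neutron] restful user admin password plain 12345678
```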
Enabling L2 agent
About this task
Layer 2 agent (L2 agent) responds to OpenStack events such as network creation, subnet creation,
and port creation. It deploys Layer 2 networking to provide Layer 2 connectivity within a virtual
network and Layer 2 isolation between different virtual networks.
Restrictions and guidelines
On a VLAN network or a VXLAN network with a centralized IP gateway, perform this task on both
spine nodes and leaf nodes.
On a VXLAN network with distributed IP gateways, perform this task only on leaf nodes.
Procedure
1. Enter system view.
system-view
2. Enter Neutron view.
neutron
3. Enable the L2 agent.
l2agent enable
By default, the L2 agent is disabled.
Enabling L3 agent
About this task
Layer 3 agent (L3 agent) responds to OpenStack events such as virtual router creation, interface
creation, and gateway configuration. It deploys the IP gateways to provide Layer 3 forwarding
services for VMs.
Restrictions and guidelines
On a VLAN network or a VXLAN network with a centralized IP gateway, perform this task only on
spine nodes.
On a VXLAN network with distributed IP gateways, perform this task only on leaf nodes.
Procedure
1. Enter system view.
system-view
2. Enter Neutron view.
neutron
3. Enable the L3 agent.
l3agent enable
By default, the L3 agent is disabled.
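For example, on a leaf node of a VXLAN network with distributed IP gateways, both agents would be enabled in Neutron view (prompt illustrative):

```
[Sysname] neutron
[Sysname-neutron] l2agent enable
[Sysname-neutron] l3agent enable
```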
neutron
3. Enable the border node service.
border enable
By default, the device is not a border node.
4. (Optional.) Specify the IPv4 address of the border gateway.
gateway ip ipv4-address
By default, the IPv4 address of the border gateway is not specified.
5. Configure export route targets for a tenant VPN instance.
vpn-target target export-extcommunity
By default, no export route targets are configured for a tenant VPN instance.
6. (Optional.) Configure import route targets for a tenant VPN instance.
vpn-target target import-extcommunity
By default, no import route targets are configured for a tenant VPN instance.
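A sketch of the border-node steps above, with an assumed gateway address and route target value:

```
[Sysname] neutron
[Sysname-neutron] border enable
[Sysname-neutron] gateway ip 10.0.0.1
[Sysname-neutron] vpn-target 1:1 export-extcommunity
[Sysname-neutron] vpn-target 1:1 import-extcommunity
```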
2. Enter Neutron view.
neutron
3. Configure the MAC address of VSI interfaces.
vsi-mac mac-address
By default, no MAC address is configured for VSI interfaces.
Task                                                             Command
Display the role of the device in the VCF fabric.                display vcf-fabric role
Display VCF fabric topology information.                         display vcf-fabric topology
Display information about automated underlay network deployment. display vcf-fabric underlay autoconfigure
Display the supported version and the current version of the     display vcf-fabric underlay template-version
template file for automated VCF fabric provisioning.
Using Ansible for automated
configuration management
About Ansible
Ansible is a configuration tool programmed in Python. It uses SSH to connect to devices.
Configuring the device for management with
Ansible
Before you use Ansible to configure the device, complete the following tasks:
• Configure a time protocol (NTP or PTP) or manually configure the system time on the Ansible
server and the device to synchronize their system time. For more information about NTP and
PTP configuration, see Network Management and Monitoring Configuration Guide.
• Configure the device as an SSH server. For more information about SSH configuration, see
Security Configuration Guide.
Prerequisites
Assign IP addresses to the device and manager so you can access the device from the manager.
(Details not shown.)
Procedure
1. Configure a time protocol (NTP or PTP) or manually configure the system time on both the
device and manager so they use the same system time. (Details not shown.)
2. Configure the device as an SSH server:
# Create local key pairs. (Details not shown.)
# Create a local user named abc and set the password to 123456 in plain text.
<Device> system-view
[Device] local-user abc
[Device-luser-manage-abc] password simple 123456
# Assign the network-admin user role to the user and authorize the user to use SSH, HTTP, and
HTTPS services.
[Device-luser-manage-abc] authorization-attribute user-role network-admin
[Device-luser-manage-abc] service-type ssh http https
[Device-luser-manage-abc] quit
# Enable NETCONF over SSH.
[Device] netconf ssh server enable
# Enable scheme authentication for SSH login and assign the network-admin user role to the
login users.
[Device] line vty 0 63
[Device-line-vty0-63] authentication-mode scheme
[Device-line-vty0-63] user-role network-admin
[Device-line-vty0-63] quit
# Enable the SSH server.
[Device] ssh server enable
# Authorize SSH user abc to use all service types, including SCP, SFTP, Stelnet, and
NETCONF. Set the authentication method to password.
[Device] ssh user abc service-type all authentication-type password
# Enable the SFTP server or SCP server.
If the device supports SFTP, enable the SFTP server.
[Device] sftp server enable
If the device does not support SFTP, enable the SCP server.
[Device] scp server enable
Procedure
Install Ansible on the manager. Create a configuration script and deploy the script. For more
information, see the relevant documents.
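As a hedged illustration of what such a script might look like, the following Ansible inventory and playbook collect the running configuration from the device configured above. The host address and the use of the generic raw module over SSH are assumptions, not vendor-mandated names:

```yaml
# Illustrative Ansible files only; host address, credentials, and module
# choice are assumptions.

# inventory.ini:
# [switches]
# device ansible_host=192.168.1.10 ansible_user=abc ansible_password=123456

# site.yml:
- hosts: switches
  gather_facts: false
  tasks:
    - name: Collect the running configuration over SSH
      raw: display current-configuration
      register: running_config
```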
Configuring Chef
About Chef
Chef is an open-source configuration management tool written in Ruby. You can use Ruby to create
cookbooks and save them to a server, and then use the server for centralized configuration
enforcement and management.
As shown in Figure 112, Chef operates in a client/server network framework. Basic Chef network
components include the Chef server, Chef clients, and workstations.
Chef server
The Chef server is used to centrally manage Chef clients. It has the following functions:
• Creates and deploys cookbooks to Chef clients on demand.
• Creates .pem key files for Chef clients and workstations. Key files include the following two
types:
○ User key file—Stores user authentication information for a Chef client or a workstation. The
Chef server uses this file to verify the validity of a Chef client or workstation. Before the Chef
client or workstation initiates a connection to the Chef server, make sure the user key file is
downloaded to the Chef client or workstation.
○ Organization key file—Stores authentication information for an organization. For
management convenience, you can classify Chef clients or workstations that have the same
type of attributes into organizations. The Chef server uses organization key files to verify the
validity of organizations. Before a Chef client or workstation initiates a connection to the
Chef server, make sure the organization key file is downloaded to the Chef client or
workstation.
For information about installing and configuring the Chef server, see the official Chef website at
https://www.chef.io/.
Workstation
Workstations provide the interface for you to interact with the Chef server. You can create or modify
cookbooks on a workstation and then upload the cookbooks to the Chef server.
A workstation can be hosted by the same host as the Chef server. For information about installing
and configuring the workstation, see the official Chef website at
https://www.chef.io/.
Chef client
Chef clients are network devices managed by the Chef server. Chef clients download cookbooks
from the Chef server and use the settings in the cookbooks.
The device supports Chef 12.3.0 client.
Chef resources
Chef uses Ruby to define configuration items. A configuration item is defined as a resource. A
cookbook contains a set of resources for one feature.
Chef manages types of resources. Each resource has a type, a name, one or more properties, and
one action. Every property has a value. The value specifies the state desired for the resource. You
can specify the state of a device by setting values for properties regardless of how the device enters
the state. The following resource example shows how to configure a device to create VLAN 2 and
configure the description for VLAN 2.
netdev_vlan 'vlan2' do
vlan_id 2
description 'chef-vlan2'
action :create
end
The following are the resource type, resource name, properties, and actions:
• netdev_vlan—Type of the resource.
• vlan2—Name of the resource. The name is the unique identifier of the resource.
• do/end—Indicates the beginning and end of a Ruby block that contains properties and actions.
All Chef resources must be written by using the do/end syntax.
• vlan_id—Property for specifying a VLAN. In this example, VLAN 2 is specified.
• description—Property for configuring the description. In this example, the description for
VLAN 2 is chef-vlan2.
• create—Action for creating or modifying a resource. If the resource does not exist, this action
creates the resource. If the resource already exists, this action modifies the resource with the
new settings. This action is the default action for Chef. If you do not specify an action for a
resource, the create action is used.
• delete—Action for deleting a resource.
Chef supports only the create and delete actions.
For more information about resource types supported by Chef, see "Chef resources."
the received key file. If the two files are consistent, the Chef client passes the authentication. The
Chef client then downloads the resource file to the directory specified in the Chef configuration file,
loads the settings in the resource file, and outputs log messages as specified.
Table 40 Chef configuration file description
Item: (Optional.) log_level
  Severity level for log messages. Available values include :auto, :debug, :info, :warn,
  :error, and :fatal. The severity levels in ascending order are :debug, :info, :warn,
  :error, and :fatal. The default severity level is :auto, which is the same as :warn.

Item: log_location
  Log output mode:
  • STDOUT—Outputs standard Chef success log messages to a file. With this mode, you can specify
    the destination file for outputting standard Chef success log messages when you execute the
    third-part-process start command. The standard Chef error log messages are output to
    the configuration terminal.
  • STDERR—Outputs standard Chef error log messages to a file. With this mode, you can specify
    the destination file for outputting standard Chef error log messages when you execute the
    third-part-process start command. The standard Chef success log messages are output
    to the configuration terminal.
  • logfilepath—Outputs all log messages to a file, for example, flash:/cheflog/a.log.
  If you specify none of the options, all log messages are output to the configuration terminal.

Item: node_name
  Chef client name. A Chef client name is used to identify a Chef client. It is different from the
  device name configured by using the sysname command.

Item: chef_server_url
  URL of the Chef server and name of the organization created on the Chef server, in the format of
  https://localhost:port/organizations/ORG_NAME. The localhost argument represents the name or
  IP address of the Chef server. The port argument represents the port number of the Chef server.
  The ORG_NAME argument represents the name of the organization.

Item: validation_key
  Path and name of the local organization key file, in the format of flash:/chef/validator.pem.

Item: client_key
  Path and name of the local user key file, in the format of flash:/chef/client.pem.
Prerequisites for Chef
Before configuring Chef on the device, complete the following tasks on the device:
• Enable NETCONF over SSH. The Chef server sends configuration information to Chef clients
through NETCONF over SSH. For information about NETCONF over SSH, see "Configuring
NETCONF."
• Configure SSH login. Chef clients communicate with the Chef server through SSH. For
information about SSH login, see Fundamentals Configuration Guide.
Starting Chef
Configuring the Chef server
1. Create key files for the workstation and the Chef client.
2. Create a Chef configuration file for the Chef client.
For more information about configuring the Chef server, see the Chef server installation and
configuration guides.
Configuring a workstation
1. Create the working path for the workstation.
2. Create the directory for storing the Chef configuration file for the workstation.
3. Create a Chef configuration file for the workstation.
4. Download the key file for the workstation from the Chef server to the directory specified in the
workstation configuration file.
5. Create a Chef resource file.
6. Upload the resource file to the Chef server.
For more information about configuring a workstation, see the workstation installation and
configuration guides.
Parameter                     Description
--config=filepath             Specifies the path and name of the Chef configuration file.
--runlist recipe[Directory]   Specifies the name of the directory that contains files and
                              subdirectories associated with the resource.
Procedure
1. Configure the Chef server:
# Create user key file admin.pem for the workstation. Specify the workstation username as
Herbert George Wells, the Email address as [email protected], and the password as 123456.
$ chef-server-ctl user-create Herbert George Wells [email protected] 123456
--filename=/etc/chef/admin.pem
# Create organization key file admin_org.pem for the workstation. Specify the abbreviated
organization name as ABC and the organization name as ABC Technologies Co., Limited.
Associate the organization with the user Herbert.
$ chef-server-ctl org-create ABC_org "ABC Technologies Co., Limited"
--association_user Herbert --filename=/etc/chef/admin_org.pem
# Create user key file client.pem for the Chef client. Specify the Chef client username as
Herbert George Wells, the Email address as [email protected], and the password as 123456.
$ chef-server-ctl user-create Herbert George Wells [email protected] 123456
--filename=/etc/chef/client.pem
# Create organization key file validator.pem for the Chef client. Specify the abbreviated
organization name as ABC and the organization name as ABC Technologies Co., Limited.
Associate the organization with the user Herbert.
$ chef-server-ctl org-create ABC "ABC Technologies Co., Limited" --association_user
Herbert --filename=/etc/chef/validator.pem
# Create Chef configuration file chefclient.rb for the Chef client.
log_level :info
log_location STDOUT
node_name 'Herbert'
chef_server_url 'https://1.1.1.2:443/organizations/abc'
validation_key 'flash:/chef/validator.pem'
client_key 'flash:/chef/client.pem'
cookbook_path [ 'flash:/chef-repo/cookbooks' ]
2. Configure the workstation:
# Create the chef-repo directory on the workstation. This directory will be used as the working
path.
$ mkdir /chef-repo
# Create the .chef directory. This directory will be used to store the Chef configuration file for
the workstation.
$ mkdir -p /chef-repo/.chef
# Create Chef configuration file knife.rb in the /chef-repo/.chef directory.
log_level :info
log_location STDOUT
node_name 'admin'
client_key '/root/chef-repo/.chef/admin.pem'
validation_key '/root/chef-repo/.chef/admin_org.pem'
chef_server_url 'https://chef-server:443/organizations/abc'
# Use TFTP or FTP to download the key files for the workstation from the Chef server to the
/chef-repo/.chef directory on the workstation. (Details not shown.)
# Create resource directory netdev.
$ knife cookbook create netdev
After the command is executed, the netdev directory is created in the current directory. The
directory contains files and subdirectories for the resource. The recipes directory stores the
resource file.
# Create resource file default.rb in the recipes directory.
netdev_vlan 'vlan3' do
vlan_id 3
action :create
end
# Upload the resource file to the Chef server.
$ knife cookbook upload --all
3. Configure the Chef client:
# Configure SSH login and enable NETCONF over SSH on the device. (Details not shown.)
# Use TFTP or FTP to download Chef configuration file chefclient.rb from the Chef server to
the root directory of the Flash memory on the Chef client. Make sure this directory is the same
as the directory specified by using the --config=filepath option in the
third-part-process start command.
# Use TFTP or FTP to download key files validator.pem and client.pem from the Chef server
to the flash:/chef/ directory.
# Start Chef. Specify the Chef configuration file name and path as flash:/chefclient.rb and the
resource file name as netdev.
<ChefClient> system-view
[ChefClient] third-part-process start name chef-client arg
--config=flash:/chefclient.rb --runlist recipe[netdev]
After the command is executed, the Chef client downloads the resource file from the Chef
server and loads the settings in the resource file.
Chef resources
netdev_device
Use this resource to specify a device name for a Chef client, and specify the SSH username and
password used by the client to connect to the Chef server.
Properties and action
Table 41 Properties and action for netdev_device
Property/Action name: hostname
  Description: Specifies the device name.
  Value type and restrictions: String, case insensitive. Length: 1 to 64 characters.
Resource example
# Configure the device name as ChefClient, and set the SSH username and password to user and
123456 for the Chef client.
netdev_device 'device' do
hostname "ChefClient"
user "user"
passwd "123456"
end
netdev_interface
Use this resource to configure attributes for an interface.
Properties
Table 42 Properties for netdev_interface
Property name: description
  Property type: N/A
  Description: Configures the description for the interface.
  Value type and restrictions: String, case sensitive. Length: 1 to 255 characters.

Property name: speed
  Property type: N/A
  Description: Specifies the interface rate.
  Value type and restrictions: Symbol:
  • auto—Autonegotiation.
  • 10m—10 Mbps.
  • 100m—100 Mbps.
  • 1g—1 Gbps.
  • 2.5g—2.5 Gbps.
  • 10g—10 Gbps.
  • 40g—40 Gbps.

Property name: duplex
  Property type: N/A
  Description: Sets the duplex mode.
  Value type and restrictions: Symbol:
  • full—Full-duplex mode.
  • half—Half-duplex mode.
  • auto—Autonegotiation.
  This attribute applies only to Ethernet interfaces.

Property name: linktype
  Property type: N/A
  Description: Sets the link type for the interface.
  Value type and restrictions: Symbol:
  • access—Sets the link type of the interface to Access.
  • trunk—Sets the link type of the interface to Trunk.
  • hybrid—Sets the link type of the interface to Hybrid.
  This attribute applies only to Layer 2 Ethernet interfaces.

Property name: portlayer
  Property type: N/A
  Description: Sets the operation mode for the interface.
  Value type and restrictions: Symbol:
  • bridge—Layer 2 mode.
  • route—Layer 3 mode.
Resource example
# Configure the following attributes for Ethernet interface 2:
• Interface description—ifindex2.
• Management state—Up.
• Interface rate—Autonegotiation.
• Duplex mode—Autonegotiation.
• Link type—Hybrid.
• Operation mode—Layer 2.
• MTU—1500 bytes.
netdev_interface 'ifindex2' do
ifindex 2
description 'ifindex2'
admin 'up'
speed 'auto'
duplex 'auto'
linktype 'hybrid'
portlayer 'bridge'
mtu 1500
end
netdev_l2_interface
Use this resource to configure VLAN attributes for a Layer 2 Ethernet interface.
Properties
Table 43 Properties for netdev_l2_interface
Resource example
# Specify the PVID as 2 for interface 5, and configure the interface to permit packets from VLANs 2
through 6. Configure the interface to forward packets from VLAN 3 after removing VLAN tags and
forward packets from VLANs 2, 4, 5, and 6 without removing VLAN tags.
netdev_l2_interface 'ifindex5' do
ifindex 5
pvid 2
permit_vlan_list '2-6'
tagged_vlan_list '2,4-6'
untagged_vlan_list '3'
end
netdev_l2vpn
Use this resource to enable or disable L2VPN.
Properties
Table 44 Properties for netdev_l2vpn
Resource example
# Enable L2VPN.
netdev_l2vpn 'l2vpn' do
enable true
end
netdev_lagg
Use this resource to create, modify, or delete an aggregation group.
Properties and action
Table 45 Properties and action for netdev_lagg
Property/Action name: group_id
  Property type: Index
  Description: Specifies an aggregation group ID.
  Value type and restrictions: Unsigned integer. The value range for a Layer 2 aggregation group
  is 1 to 1024. The value range for a Layer 3 aggregation group is 16385 to 17408.

Property/Action name: linkmode
  Property type: N/A
  Description: Specifies the aggregation mode.
  Value type and restrictions: Symbol:
  • static—Static.
  • dynamic—Dynamic.

Property/Action name: addports
  Property type: N/A
  Description: Specifies the indexes of the interfaces that you want to add to the aggregation
  group.
  Value type and restrictions: String, a comma-separated list of interface indexes or interface
  index ranges, for example, 1,2,3,5-8,10-20. The string cannot end with a comma (,), hyphen (-),
  or space. An interface index cannot be on the list of adding interfaces and the list of removing
  interfaces at the same time.

Property/Action name: deleteports
  Property type: N/A
  Description: Specifies the indexes of the interfaces that you want to remove from the
  aggregation group.
  Value type and restrictions: String, a comma-separated list of interface indexes or interface
  index ranges, for example, 1,2,3,5-8,10-20. The string cannot end with a comma (,), hyphen (-),
  or space. An interface index cannot be on the list of adding interfaces and the list of removing
  interfaces at the same time.

Property/Action name: action
  Property type: N/A
  Description: Specifies the action for the resource.
  Value type and restrictions: Symbol:
  • create—Creates or modifies an aggregation group.
  • delete—Deletes an aggregation group.
  The default action is create.
Resource example
# Create aggregation group 16386 and set the aggregation mode to static. Add interfaces 1 through
3 to the group, and remove interface 8 from the group.
netdev_lagg 'lagg16386' do
group_id 16386
linkmode 'static'
addports '1-3'
deleteports '8'
end
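The addports and deleteports strings use the comma-separated index and range format described above. A small Python sketch that expands such a string, purely for illustration of the format (the Chef client performs its own validation):

```python
# Expand an interface-index string such as "1,2,3,5-8,10-20" into a list of
# integers. Illustrative only; not the Chef client's parser.

def expand_indexes(spec):
    indexes = []
    for part in spec.split(","):
        if "-" in part:
            start, end = part.split("-")
            indexes.extend(range(int(start), int(end) + 1))  # inclusive range
        else:
            indexes.append(int(part))
    return indexes

print(expand_indexes("1-3"))        # [1, 2, 3]
print(expand_indexes("1,2,5-8"))    # [1, 2, 5, 6, 7, 8]
```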
netdev_vlan
Use this resource to create, modify, or delete a VLAN, or configure the name and description for the
VLAN.
Properties and action
Table 46 Properties and action for netdev_vlan
Property/Action name: vlan_id
  Property type: Index
  Description: Specifies a VLAN ID.
  Value type and restrictions: Unsigned integer. Value range: 1 to 4094.

Property/Action name: vlan_name
  Property type: N/A
  Description: Configures the VLAN name.
  Value type and restrictions: String, case sensitive. Length: 1 to 32 characters.

Property/Action name: action
  Property type: N/A
  Description: Specifies the action for the resource.
  Value type and restrictions: Symbol:
  • create—Creates or modifies a VLAN.
  • delete—Deletes a VLAN.
  The default action is create.
Resource example
# Create VLAN 2, configure the description as vlan2, and configure the VLAN name as vlan2.
netdev_vlan 'vlan2' do
vlan_id 2
description 'vlan2'
vlan_name 'vlan2'
end
netdev_vsi
Use this resource to create, modify, or delete a Virtual Switch Instance (VSI).
Properties and action
Table 47 Properties and action for netdev_vsi
vsiname
Description: Specifies a VSI name.
Property type: Index
Value type and restrictions: String, case sensitive. Length: 1 to 31 characters.

admin
Description: Enables or disables the VSI.
Property type: N/A
Value type and restrictions: Symbol:
• up—Enables the VSI.
• down—Disables the VSI.
The default value is up.

action
Description: Specifies the action for the resource.
Property type: N/A
Value type and restrictions: Symbol:
• create—Creates or modifies a VSI.
• delete—Deletes a VSI.
The default action is create.
Resource example
# Create the VSI vsia and enable the VSI.
netdev_vsi 'vsia' do
vsiname 'vsia'
admin 'up'
end
netdev_vte
Use this resource to create or delete a tunnel.
Properties and action
Table 48 Properties and action for netdev_vte
vte_id
Description: Specifies a tunnel ID.
Property type: Index
Value type and restrictions: Unsigned integer.

mode
Description: Sets the tunnel mode.
Property type: N/A
Value type and restrictions: Unsigned integer:
• 1—IPv4 GRE tunnel mode.
• 2—IPv6 GRE tunnel mode.
• 3—IPv4 over IPv4 tunnel mode.
• 4—Manual IPv6 over IPv4 tunnel mode.
• 8—IPv6 over IPv6 or IPv4 tunnel mode.
• 16—IPv4 IPsec tunnel mode.
• 17—IPv6 IPsec tunnel mode.
• 24—UDP-encapsulated IPv4 VXLAN tunnel mode.
• 25—UDP-encapsulated IPv6 VXLAN tunnel mode.
You must specify the tunnel mode when creating a tunnel. After the tunnel is created, you cannot change the tunnel mode.

action
Description: Specifies the action for the resource.
Property type: N/A
Value type and restrictions: Symbol:
• create—Creates a tunnel.
• delete—Deletes a tunnel.
The default action is create.
Resource example
# Create UDP-encapsulated IPv4 VXLAN tunnel 2.
netdev_vte 'vte2' do
vte_id 2
mode 24
end
netdev_vxlan
Use this resource to create, modify, or delete a VXLAN.
Properties and action
Table 49 Properties and action for netdev_vxlan
vxlan_id
Description: Specifies a VXLAN ID.
Property type: Index
Value type and restrictions: Unsigned integer. Value range: 1 to 16777215.

vsiname
Description: Specifies the VSI name.
Property type: N/A
Value type and restrictions: String, case sensitive. Length: 1 to 31 characters. You must specify the VSI name when creating a VSI. After the VSI is created, you cannot change its name.

add_tunnels
Description: Specifies the tunnel interfaces to be associated with the VXLAN.
Property type: N/A
Value type and restrictions: String, a comma-separated list of tunnel interface IDs or tunnel interface ID ranges, for example, 1,2,3,5-8,10-20. The string cannot end with a comma (,), hyphen (-), or space. A tunnel interface ID cannot be on the list of added interfaces and the list of removed interfaces at the same time.

delete_tunnels
Description: Removes the association between the specified tunnel interfaces and the VXLAN.
Property type: N/A
Value type and restrictions: String, a comma-separated list of tunnel interface IDs or tunnel interface ID ranges, for example, 1,2,3,5-8,10-20. The string cannot end with a comma (,), hyphen (-), or space. A tunnel interface ID cannot be on the list of added interfaces and the list of removed interfaces at the same time.

action
Description: Specifies the action for the resource.
Property type: N/A
Value type and restrictions: Symbol:
• create—Creates or modifies a VXLAN.
• delete—Deletes a VXLAN.
The default action is create.
Resource example
# Create VXLAN 10, configure the VSI name as vsia, add tunnel interfaces 2 and 4 to the VXLAN,
and remove tunnel interfaces 1 and 3 from the VXLAN.
netdev_vxlan 'vxlan10' do
vxlan_id 10
vsiname 'vsia'
add_tunnels '2,4'
delete_tunnels '1,3'
end
Configuring Puppet
About Puppet
Puppet is an open-source configuration management tool. It provides the Puppet language. You can
use the Puppet language to create configuration manifests and save them to a server. You can then
use the server for centralized configuration enforcement and management.
Puppet master
As shown in Figure 114, Puppet operates in a client/server network framework. In the framework, the
Puppet master (server) stores configuration manifests for Puppet agents (clients). The Puppet
agents establish SSL connections to the Puppet master to obtain their respective latest
configurations.
The Puppet master runs the Puppet daemon process to listen to requests from Puppet agents,
authenticates Puppet agents, and sends configurations to Puppet agents on demand.
For information about installing and configuring a Puppet master, see the official Puppet website at
https://puppetlabs.com/.
Puppet agent
HPE devices support Puppet 3.7.3 agent. The following is the communication process between a
Puppet agent and the Puppet master:
1. The Puppet agent sends an authentication request to the Puppet master.
2. The Puppet agent checks with the Puppet master for the authentication result periodically
(every two minutes by default). Once the Puppet agent passes the authentication, a connection
is established to the Puppet master.
3. After the connection is established, the Puppet agent sends a request to the Puppet master
periodically (every 30 minutes by default) to obtain the latest configuration.
4. After obtaining the latest configuration, the Puppet agent compares the configuration with its
running configuration. If a difference exists, the Puppet agent overwrites its running
configuration with the newly obtained configuration.
5. After overwriting the running configuration, the Puppet agent sends feedback to the Puppet master.
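Steps 3 through 5 above amount to a periodic pull-compare-apply cycle. The following is a minimal sketch of one pass of that cycle; the function and parameter names are illustrative, not Puppet APIs.

```python
def sync_once(running, fetch_catalog, send_report):
    """One pass of the agent cycle: fetch the latest configuration,
    overwrite the running configuration only if it differs, then
    report the outcome back to the master."""
    desired = fetch_catalog()
    changed = desired != running
    if changed:
        running = dict(desired)  # overwrite the running configuration
    send_report({"changed": changed})
    return running

reports = []
running = sync_once({}, lambda: {"vlan": 3}, reports.append)
# A second pass with no configuration drift reports no change.
running = sync_once(running, lambda: {"vlan": 3}, reports.append)
```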
Puppet resources
A Puppet resource is a unit of configuration. Puppet uses manifests to store resources.
Puppet manages resources by type. Each resource has a type, a title, and one or more attributes.
Every attribute has a value. The value specifies the state desired for the resource. You can specify
the state of a device by setting values for attributes regardless of how the device enters the state.
The following resource example shows how to configure a device to create VLAN 2 and configure
the description for VLAN 2.
netdev_vlan{'vlan2':
ensure => undo_shutdown,
id => 2,
description => 'sales-private',
require => Netdev_device['device'],
}
Starting Puppet
Configuring resources
1. Install and configure the Puppet master.
2. Create manifests for Puppet agents on the Puppet master.
For more information, see the Puppet master installation and configuration guides.
Parameter: --certname=certname
Description: Specifies the address of the Puppet agent.
After the Puppet process starts up, the Puppet agent sends an authentication request to the
Puppet master. For more information about the third-part-process start command,
see "Monitoring and maintaining processes".
For more information about the third-part-process stop command, see "Monitoring
and maintaining processes".
Procedure
1. Configure SSH login and enable NETCONF over SSH on the device. (Details not shown.)
2. On the Puppet master, create the modules/custom/manifests directory in the /etc/puppet/
directory for storing configuration manifests.
$ mkdir -p /etc/puppet/modules/custom/manifests
3. Create configuration manifest init.pp in the /etc/puppet/modules/custom/manifests directory
as follows:
netdev_device{'device':
ensure => undo_shutdown,
username => 'user',
password => 'passwd',
ipaddr => '1.1.1.1',
}
netdev_vlan{'vlan3':
ensure => undo_shutdown,
id => 3,
require => Netdev_device['device'],
}
4. Start Puppet on the device.
<PuppetAgent> system-view
[PuppetAgent] third-part-process start name puppet arg agent --certname=1.1.1.1
--server=1.1.1.2
5. Configure the Puppet master to authenticate the request from the Puppet agent.
$ puppet cert sign 1.1.1.1
After passing the authentication, the Puppet agent requests its latest configuration from the Puppet master.
Puppet resources
netdev_device
Use this resource to specify the following items:
• Name for a Puppet agent.
• IP address, SSH username, and SSH password used by the agent to connect to a Puppet
master.
Attributes
Table 50 Attributes for netdev_device
Resource example
# Configure the device name as PuppetAgent. Specify the IP address, SSH username, and SSH
password for the agent to connect to the Puppet master as 1.1.1.1, user, and 123456, respectively.
netdev_device{'device':
ensure => undo_shutdown,
username => 'user',
password => '123456',
ipaddr => '1.1.1.1',
hostname => 'PuppetAgent'
}
netdev_interface
Use this resource to configure attributes for an interface.
Attributes
Table 51 Attributes for netdev_interface
ifindex
Description: Specifies an interface by its index.
Attribute type: Index
Value type and restrictions: Unsigned integer.

speed
Description: Specifies the interface rate.
Attribute type: N/A
Value type and restrictions: Symbol:
• auto—Autonegotiation.
• 10m—10 Mbps.
• 100m—100 Mbps.
• 1g—1 Gbps.
• 2.5g—2.5 Gbps.
• 10g—10 Gbps.
• 40g—40 Gbps.

duplex
Description: Sets the duplex mode.
Attribute type: N/A
Value type and restrictions: Symbol:
• full—Full-duplex mode.
• half—Half-duplex mode.
• auto—Autonegotiation.
This attribute applies only to Ethernet interfaces.

linktype
Description: Sets the link type for the interface.
Attribute type: N/A
Value type and restrictions: Symbol:
• access—Sets the link type of the interface to Access.
• trunk—Sets the link type of the interface to Trunk.
• hybrid—Sets the link type of the interface to Hybrid.
This attribute applies only to Layer 2 Ethernet interfaces.

mtu
Description: Sets the MTU permitted by the interface.
Attribute type: N/A
Value type and restrictions: Unsigned integer, in bytes. The value range depends on the interface type. This attribute applies only to Layer 3 Ethernet interfaces.
Resource example
# Configure the following attributes for Ethernet interface 2:
• Interface description—puppet interface 2.
• Management state—Up.
• Interface rate—Autonegotiation.
• Duplex mode—Autonegotiation.
• Link type—Hybrid.
• Operation mode—Layer 2.
• MTU—1500 bytes.
netdev_interface{'ifindex2':
ifindex => 2,
ensure => undo_shutdown,
description => 'puppet interface 2',
admin => up,
speed => auto,
duplex => auto,
linktype => hybrid,
portlayer => bridge,
mtu => 1500,
require => Netdev_device['device'],
}
netdev_l2_interface
Use this resource to configure the VLAN attributes for a Layer 2 Ethernet interface.
Attributes
Table 52 Attributes for netdev_l2_interface
ifindex
Description: Specifies a Layer 2 Ethernet interface by its index.
Attribute type: Index
Value type and restrictions: Unsigned integer.

permit_vlan_list
Description: Specifies the VLANs permitted by the interface.
Attribute type: N/A
Value type and restrictions: String, a comma-separated list of VLAN IDs or VLAN ID ranges, for example, 1,2,3,5-8,10-20. Value range for each VLAN ID: 1 to 4094. The string cannot end with a comma (,), hyphen (-), or space.

untagged_vlan_list
Description: Specifies the VLANs from which the interface sends packets after removing VLAN tags.
Attribute type: N/A
Value type and restrictions: String, a comma-separated list of VLAN IDs or VLAN ID ranges, for example, 1,2,3,5-8,10-20. Value range for each VLAN ID: 1 to 4094. The string cannot end with a comma (,), hyphen (-), or space. A VLAN cannot be on the untagged list and the tagged list at the same time.

tagged_vlan_list
Description: Specifies the VLANs from which the interface sends packets without removing VLAN tags.
Attribute type: N/A
Value type and restrictions: String, a comma-separated list of VLAN IDs or VLAN ID ranges, for example, 1,2,3,5-8,10-20. Value range for each VLAN ID: 1 to 4094. The string cannot end with a comma (,), hyphen (-), or space. A VLAN cannot be on the untagged list and the tagged list at the same time.
Resource example
# Specify the PVID as 2 for interface 3, and configure the interface to permit packets from VLANs 1
through 6. Configure the interface to forward packets from VLANs 1 through 3 after removing VLAN
tags and forward packets from VLANs 4 through 6 without removing VLAN tags.
netdev_l2_interface{'ifindex3':
ifindex => 3,
ensure => undo_shutdown,
pvid => 2,
permit_vlan_list => '1-6',
untagged_vlan_list => '1-3',
tagged_vlan_list => '4-6',
require => Netdev_device['device'],
}
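The three VLAN lists interact: a permitted VLAN may additionally be sent untagged or tagged, and no VLAN may appear on both the untagged and tagged lists. The rule can be sketched as follows, with the lists already expanded to sets of VLAN IDs; this is a hypothetical helper, not device code.

```python
def vlan_behavior(permit, untagged, tagged):
    """Map each permitted VLAN to how the interface sends its packets."""
    if untagged & tagged:
        raise ValueError("a VLAN cannot be on both the untagged and tagged lists")
    return {
        vlan: ("untagged" if vlan in untagged
               else "tagged" if vlan in tagged
               else "permitted only")
        for vlan in sorted(permit)
    }

# Mirrors the resource example: permit VLANs 1-6, untag 1-3, tag 4-6.
behavior = vlan_behavior(set(range(1, 7)), {1, 2, 3}, {4, 5, 6})
```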
netdev_l2vpn
Use this resource to enable or disable L2VPN.
Attributes
Table 53 Attributes for netdev_l2vpn
Resource example
# Enable L2VPN.
netdev_l2vpn{'l2vpn':
ensure => enable,
require => Netdev_device['device'],
}
netdev_lagg
Use this resource to create, modify, or delete an aggregation group.
Attributes
Table 54 Attributes for netdev_lagg
group_id
Description: Specifies an aggregation group ID.
Attribute type: Index
Value type and restrictions: Unsigned integer. The value range for a Layer 2 aggregation group is 1 to 1024. The value range for a Layer 3 aggregation group is 16385 to 17408.

ensure
Description: Creates, modifies, or deletes the aggregation group.
Attribute type: N/A
Value type and restrictions: Symbol:
• present—Creates or modifies the aggregation group.
• absent—Deletes the aggregation group.

linkmode
Description: Specifies the aggregation mode.
Attribute type: N/A
Value type and restrictions: Symbol:
• static—Static.
• dynamic—Dynamic.

deleteports
Description: Specifies the indexes of the interfaces that you want to remove from the aggregation group.
Attribute type: N/A
Value type and restrictions: String, a comma-separated list of interface indexes or interface index ranges, for example, 1,2,3,5-8,10-20. The string cannot end with a comma (,), hyphen (-), or space. An interface index cannot be on the list of added interfaces and the list of removed interfaces at the same time.
Resource example
# Add interfaces 1 and 2 to aggregation group 2, and remove interfaces 3 and 4 from the group.
netdev_lagg{ 'lagg2':
group_id => 2,
ensure => present,
addports => '1,2',
deleteports => '3,4',
require => Netdev_device['device'],
}
netdev_vlan
Use this resource to create, modify, or delete a VLAN or configure the description for the VLAN.
Attributes
Table 55 Attributes for netdev_vlan
ensure
Description: Creates, modifies, or deletes a VLAN.
Attribute type: N/A
Value type and restrictions: Symbol:
• undo_shutdown—Creates or modifies a VLAN.
• shutdown—Deletes a VLAN.
• present—Creates or modifies a VLAN.
• absent—Deletes a VLAN.

id
Description: Specifies the VLAN ID.
Attribute type: Index
Value type and restrictions: Unsigned integer. Value range: 1 to 4094.

description
Description: Configures the description for the VLAN.
Attribute type: N/A
Value type and restrictions: String, case sensitive. Length: 1 to 255 characters.
Resource example
# Create VLAN 2, and configure the description as sales-private for VLAN 2.
netdev_vlan{'vlan2':
ensure => undo_shutdown,
id => 2,
description => 'sales-private',
require => Netdev_device['device'],
}
netdev_vsi
Use this resource to create, modify, or delete a Virtual Switch Instance (VSI).
Attributes
Table 56 Attributes for netdev_vsi
vsiname
Description: Specifies a VSI name.
Attribute type: Index
Value type and restrictions: String, case sensitive. Length: 1 to 31 characters.

ensure
Description: Creates, modifies, or deletes the VSI.
Attribute type: N/A
Value type and restrictions: Symbol:
• present—Creates or modifies the VSI.
• absent—Deletes the VSI.
Resource example
# Create the VSI vsia.
netdev_vsi{'vsia':
ensure => present,
vsiname => 'vsia',
require => Netdev_device['device'],
}
netdev_vte
Use this resource to create or delete a tunnel.
Attributes
Table 57 Attributes for netdev_vte
id
Description: Specifies a tunnel ID.
Attribute type: Index
Value type and restrictions: Unsigned integer.

ensure
Description: Creates or deletes the tunnel.
Attribute type: N/A
Value type and restrictions: Symbol:
• present—Creates the tunnel.
• absent—Deletes the tunnel.

mode
Description: Sets the tunnel mode.
Attribute type: N/A
Value type and restrictions: Unsigned integer:
• 1—IPv4 GRE tunnel mode.
• 2—IPv6 GRE tunnel mode.
• 3—IPv4 over IPv4 tunnel mode.
• 4—Manual IPv6 over IPv4 tunnel mode.
• 8—IPv6 or IPv4 over IPv6 tunnel mode.
• 16—IPv4 IPsec tunnel mode.
• 17—IPv6 IPsec tunnel mode.
• 24—UDP-encapsulated IPv4 VXLAN tunnel mode.
• 25—UDP-encapsulated IPv6 VXLAN tunnel mode.
You must specify the tunnel mode when creating a tunnel. After the tunnel is created, you cannot change the tunnel mode.
Resource example
# Create UDP-encapsulated IPv4 VXLAN tunnel 2.
netdev_vte{'vte2':
ensure => present,
id => 2,
mode => 24,
require => Netdev_device['device'],
}
netdev_vxlan
Use this resource to create, modify, or delete a VXLAN.
Attributes
Table 58 Attributes for netdev_vxlan
vxlan_id
Description: Specifies a VXLAN ID.
Attribute type: Index
Value type and restrictions: Unsigned integer. Value range: 1 to 16777215.

ensure
Description: Creates or deletes the VXLAN.
Attribute type: N/A
Value type and restrictions: Symbol:
• present—Creates or modifies the VXLAN.
• absent—Deletes the VXLAN.
Resource example
# Create VXLAN 10, configure the VSI name as vsia, and associate tunnel interfaces 7 and 8 with
VXLAN 10.
netdev_vxlan{'vxlan10':
ensure => present,
vxlan_id => 10,
vsiname => 'vsia',
add_tunnels => '7-8',
require => Netdev_device['device'],
}
Configuring EPS agent
About EPS and EPS agent
HPE iMC Endpoints Profiling System (EPS) detects, identifies, and monitors network-wide endpoints,
including cameras, PCs, servers, switches, routers, firewalls, APs, printers, and ATMs. This allows new and abnormal endpoints to be discovered and identified in a timely manner.
As shown in Figure 116, an EPS system contains the EPS server, scanners, and endpoints.
Figure 116 EPS network architecture
EPS server
The EPS server is a server installed with the iMC EPS component. It acts as the console to assign
instructions to scanners, receive scan results, compare scan results with baselines, and support
change auditing.
Scanner
A scanner is a device installed with iMC EScan and enabled with EPS agent. The scanner scans and manages endpoints by following the instructions of the EPS server and reports scan results to the EPS server.
EPS agent is based on the Comware platform and integrates the EPS scanner function. This chapter
describes how to configure EPS agent.
Endpoint
An endpoint refers to a device scanned by a scanner. EPS supports user-defined endpoint types and the following system-defined endpoint types: camera, PC, server, switch, router, firewall, AP, printer, ATM, unknown, and other.
Configuring EPS agent
1. Enter system view.
system-view
2. Configure EPS server parameters.
eps server ip-address port port-number [ key { cipher | simple }
key-value ] [ log-level level-value ] [ use-server-setting ]
By default, EPS server parameters are not configured.
3. Enable EPS agent.
eps agent enable
By default, EPS agent is disabled.
Task Command
Display scanned endpoint information. display eps agent discovery
Figure 117 Network diagram
(The scanner connects to the EPS server at 20.1.1.2/24 through GE1/0/2 at 20.1.1.1/24 across the Internet, and reaches the endpoints through GE1/0/1 at 10.1.1.1/24.)
Procedure
# Specify 20.1.1.2 and 4500 as the IP address and port number of the EPS server, respectively.
<Sysname> system-view
[Sysname] eps server 20.1.1.2 port 4500
Configuring eMDI
About eMDI
Enhanced Media Delivery Index (eMDI) is a solution for monitoring audio and video service quality and locating faults. It is intended to solve problems caused by packet loss, packet sequence errors, and jitter.
eMDI monitors and analyzes specific TCP or RTP packets on each node of an IP network in real time,
providing data for you to quickly locate network faults.
eMDI implementation
The eMDI feature is implemented on the basis of eMDI instances. Each eMDI instance monitors one
data flow as defined by its parameters. It inspects fields of packets, periodically collects statistics,
and reports to the NMS.
Figure 118 shows the eMDI deployment and operating process.
1. Upon service quality deterioration, you can obtain the data flow characteristics from the
headers of interested packets, such as the source and destination IP addresses.
2. You deploy eMDI on each device that the data flow traverses by using the NMS or CLI.
3. The devices monitor the data flows, collect statistics, and report monitoring data to the NMS.
4. The NMS collects and analyzes the monitoring data and displays the analysis results
graphically.
5. Based on the analysis results, you can quickly locate the network faults.
Figure 118 eMDI network diagram
(The monitored flow between User A and User B traverses access devices and aggregation devices across the IP network.)
Monitored items
Table 59 and Table 60 list the monitored items for TCP packets and UDP packets. RTP packets are
carried in UDP packets.
Table 59 Monitored items for TCP packets
UPLR
Description: Upstream packet loss rate, in units of one hundred thousandth.
Calculation formula: UPLR = (Number of lost upstream packets / (Number of received packets + Number of lost upstream packets)) × 100000.
Remarks: If no packets are lost, the sequence number of the current packet plus the length of the current packet is the expected sequence number of the next packet. If the sequence number of the next packet is greater than the expected sequence number, packet loss has occurred on an upstream device. The number of lost packets can be calculated based on the average packet size. This item is intended for unidirectional flows.

DPLR
Description: Downstream packet loss rate, in units of one hundred thousandth.
Calculation formula: Number of lost downstream packets = Total number of lost packets – Number of lost upstream packets. DPLR = (Number of lost downstream packets / Number of received packets) × 100000.
Remarks: If no packets are lost, the sequence number of the current packet plus the length of the current packet is the expected sequence number of the next packet. If the sequence number of the next packet is smaller than the expected sequence number, the packet is considered to be a retransmitted packet. The number of retransmitted packets is considered to be the total number of lost packets. This item is intended for unidirectional flows.

DRTT
Description: Average downstream round trip time, in microseconds.
Calculation formula: DRTT = T2 – T1.
Remarks: The timestamp for a received non-retransmitted packet is recorded as T1. The sequence number of the packet plus the length of the packet is the expected sequence number of the next packet. If the sequence number of an ACK packet received from a downstream device is greater than or equal to the expected number, the timestamp of the ACK packet is recorded as T2. DRTT is equal to the average value of the differences between the T2 values and the corresponding T1 values. This item is intended for bidirectional flows.

URTT
Description: Average upstream round trip time, in microseconds.
Calculation formula: URTT = T2 – T1.
Remarks: T1 indicates the timestamp when the device finds a SYN packet sent by the client for a connection. T2 indicates the timestamp when the device finds a SYN ACK packet returned by the server. Because URTT calculation depends on TCP control packets, it might not be calculated within a monitoring period. This item is intended for bidirectional flows.

Table 60 Monitored items for RTP packets

RTP-LP
Description: Maximum number of continuously lost RTP packets.
Calculation formula: The RTP-LP value uses the maximum number of continuously lost RTP packets within a monitoring period.
Remarks: On a service provider network with the retransmission threshold set, you can compare the RTP-LP value with the retransmission threshold to identify the existence of faults. The retransmission mechanism can address loss of a small number of packets. The greater the RTP-LP value is, the less the retransmission mechanism can help. This item is intended for unidirectional flows.

RTP-ELF
Description: RTP effective loss factor for Forward Error Correction (FEC), in units of one hundred thousandth.
Calculation formula: The RTP-ELF value is calculated based on the FEC sliding window and packet loss threshold. After each window slide, the system identifies whether the number of lost data packets in the window within the monitoring period is greater than the threshold. If yes, the system increases RTP-ELF by 1. The window sliding stops when the monitoring period expires.
Remarks: If the total number of packet loss windows is greater than 0 within a monitoring period, a video fault exists.
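As a numeric illustration of the UPLR and DPLR formulas above (assuming the hundred-thousandth unit means the loss ratio is multiplied by 100000):

```python
def uplr(lost_upstream, received):
    """Upstream packet loss rate in hundred-thousandths (1/100000)."""
    return round(lost_upstream / (received + lost_upstream) * 100000)

def dplr(total_lost, lost_upstream, received):
    """Downstream packet loss rate in hundred-thousandths.

    Lost downstream packets = total lost packets (inferred from
    retransmissions) minus packets already lost upstream.
    """
    lost_downstream = total_lost - lost_upstream
    return round(lost_downstream / received * 100000)

# Example period: 100000 packets received, 30 total losses inferred,
# 10 of them attributed to upstream devices.
up = uplr(10, 99990)
down = dplr(30, 10, 100000)
```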
• All packets of the monitored data flow pass through the device, but not all of the packets are
forwarded by the same member device.
Prerequisites
Identify the characteristics of the data flows to be monitored so you can specify data flows for eMDI
instances.
Configure eMDI
Configuring an eMDI instance
1. Enter system view.
system-view
2. Enable eMDI and enter its view.
emdi
By default, eMDI is disabled.
3. Create an eMDI instance and enter its view.
instance instance-name
By default, no eMDI instance exists.
4. (Optional.) Configure a description for the eMDI instance.
description text
By default, an eMDI instance does not have a description.
5. Specify the data flow to be monitored by using the eMDI instance.
Choose one option as needed:
Specify a TCP data flow.
flow ipv4 tcp source source-ip-address destination
destination-ip-address [ destination-port
destination-port-number | source-port source-port-number | vlan
vlan-id | vni vxlan-id ] *
Specify a UDP data flow.
flow ipv4 udp source source-ip-address destination
destination-ip-address [ destination-port
destination-port-number | source-port source-port-number | vlan
vlan-id | vni vxlan-id | pt pt-value | clock-rate clock-rate-value ]
*
By default, no data flow is specified for an eMDI instance.
Specifying more options defines a more granular data flow and provides more helpful monitoring data and analysis results. For example, to monitor the data flow from 10.0.0.1 to port 100 on 10.0.1.1, specify the destination port number. If you do not specify the destination port number, the instance monitors the flow represented by the first matching packet received, which might cause deviations in the monitoring results.
6. (Optional.) Set the alarm thresholds for the eMDI instance.
alarm { dplr | rtp-lr | rtp-ser | uplr } threshold threshold-value
By default, the alarm thresholds for an eMDI instance are all 100.
A TCP data flow supports only the dplr and uplr keywords and a UDP data flow supports
only the rtp-lr and rtp-ser keywords.
7. (Optional.) Set the eMDI alarm logging threshold, that is, the number of consecutive alarms to
be suppressed before logging the event.
alarm suppression times times-value
By default, the eMDI alarm logging threshold is 3.
8. (Optional.) Specify the sliding window size and packet loss threshold for the UDP data flow to
be monitored.
fec { window window-size | threshold threshold-value } *
By default, the sliding window size is 100 and the packet loss threshold is 5 for the UDP data flow.
9. (Optional.) Specify the eMDI instance lifetime.
lifetime { seconds seconds | minutes minutes | hours hours | days days }
By default, the eMDI instance lifetime is 1 hour.
10. (Optional.) Specify the monitoring period for the eMDI instance.
monitor-period period-value
By default, the monitoring period for an eMDI instance is 60 seconds.
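The FEC window and threshold set in step 8 drive the RTP-ELF counter described earlier. The following sketch assumes non-overlapping windows, since the exact sliding granularity is not specified here.

```python
def rtp_elf(loss_flags, window=100, threshold=5):
    """Count FEC windows in which more packets were lost than the
    packet loss threshold allows.

    loss_flags: one boolean per RTP packet within a single monitoring
    period, True when the packet was lost.
    """
    elf = 0
    for start in range(0, len(loss_flags) - window + 1, window):
        if sum(loss_flags[start:start + window]) > threshold:
            elf += 1
    return elf

# One window of 100 packets with 6 losses exceeds the default threshold of 5.
faulty_windows = rtp_elf([True] * 6 + [False] * 94)
```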
Display and maintenance commands for eMDI
Execute display commands in any view and reset commands in user view.
Display eMDI instance information:
display emdi instance [ name instance-name | id instance-id ] [ verbose ]
Display eMDI instance resource information:
display emdi resource
(Network diagram: the monitored UDP flow travels from source 192.168.1.2/24 to destination 10.1.1.2/24 across Device A, Device B, and Device C.)
Prerequisites
Assign IP addresses to the devices. Make sure all devices can reach each other.
Procedure
1. Configure Device A:
# Enable eMDI.
<DeviceA> system-view
[DeviceA] emdi
# Create an eMDI instance named test.
[DeviceA-emdi] instance test
# Specify the UDP data flow for the instance to monitor. Set the source and destination IP
addresses to 192.168.1.2 and 10.1.1.2, respectively.
[DeviceA-emdi-instance-test] flow ipv4 udp source 192.168.1.2 destination 10.1.1.2
# Set the monitoring period to 10 seconds.
[DeviceA-emdi-instance-test] monitor-period 10
# Set the eMDI instance lifetime to 30 minutes.
[DeviceA-emdi-instance-test] lifetime minutes 30
# Start the eMDI instance.
[DeviceA-emdi-instance-test] start
2. Configure Device B and Device C as Device A is configured. (Details not shown.)
Verifying the configuration
After the eMDI instance runs for a while, display brief monitoring statistics for the eMDI instance on
each device to locate the fault.
# Display brief monitoring statistics for eMDI instance test on Device A.
<DeviceA> display emdi statistics instance-name test
Instance name : test
Instance ID : 1
Monitoring period: 10 sec
Protocol : UDP
Configuring SQA
About SQA
Service quality analysis (SQA) enables the device to identify SIP-based multimedia traffic and
preferentially forward this traffic to ensure multimedia service quality. Use this feature to decrease
multimedia stuttering and improve user experience.
Configuring SQA
1. Enter system view.
system-view
2. Enter SQA view.
sqa
3. Enable SQA.
sqa-sip enable
By default, SQA is disabled.
4. Specify the SIP listening port number.
sqa-sip port port-number
By default, the SIP listening port number is 5060.
Make sure the SIP listening port number on the device is the same as that on the SIP server.
5. (Optional.) Specify an IP address range for SQA.
sqa-sip filter address start-address end-address
By default, no IP address range is specified for SQA. The device performs SQA on all SIP
packets.
After this command is executed, the device performs SQA only on SIP calls in the specified IP
address range.
If you execute this command multiple times, the most recent configuration takes effect.
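The filter range check in step 5 is an inclusive address comparison, which can be sketched with the standard ipaddress module; the helper name is illustrative.

```python
import ipaddress

def in_sqa_range(addr, start, end):
    """Return True if addr falls inside the inclusive range configured
    with sqa-sip filter address start-address end-address."""
    a = ipaddress.ip_address(addr)
    return ipaddress.ip_address(start) <= a <= ipaddress.ip_address(end)

# Only calls from addresses inside the configured range undergo SQA.
inside = in_sqa_range("6.159.107.200", "6.159.107.1", "6.159.107.254")
```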
Display SIP call information:
display sqa sip call [ [ call-id call-id ] verbose ]
Display SIP call statistics:
display sqa sip call-statistics
SQA configuration examples
Example: Configuring SQA
Network configuration
As shown in Figure 120, the SIP server installed with third-party VoIP server software acts as both
the SIP proxy server and SIP registrar to manage SIP UA registration and SIP calls. Host A and Host
B installed with third-party client software can place calls to each other.
Configure SQA on Device A, Device B, and Device C to optimize the multimedia traffic of SIP calls and provide high-quality multimedia services.
Figure 120 Network diagram
(Device A connects the SIP server to the IP network. Device B connects PC A at 6.159.107.200/16, and Device C connects PC B at 6.159.108.44/16.)
Prerequisites
Assign IP addresses to interfaces, and make sure the devices can reach each other.
Procedure
1. Configure the SIP server:
Install third-party VoIP server software, register UAs, and specify the usernames (phone
numbers) and passwords for the clients. (Details not shown.)
2. Configure PC A and PC B:
Install third-party VoIP client software, specify the IP address of the SIP server, and configure
the username (phone number), password, and other parameters on each PC. (Details not
shown.)
3. Configure Device A:
# Enable SQA.
<DeviceA> system-view
[DeviceA] sqa
[DeviceA-sqa] sqa-sip enable
# Specify 5066 as the SIP listening port number. (Make sure this SIP listening port number is
the same as that on the SIP server.)
[DeviceA-sqa] sqa-sip port 5066
# Specify an IP address range for SQA. (Make sure Host A's IP address is included in the
range.)
[DeviceA-sqa] sqa-sip filter address 6.159.107.200 6.159.107.200
[DeviceA-sqa] quit
4. Configure Device B and Device C:
# Configure Device B and Device C in the same way as Device A is configured. (Details not
shown.)
Verifying the configuration
# Place a call from Host A to Host B. (Details not shown.)
# Display brief information about all SIP calls on Device A.
<DeviceA> display sqa sip call
Caller Callee CallId
6.159.107.207:49172 6.159.108.51:52410 3101326658
6.159.107.203:49172 6.159.108.47:52410 4530332933
6.159.107.208:49172 6.159.108.52:52410 4445702693
6.159.107.206:49172 6.159.108.50:52410 8263542841
6.159.107.201:49172 6.159.108.45:52410 4752123310
6.159.107.200:49172 6.159.108.44:52410 99462146
The output shows that Device A has performed SQA on the call that Host A placed to Host B.
# Display detailed information about the SIP call that Host A placed to Host B.
<DeviceA> display sqa sip call call-id 99462146 verbose
Call ID: 99462146
Caller information:
IP: 6.159.107.200 Port: 49172 MAC: a036-9fd4-b5bd
Tag: [email protected]
Callee information:
IP: 6.159.108.44 Port: 52410 MAC: a036-9fd4-b5bc
Tag: [email protected]
Type: Audio VLAN: 0 StartTime: 2019-7-14 15:40:39
MOS(F): 88 MOS(R): 86
Forward flow (octet): 4793344 Forward flow (packet): 4681
Reverse flow (octet): 3810304 Reverse flow (packet): 3721
The output shows that the MOS value for both forward and reverse traffic is high, which indicates the
quality of the audio service is high.
Document conventions and icons
Conventions
This section describes the conventions used in the documentation.
Command conventions
Convention         Description
Boldface           Bold text represents commands and keywords that you enter literally as shown.
Italic             Italic text represents arguments that you replace with actual values.
[ ]                Square brackets enclose syntax choices (keywords or arguments) that are optional.
{ x | y | ... }    Braces enclose a set of required syntax choices separated by vertical bars, from which you select one.
[ x | y | ... ]    Square brackets enclose a set of optional syntax choices separated by vertical bars, from which you select one or none.
{ x | y | ... } *  Asterisk-marked braces enclose a set of required syntax choices separated by vertical bars, from which you select at least one.
[ x | y | ... ] *  Asterisk-marked square brackets enclose optional syntax choices separated by vertical bars, from which you select one choice, multiple choices, or none.
&<1-n>             The argument or keyword-and-argument combination before the ampersand (&) sign can be entered 1 to n times.
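The selection rules in the table can be made concrete by enumerating what each notation permits for two choices, x and y. This is an illustrative sketch only; the helper is not part of any device software:

```python
# Enumerate the selections each syntax convention allows for symbols x and y.
from itertools import combinations

def selections(symbols, at_least, at_most):
    """All ways to pick between at_least and at_most of the symbols."""
    return [c for n in range(at_least, at_most + 1)
            for c in combinations(symbols, n)]

xy = ("x", "y")
print(selections(xy, 1, 1))        # { x | y }   -> exactly one: [('x',), ('y',)]
print(selections(xy, 0, 1))        # [ x | y ]   -> one or none
print(selections(xy, 1, len(xy)))  # { x | y } * -> at least one
print(selections(xy, 0, len(xy)))  # [ x | y ] * -> any subset, including none
```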
GUI conventions
Convention         Description
Boldface           Window names, button names, field names, and menu items are in Boldface. For example, the New User window opens; click OK.
>                  Multi-level menus are separated by angle brackets. For example, File > Create > Folder.
Symbols
Convention         Description
WARNING!           An alert that calls attention to important information that, if not understood or followed, can result in personal injury.
CAUTION:           An alert that calls attention to important information that, if not understood or followed, can result in data loss, data corruption, or damage to hardware or software.
Network topology icons
Support and other resources
Accessing Hewlett Packard Enterprise Support
• For live assistance, go to the Contact Hewlett Packard Enterprise Worldwide website:
www.hpe.com/assistance
• To access documentation and support services, go to the Hewlett Packard Enterprise Support
Center website:
www.hpe.com/support/hpesc
Information to collect
• Technical support registration number (if applicable)
• Product name, model or version, and serial number
• Operating system name and version
• Firmware version
• Error messages
• Product-specific reports and logs
• Add-on products or components
• Third-party products or components
Accessing updates
• Some software products provide a mechanism for accessing software updates through the
product interface. Review your product documentation to identify the recommended software
update method.
• To download product updates, go to either of the following:
Hewlett Packard Enterprise Support Center Get connected with updates page:
www.hpe.com/support/e-updates
Software Depot website:
www.hpe.com/support/softwaredepot
• To view and update your entitlements, and to link your contracts, Care Packs, and warranties
with your profile, go to the Hewlett Packard Enterprise Support Center More Information on
Access to Support Materials page:
www.hpe.com/support/AccessToSupportMaterials
IMPORTANT:
Access to some updates might require product entitlement when accessed through the Hewlett
Packard Enterprise Support Center. You must have an HP Passport set up with relevant
entitlements.
Websites
Networking websites
Remote support
Remote support is available with supported devices as part of your warranty, Care Pack Service, or
contractual support agreement. It provides intelligent event diagnosis, and automatic, secure
submission of hardware event notifications to Hewlett Packard Enterprise, which will initiate a fast
and accurate resolution based on your product’s service level. Hewlett Packard Enterprise strongly
recommends that you register your device for remote support.
For more information and device support details, go to the following website:
www.hpe.com/info/insightremotesupport/docs
Documentation feedback
Hewlett Packard Enterprise is committed to providing documentation that meets your needs. To help
us improve the documentation, send any errors, suggestions, or comments to Documentation
Feedback ([email protected]). When submitting your feedback, include the document title,
part number, edition, and publication date located on the front cover of the document. For online help
content, include the product name, product version, help edition, and publication date located on the
legal notices page.
Index
A port mirroring monitor port to remote probe VLAN,
298
access control
associating
SNMP MIB, 155
IPv6 NTP client/server association mode, 121
SNMP view-based MIB, 155
IPv6 NTP multicast association mode, 130
accessing
IPv6 NTP symmetric active/passive association
NTP access control, 105
mode, 124
SNMP access control mode, 156
NTP association mode, 108
ACS
NTP broadcast association mode, 104, 109, 125
CWMP ACS-CPE autoconnect, 253
NTP broadcast association mode+authentication,
action 135
Event MIB notification, 181 NTP client/server association mode, 104, 108,
Event MIB set, 181 120
address NTP client/server association
ping address reachability determination, 2 mode+authentication, 133
agent NTP multicast association mode, 104, 110, 127
sFlow agent+collector information NTP symmetric active/passive association mode,
configuration, 316 104, 108, 123
aggregation group attribute
Chef resources (netdev_lagg), 383 NETCONF session attribute, 200
Puppet resources (netdev_lagg), 396 authenticating
alarm CWMP CPE ACS authentication, 258
RMON alarm configuration, 174, 177 NTP, 106
RMON alarm group sample types, 173 NTP broadcast authentication, 114
RMON configuration, 171, 176 NTP broadcast mode+authentication, 135
RMON group, 172 NTP client/server mode authentication, 111
RMON private group, 172 NTP client/server mode+authentication, 133
application scenarios NTP configuration, 111
iNQA, 81 NTP multicast authentication, 116
applying NTP security, 105
flow mirroring QoS policy, 313 NTP symmetric active/passive mode
flow mirroring QoS policy (global), 313 authentication, 113
flow mirroring QoS policy (interface), 313 SNTP authentication, 139
flow mirroring QoS policy (VLAN), 313 auto
PoE profile, 150 CWMP ACS-CPE autoconnect, 253
architecture VCF fabric automated deployment, 358
NTP, 103 VCF fabric automated deployment process, 359
arithmetic VCF fabric automated underlay network
packet capture filter configuration (expr relop deployment configuration, 362, 364
expr expression), 345 autoconfiguration server (ACS)
packet capture filter configuration (proto CWMP, 251
[ exprsize ] expression), 345 CWMP ACS authentication parameters, 258
packet capture filter operator, 343 CWMP attribute configuration, 256
assigning CWMP attribute type (default)(CLI), 257
CWMP ACS attribute (preferred)(CLI), 257 CWMP attributes (preferred), 256
CWMP ACS attribute (preferred)(DHCP CWMP autoconnect parameters, 260
server), 256 CWMP CPE ACS provision code, 259
CWMP CPE connection interface, 259
HTTPS SSL client policy, 258 resources (netdev_lagg), 383
automated overlay network deployment resources (netdev_vlan), 384
border node configuration, 367 resources (netdev_vsi), 385
L2 agent, 366 resources (netdev_vte), 386
L3 agent, 367 resources (netdev_vxlan), 386
local proxy ARP, 368 server configuration, 376
MAC address of VSI interfaces, 368 shutdown, 377
network type specifying, 366 start, 376
RabbitMQ server communication parameters,
365 classifying
automated underlay network deployment port mirroring classification, 290
pausing deployment, 363 CLI
automated underlay network deploying
template file, 359 EAA event monitor policy configuration, 278
B EAA monitor policy configuration
(CLI-defined+environment variables), 281
basic concepts
NETCONF CLI operations, 231, 232
iNQA, 79
NETCONF return to CLI, 239
benefits
client
iNQA, 79
Chef client configuration, 376
bidirectional
NQA client history record save, 30
port mirroring, 289
NQA client operation (DHCP), 12
Boolean
NQA client operation (DLSw), 24
Event MIB trigger test, 180
NQA client operation (DNS), 13
Event MIB trigger test configuration, 191
NQA client operation (FTP), 14
broadcast
NQA client operation (HTTP), 15
NTP association mode, 125
NQA client operation (ICMP echo), 10
NTP broadcast association mode, 104, 109,
NQA client operation (ICMP jitter), 11
114
NQA client operation (path jitter), 24
NTP broadcast association
mode+authentication, 135 NQA client operation (SNMP), 18
NTP broadcast mode dynamic associations NQA client operation (TCP), 18
max, 118 NQA client operation (UDP echo), 19
building NQA client operation (UDP jitter), 16
packet capture display filter, 346, 348 NQA client operation (UDP tracert), 20
packet capture filter, 342, 345 NQA client operation (voice), 22
NQA client operation scheduling, 31
C
NQA client statistics collection, 29
capturing NQA client template, 31
packet capture configuration, 342, 350 NQA client template (DNS), 33
packet capture configuration (feature NQA client template (FTP), 41
image-based), 350
NQA client template (HTTP), 38
Chef
NQA client template (HTTPS), 39
client configuration, 376
NQA client template (ICMP), 32
configuration, 373, 377, 377
NQA client template (RADIUS), 42
configuration file, 374
NQA client template (SSL), 43
network framework, 373
NQA client template (TCP half open), 35
resources, 374, 380
NQA client template (TCP), 34
resources (netdev_device), 380
NQA client template (UDP), 36
resources (netdev_interface), 380
NQA client template optional parameters, 44
resources (netdev_l2_interface), 382
NQA client threshold monitoring, 8, 27
resources (netdev_l2vpn), 383
NQA client+Track collaboration, 27 packet capture display filter operator, 347
NQA collaboration configuration, 68 packet capture filter operator, 343
NQA enable, 9 conditional match
NQA operation, 9 NETCONF data filtering, 219
NQA operation configuration (DHCP), 50 NETCONF data filtering (column-based), 216
NQA operation configuration (DLSw), 65 configuration
NQA operation configuration (DNS), 51 NETCONF configuration modification, 223
NQA operation configuration (FTP), 52 configuration file
NQA operation configuration (HTTP), 53 Chef configuration file, 374
NQA operation configuration (ICMP echo), 46 configuration management
NQA operation configuration (ICMP jitter), 47 Chef configuration, 373, 377, 377
NQA operation configuration (path jitter), 66 Puppet configuration, 388, 391, 391
NQA operation configuration (SNMP), 57 configure
NQA operation configuration (TCP), 58 RabbiMQ server communication parameters, 365
NQA operation configuration (UDP echo), 59 VCF fabric overlay network border node, 367
NQA operation configuration (UDP jitter), 54 configuring
NQA operation configuration (UDP tracert), 61 AMS, 90
NQA operation configuration (voice), 62 analyzer, 89
SNTP configuration, 107, 138, 141, 141 analyzer instance, 89
client/server analyzer parameters, 89
IPv6 NTP client/server association mode, 121 Chef, 373, 377, 377
NTP association mode, 104, 108 Chef client, 376
NTP client/server association mode, 111, 120 Chef server, 376
NTP client/server association Chef workstation, 376
mode+authentication, 133 collector, 86
NTP client/server mode dynamic associations collector instance, 87
max, 118 collector parameters, 86
clock CWMP, 251, 255, 261
NTP local clock as reference source, 110 CWMP ACS attribute, 256
close-wait timer (CWMP ACS), 261 CWMP ACS attribute (default)(CLI), 257
collaborating CWMP ACS attribute (preferred), 256
NQA client+Track function, 27 CWMP ACS autoconnect parameters, 260
NQA+Track collaboration, 7 CWMP ACS close-wait timer, 261
collecting CWMP ACS connection retry max number, 260
sFlow agent+collector information CWMP ACS periodic Inform feature, 260
configuration, 316
CWMP CPE ACS authentication parameters, 258
troubleshooting sFlow remote collector cannot
CWMP CPE ACS connection interface, 259
receive packets, 320
CWMP CPE ACS provision code, 259
common
CWMP CPE attribute, 258
information center standard system logs, 321
EAA, 270, 277
community
EAA environment variable (user-defined), 273
SNMPv1 community direct configuration, 159
EAA event monitor policy (CLI), 278
SNMPv1 community indirect configuration,
160 EAA event monitor policy (Track), 279
SNMPv1 configuration, 159, 159 EAA monitor policy, 274
SNMPv2c community direct configuration by EAA monitor policy (CLI-defined+environment
community name, 159 variables), 281
SNMPv2c community indirect configuration by EAA monitor policy (Tcl-defined), 277
creating SNMPv2c user, 160 eMDI, 404, 409, 409, 411
SNMPv2c configuration, 159, 159 end-to-end iNQA packet loss measurement, 92
comparing EPS agent, 401, 402, 402
Event MIB, 180, 182, 189 NQA client operation (path jitter), 24
Event MIB event, 183 NQA client operation (SNMP), 18
Event MIB trigger test, 185 NQA client operation (TCP), 18
Event MIB trigger test (Boolean), 191 NQA client operation (UDP echo), 19
Event MIB trigger test (existence), 189 NQA client operation (UDP jitter), 16
Event MIB trigger test (threshold), 187, 194 NQA client operation (UDP tracert), 20
feature image-based packet capture, 349 NQA client operation (voice), 22
flow mirroring, 309, 314 NQA client operation optional parameters, 26
flow mirroring traffic behavior, 312 NQA client statistics collection, 29
flow mirroring traffic class, 311 NQA client template, 31
information center, 321, 326, 338 NQA client template (DNS), 33
information center log output (console), 338 NQA client template (FTP), 41
information center log output (Linux log host), NQA client template (HTTP), 38
340 NQA client template (HTTPS), 39
information center log output (UNIX log host), NQA client template (ICMP), 32
339 NQA client template (RADIUS), 42
information center log suppression, 333 NQA client template (SSL), 43
information center log suppression for module, NQA client template (TCP half open), 35
334
NQA client template (TCP), 34
information center trace log file max size, 337
NQA client template (UDP), 36
iNQA, 79, 86, 92
NQA client template optional parameters, 44
IPv6 NTP client/server association mode, 121
NQA client threshold monitoring, 27
IPv6 NTP multicast association mode, 130
NQA client+Track collaboration, 27
IPv6 NTP symmetric active/passive
NQA collaboration, 68
association mode, 124
NQA operation (DHCP), 50
Layer 2 remote port mirroring, 295
NQA operation (DLSw), 65
Layer 2 remote port mirroring (egress port),
306 NQA operation (DNS), 51
Layer 2 remote port mirroring (reflector port NQA operation (FTP), 52
configurable), 303 NQA operation (HTTP), 53
local port mirroring, 292 NQA operation (ICMP echo), 46
local port mirroring (multi-monitoring-devices), NQA operation (ICMP jitter), 47
294 NQA operation (path jitter), 66
local port mirroring (multiple monitoring NQA operation (SNMP), 57
devices), 302 NQA operation (TCP), 58
local port mirroring (source port mode), 301 NQA operation (UDP echo), 59
local port mirroring group monitor port, 293 NQA operation (UDP jitter), 54
local port mirroring group source ports, 293 NQA operation (UDP tracert), 61
mirroring sources, 293 NQA operation (voice), 62
MP, 88 NQA server, 9
NETCONF, 197, 199 NQA template (DNS), 71
NQA, 7, 8, 46 NQA template (FTP), 75
NQA client history record save, 30 NQA template (HTTP), 74
NQA client operation, 9 NQA template (HTTPS), 75
NQA client operation (DHCP), 12 NQA template (ICMP), 70
NQA client operation (DLSw), 24 NQA template (RADIUS), 76
NQA client operation (DNS), 13 NQA template (SSL), 77
NQA client operation (FTP), 14 NQA template (TCP half open), 72
NQA client operation (HTTP), 15 NQA template (TCP), 72
NQA client operation (ICMP echo), 10 NQA template (UDP), 73
NQA client operation (ICMP jitter), 11 NTP, 102, 107, 120
NTP association mode, 108 SNMP, 155, 167
NTP broadcast association mode, 109, 125 SNMP common parameters, 158
NTP broadcast mode authentication, 114 SNMP logging, 165
NTP broadcast mode+authentication, 135 SNMP notification, 162
NTP client/server association mode, 108, 120 SNMPv1, 167
NTP client/server mode authentication, 111 SNMPv1 community, 159, 159
NTP client/server mode+authentication, 133 SNMPv1 community by community name, 159
NTP dynamic associations max, 118 SNMPv1 community by creating SNMPv1 user,
NTP local clock as reference source, 110 160
NTP multicast association mode, 110, 127 SNMPv1 host notification send, 163
NTP multicast mode authentication, 116 SNMPv2c, 167
NTP optional parameters, 117 SNMPv2c community, 159, 159
NTP symmetric active/passive association SNMPv2c community by community name, 159
mode, 108, 123 SNMPv2c community by creating SNMPv2c user,
NTP symmetric active/passive mode 160
authentication, 113 SNMPv2c host notification send, 163
packet capture, 342, 350 SNMPv3, 168
packet capture (feature image-based), 350 SNMPv3 group and user, 160
packet loss notification, 91 SNMPv3 group and user in FIPS mode, 162
PMM kernel thread deadloop detection, 286 SNMPv3 group and user in non-FIPS mode, 161
PMM kernel thread starvation detection, 287 SNMPv3 host notification send, 163
PoE, 143, 145 SNTP, 107, 138, 141, 141
PoE (uni-PSE device), 152, 152 SNTP authentication, 139
PoE PI power management, 149 SQA, 413, 413
PoE PI power max, 148 UDP data flow, 411
PoE power, 148 VCF fabric, 354, 360
PoE profile PI, 150 VCF fabric automated underlay network
PoE PSE power max, 148 deployment, 362, 364
PoE PSE power monitoring, 149 VCF fabric MAC address of VSI interfaces, 368
point-to-point iNQA packet loss measurement, connecting
96 CWMP ACS connection initiation, 260
port mirroring, 301 CWMP ACS connection retry max number, 260
port mirroring remote destination group CWMP CPE ACS connection interface, 259
monitor port, 297 console
port mirroring remote probe VLAN, 298 information center log output, 328
Puppet, 388, 391, 391 information center log output configuration, 338
remote port mirroring source group egress NETCONF over console session establishment,
port, 300 203
remote port mirroring source group reflector content
port, 299 packet file content display, 350
remote port mirroring source group source controlling
ports, 299
RMON history control entry, 173
RMON, 171, 176
converging
RMON alarm, 174, 177
VCF fabric configuration, 354, 360
RMON Ethernet statistics group, 176
cookbook
RMON history group, 176
Chef resources, 374
RMON statistics, 173
CPE
sFlow, 316, 318, 318
CWMP ACS-CPE autoconnect, 253
sFlow agent+collector information, 316
CPU
sFlow counter sampling, 318
flow mirroring configuration, 309, 314
sFlow flow sampling, 317
creating
local port mirroring group, 293 deadloop detection (Linux kernel PMM), 286
remote port mirroring destination group, 297 debugging
remote port mirroring source group, 298 feature module, 6
RMON Ethernet statistics entry, 173 system, 5
RMON history control entry, 173 system maintenance, 1
customer premise equipment (CPE) default
CPE WAN Management Protocol. Use CWMP information center log default output rules, 322
CWMP NETCONF non-default settings retrieval, 207
ACS attribute (default)(CLI), 257 system information default output rules
ACS attribute (preferred), 256 (diagnostic log), 322
ACS attribute configuration, 256 system information default output rules (hidden
ACS autoconnect parameters, 260 log), 323
ACS HTTPS SSL client policy, 258 system information default output rules (security
log), 322
ACS-CPE autoconnect, 253
system information default output rules (trace log),
autoconfiguration server (ACS), 251
323
basic functions, 251
deploying
configuration, 251, 255, 261
VCF fabric automated deployment, 358
connection establishment, 254
VCF fabric automated underlay network
CPE ACS authentication parameters, 258 deployment configuration, 364
CPE ACS connection interface, 259 deployment
CPE ACS provision code, 259 VCF fabric automated underlay network
CPE attribute configuration, 258 deployment configuration, 362
customer premise equipment (CPE), 251
DHCP server, 251 information center system logs, 322
DNS server, 251 port mirroring, 289
enable, 256 port mirroring destination device, 289
how it works, 253 detecting
main/backup ACS switchover, 254 PMM kernel thread deadloop detection, 286
network framework, 251 PMM kernel thread starvation detection, 287
RPC methods, 253 PoE PD (nonstandard), 147
settings display, 261 determining
D ping address reachability, 2
device
data
Chef configuration, 373, 377, 377
feature image-based packet capture data
display filter, 349 Chef resources (netdev_device), 380
NETCONF configuration data retrieval (all configuration information retrieval, 204
modules), 211 CWMP configuration, 251, 255, 261
NETCONF configuration data retrieval feature image-based packet capture configuration,
(Syslog module), 212 349
NETCONF data entry retrieval (interface feature image-based packet capture file save,
table), 209 349
NETCONF filtering (column-based), 215 information center configuration, 321, 326, 338
NETCONF filtering (column-based) information center log output configuration
(conditional match), 216 (console), 338, 338
NETCONF filtering (column-based) (full information center log output configuration (Linux
match), 215 log host), 340
NETCONF filtering (column-based) (regex information center log output configuration (UNIX
match), 216 log host), 339
NETCONF filtering (conditional match), 219 information center system log types, 321
NETCONF filtering (regex match), 217 IPv6 NTP multicast association mode, 130
NETCONF filtering (table-based), 214 Layer 2 remote port mirroring (egress port), 306
Layer 2 remote port mirroring (reflector port SNMP common parameter configuration, 158
configurable), 303 SNMP configuration, 155, 167
Layer 2 remote port mirroring configuration, SNMP MIB, 155
295 SNMP notification, 162
local port mirroring (multi-monitoring-devices), SNMP view-based MIB access control, 155
294
SNMPv1 community configuration, 159, 159
local port mirroring (source port mode), 301
SNMPv1 community configuration by community
local port mirroring configuration, 292 name, 159
local port mirroring configuration (multiple SNMPv1 community configuration by creating
monitoring devices), 302 SNMPv1 user, 160
local port mirroring group monitor port, 293 SNMPv1 configuration, 167
NETCONF capability exchange, 204 SNMPv2c community configuration, 159, 159
NETCONF CLI operations, 231, 232 SNMPv2c community configuration by community
NETCONF configuration, 197, 199, 199 name, 159
NETCONF configuration modification, 222 SNMPv2c community configuration by creating
NETCONF device configuration+state SNMPv2c user, 160
information retrieval, 205 SNMPv2c configuration, 167
NETCONF information retrieval, 208 SNMPv3 configuration, 168
NETCONF management, 199 SNMPv3 group and user configuration, 160
NETCONF non-default settings retrieval, 207 SNMPv3 group and user configuration in FIPS
NETCONF running configuration lock/unlock, mode, 162
220, 221 SNMPv3 group and user configuration in
NETCONF session information retrieval, 209, non-FIPS mode, 161
213 device role
NETCONF session termination, 238 master spine node configuration, 363
NETCONF YANG file content retrieval, 208 VCF fabric automated underlay network device
NQA client operation, 9 role configuration, 362
NQA collaboration configuration, 68 DHCP
NQA operation configuration (DHCP), 50 CWMP DHCP server, 251
NQA operation configuration (DNS), 51 NQA client operation, 12
NQA server, 9 NQA operation configuration, 50
NTP architecture, 103 diagnosing
NTP broadcast association mode, 125 information center diagnostic log, 321
NTP broadcast mode+authentication, 135 information center diagnostic log save (log file),
NTP multicast association mode, 127 336
packet capture configuration (feature direction
image-based), 350 port mirroring (bidirectional), 289
PoE configuration, 143, 145 port mirroring (inbound), 289
PoE profile PI configuration, 150 port mirroring (outbound), 289
port mirroring configuration, 289, 301 disabling
port mirroring remote destination group, 297 information center interface link up/link down log
port mirroring remote source group, 298 generation, 334
port mirroring remote source group egress NTP message receiving, 118
port, 300 display
port mirroring remote source group reflector EPS agent, 402
port, 299 displaying
port mirroring remote source group source CWMP settings, 261
ports, 299 EAA settings, 277
port mirroring source device, 289 eMDI, 411
Puppet configuration, 388, 391, 391 Event MIB, 189
Puppet resources (netdev_device), 392 feature image-based packet capture data display
Puppet shutdown, 390 filter, 349
information center, 337 event monitor policy environment variable, 272
iNQA analyzer, 91 event monitor policy runtime, 272
iNQA collector, 91 event monitor policy user role, 272
NQA, 45 event source, 270
NTP, 120 how it works, 270
packet capture display filter configuration, 346, monitor policy, 271
348 monitor policy configuration, 274
packet file content, 350 monitor policy configuration
PMM, 284 (CLI-defined+environment variables), 281
PMM kernel threads, 287 monitor policy configuration (Tcl-defined), 277
PMM user processes, 286 monitor policy configuration restrictions, 274
PoE, 152 monitor policy configuration restrictions (Tcl), 276
port mirroring, 301 monitor policy suspension, 276
RMON settings, 175 RTM, 270
sFlow, 318 settings display, 277
SNMP settings, 166 echo
SNTP, 140 NQA client operation (ICMP echo), 10
SQA, 413 NQA operation configuration (ICMP echo), 46
user PMM, 285 NQA operation configuration (UDP echo), 59
VCF fabric, 369 egress port
DLSw Layer 2 remote port mirroring, 289
NQA client operation, 24 Layer 2 remote port mirroring (egress port), 306
NQA operation configuration, 65 port mirroring remote source group egress port,
DNS 300
CWMP DNS server, 251 electricity
NQA client operation, 13 PoE configuration, 143, 145
NQA client template, 33 Embedded Automation Architecture. Use EAA
NQA operation configuration, 51 eMDI
NQA template configuration, 71 configuration, 404, 411
domain display, 411
name system. Use DNS implementation, 404
DSCP monitoring indicators, 405
NTP packet value setting, 119 prerequisites, 409
DSL network restrictions and guidelines, 408
CWMP configuration, 251, 255 starting the instance, 410
duplicate log suppression, 333 stopping the instance, 410
dynamic enable
Dynamic Host Configuration Protocol. Use VCF fabric local proxy ARP, 368
DHCP VCF fabric overlay network L2 agent, 366
NTP dynamic associations max, 118 VCF fabric overlay network L3 agent, 367
E enabling
CWMP, 256
EAA
Event MIB SNMP notification, 188
configuration, 270, 277
information center, 328
environment variable configuration
information center duplicate log suppression, 333
(user-defined), 273
information center synchronous output, 333
event monitor, 270
information center system log SNMP notification,
event monitor policy action, 272
334
event monitor policy configuration (CLI), 278
measurement functionality, 90
event monitor policy configuration (Track), 279
NQA client, 9
event monitor policy element, 271
packet loss measurement, 88
PoE over-temperature protection, 151 NETCONF module report event subscription, 235
PoE PD detection (nonstandard), 147 NETCONF module report event subscription
PoE PI, 145 cancel, 236
SNMP agent, 157 NETCONF monitoring event subscription, 234
SNMP notification, 162 NETCONF syslog event subscription, 233
SNMP version, 157 RMON event group, 171
SNTP, 138 Event Management Information Base. See Event MIB
VCF fabric topology discovery, 361 Event MIB
environment configuration, 180, 182, 189
EAA environment variable configuration display, 189
(user-defined), 273 event actions, 181
EAA event monitor policy environment event configuration, 183
variable, 272 monitored object, 180
PoE over-temperature protection enable, 151 object owner, 182
EPS agent SNMP notification enable, 188
configuration, 401, 402, 402 trigger test configuration, 185
display, 402 trigger test configuration (Boolean), 191
establishing trigger test configuration (existence), 189
NETCONF over console sessions, 203 trigger test configuration (threshold), 187, 194
NETCONF over SOAP sessions, 202 exchanging
NETCONF over SSH sessions, 203 NETCONF capabilities, 204
NETCONF over Telnet sessions, 203 existence
NETCONF session, 200 Event MIB trigger test, 180
Ethernet Event MIB trigger test configuration, 189
CWMP configuration, 251, 255, 261
F
Layer 2 remote port mirroring configuration,
295 field
PoE configuration, 143, 145 packet capture display filter keyword, 346
port mirroring configuration, 289, 301 file
power over Ethernet. Use PoE Chef configuration file, 374
RMON Ethernet statistics group configuration, information center diagnostic log output
176 destination, 336
RMON statistics configuration, 173 information center log save (log file), 331
RMON statistics entry, 173 information center security log file management,
RMON statistics group, 171 336
sFlow configuration, 316, 318, 318 information center security log save (log file), 335
Ethernet interface NETCONF YANG file content retrieval, 208
Chef resources (netdev_l2_interface), 382 packet file content display, 350
Puppet resources (netdev_l2_interface), 394 filtering
event feature image-based packet capture data display,
349
EAA configuration, 270, 277
NETCONF column-based filtering, 214
EAA environment variable configuration
(user-defined), 273 NETCONF data (conditional match), 219
EAA event monitor, 270 NETCONF data (regex match), 217
EAA event monitor policy element, 271 NETCONF data filtering (column-based), 215
EAA event monitor policy environment NETCONF data filtering (table-based), 214
variable, 272 NETCONF table-based filtering, 214
EAA event source, 270 packet capture display filter configuration, 346,
EAA monitor policy, 271 348
NETCONF event subscription, 233, 237 packet capture filter configuration, 342, 345
FIPS compliance
information center, 326 H
NETCONF, 199 hidden log (information center), 321
SNMP, 156 history
FIPS mode NQA client history record save, 30
SNMPv3 group and user configuration, 162 RMON group, 171
firmware RMON history control entry, 173
PoE PSE firmware upgrade, 151 RMON history group configuration, 176
flow host
    mirroring. See flow mirroring
    Sampled Flow. Use sFlow
flow mirroring
    configuration, 309, 314
    QoS policy application, 313
    QoS policy application (global), 313
    QoS policy application (interface), 313
    QoS policy application (VLAN), 313
    traffic behavior configuration, 312
    traffic class configuration, 311
format
    information center system logs, 323
    NETCONF message, 197
FTP
    NQA client operation, 14
    NQA client template, 41
    NQA operation configuration, 52
    NQA template configuration, 75
full match
    NETCONF data filtering (column-based), 215
G
generating
    information center interface link up/link down log generation, 334
get operation
    SNMP, 156
    SNMP logging, 165
group
    Chef resources (netdev_lagg), 383
    local port mirroring group monitor port, 293
    local port mirroring group source port, 293
    port mirroring group, 289
    Puppet resources (netdev_lagg), 396
    RMON, 171
    RMON alarm, 172
    RMON Ethernet statistics, 171
    RMON event, 171
    RMON history, 171
    RMON private alarm, 172
    SNMPv3 configuration in non-FIPS mode, 161
group and user
    SNMPv3 configuration, 160
    information center log output (log host), 330
HTTP
    NQA client operation, 15
    NQA client template, 38
    NQA operation configuration, 53
    NQA template configuration, 74
HTTPS
    CWMP ACS HTTPS SSL client policy, 258
    NQA client template, 39
    NQA template configuration, 75
I
ICMP
    NQA client operation (ICMP echo), 10
    NQA client operation (ICMP jitter), 11
    NQA client template, 32
    NQA collaboration configuration, 68
    NQA operation configuration (ICMP echo), 46
    NQA operation configuration (ICMP jitter), 47
    NQA template configuration, 70
    ping command, 1
identifying
    tracert node failure, 4
image
    packet capture configuration (feature image-based), 350
    packet capture feature image-based configuration, 349
implementation
    eMDI, 404
inbound
    port mirroring, 289
information
    device configuration information retrieval, 204
information center
    configuration, 321, 326, 338
    default output rules (diagnostic log), 322
    default output rules (hidden log), 323
    default output rules (security log), 322
    default output rules (trace log), 323
    diagnostic log save (log file), 336
    display, 337
    duplicate log suppression, 333
    enable, 328
    FIPS compliance, 326
    interface link up/link down log generation, 334
    log default output rules, 322
    log output (console), 328
    log output (log host), 330
    log output (monitor terminal), 329
    log output configuration (console), 338
    log output configuration (Linux log host), 340
    log output configuration (UNIX log host), 339
    log output destinations, 328
    log save (log file), 331
    log storage period, 332
    log suppression configuration, 333
    log suppression for module, 334
    maintain, 337
    security log file management, 336
    security log management, 335
    security log save (log file), 335
    synchronous log output, 333
    system information log types, 321
    system log destinations, 322
    system log formats and field descriptions, 323
    system log levels, 321
    system log SNMP notification, 334
    trace log file max size, 337
initiating
    CWMP ACS connection initiation, 260
iNQA
    application scenarios, 81
    basic concepts, 79
    benefits, 79
    compatibility, 85
    configuration, 79, 86, 92
    guidelines, 85
    instance configuration requirements, 85
    network configuration requirements, 85
    operating mechanism, 83
    prerequisites, 86
    restrictions, 85
iNQA analyzer
    display, 91
iNQA collector
    display, 91
interface
    Chef resources (netdev_interface), 380
    Puppet resources (netdev_interface), 393
    Puppet resources (netdev_l2_interface), 394
Internet
    NQA configuration, 7, 8, 46
    SNMP common parameter configuration, 158
    SNMP configuration, 155, 167
    SNMP MIB, 155
    SNMPv2c community configuration by community name, 159
    SNMPv2c community configuration by creating SNMPv2c user, 160
    SNMPv1 community configuration, 159
    SNMPv1 community configuration by community name, 159
    SNMPv1 community configuration by creating SNMPv1 user, 160
    SNMPv2c community configuration, 159
    SNMPv3 group and user configuration, 160
    SNMPv3 group and user configuration in FIPS mode, 162
    SNMPv3 group and user configuration in non-FIPS mode, 161
interval
    CWMP ACS periodic Inform feature, 260
IP addressing
    tracert, 2
    tracert node failure identification, 4
IP services
    NQA client history record save, 30
    NQA client operation (DHCP), 12
    NQA client operation (DLSw), 24
    NQA client operation (DNS), 13
    NQA client operation (FTP), 14
    NQA client operation (HTTP), 15
    NQA client operation (ICMP echo), 10
    NQA client operation (ICMP jitter), 11
    NQA client operation (path jitter), 24
    NQA client operation (SNMP), 18
    NQA client operation (TCP), 18
    NQA client operation (UDP echo), 19
    NQA client operation (UDP jitter), 16
    NQA client operation (UDP tracert), 20
    NQA client operation (voice), 22
    NQA client operation optional parameters, 26
    NQA client operation scheduling, 31
    NQA client statistics collection, 29
    NQA client template (DNS), 33
    NQA client template (FTP), 41
    NQA client template (HTTP), 38
    NQA client template (HTTPS), 39
    NQA client template (ICMP), 32
    NQA client template (RADIUS), 42
    NQA client template (SSL), 43
    NQA client template (TCP half open), 35
    NQA client template (TCP), 34
    NQA client template (UDP), 36
    NQA client template optional parameters, 44
    NQA client threshold monitoring, 27
    NQA client+Track collaboration, 27
    NQA collaboration configuration, 68
    NQA configuration, 7, 8, 46
    NQA operation configuration (DHCP), 50
    NQA operation configuration (DLSw), 65
    NQA operation configuration (DNS), 51
    NQA operation configuration (FTP), 52
    NQA operation configuration (HTTP), 53
    NQA operation configuration (ICMP echo), 46
    NQA operation configuration (ICMP jitter), 47
    NQA operation configuration (path jitter), 66
    NQA operation configuration (SNMP), 57
    NQA operation configuration (TCP), 58
    NQA operation configuration (UDP echo), 59
    NQA operation configuration (UDP jitter), 54
    NQA operation configuration (UDP tracert), 61
    NQA operation configuration (voice), 62
    NQA template configuration (DNS), 71
    NQA template configuration (FTP), 75
    NQA template configuration (HTTP), 74
    NQA template configuration (HTTPS), 75
    NQA template configuration (ICMP), 70
    NQA template configuration (RADIUS), 76
    NQA template configuration (SSL), 77
    NQA template configuration (TCP half open), 72
    NQA template configuration (TCP), 72
    NQA template configuration (UDP), 73
IPv6
    NTP client/server association mode, 121
    NTP multicast association mode, 130
    NTP symmetric active/passive association mode, 124
K
kernel thread
    display, 287
    Linux process, 283
    maintain, 287
    PMM, 286
    PMM deadloop detection, 286
    PMM starvation detection, 287
keyword
    packet capture filter, 342
L
L2VPN
    Chef resources (netdev_l2vpn), 383
    Puppet resources (netdev_l2vpn), 395
language
    Puppet configuration, 388, 391
Layer 2
    port mirroring configuration, 289, 301
    remote port mirroring, 290
    remote port mirroring (egress port), 306
    remote port mirroring (reflector port configurable), 303
    remote port mirroring configuration, 295
Layer 3
    port mirroring configuration, 289, 301
    tracert, 2
    tracert node failure identification, 4
level
    information center system logs, 321
link
    information center interface link up/link down log generation, 334
Linux
    information center log host output configuration, 340
    kernel thread, 283
    PMM, 283
    PMM kernel thread, 286
    PMM kernel thread deadloop detection, 286
    PMM kernel thread display, 287
    PMM kernel thread maintain, 287
    PMM kernel thread starvation detection, 287
    PMM user process display, 286
    PMM user process maintain, 286
    Puppet configuration, 388, 391
loading
    NETCONF configuration, 226
local
    NTP local clock as reference source, 110
    port mirroring, 290
    port mirroring configuration, 292
    port mirroring configuration (multi-monitoring-devices), 294
    port mirroring group creation, 293
    port mirroring group monitor port, 293
    port mirroring group source port, 293
locking
    NETCONF running configuration, 220, 221
log field description
    information center system logs, 323
logging
    information center configuration, 321, 326, 338
    information center diagnostic log save (log file), 336
    information center diagnostic logs, 321
    information center duplicate log suppression, 333
    information center hidden logs, 321
    information center interface link up/link down log generation, 334
    information center log default output rules, 322
    information center log output (console), 328
    information center log output (log host), 330
    information center log output (monitor terminal), 329
    information center log output configuration (console), 338
    information center log output configuration (Linux log host), 340
    information center log output configuration (UNIX log host), 339
    information center log save (log file), 331
    information center log storage period, 332
    information center security log file management, 336
    information center security log management, 335
    information center security log save (log file), 335
    information center security logs, 321
    information center standard system logs, 321
    information center synchronous log output, 333
    information center system log destinations, 322
    information center system log formats and field descriptions, 323
    information center system log levels, 321
    information center system log SNMP notification, 334
    information center trace log file max size, 337
    SNMP configuration, 165
    system information default output rules (diagnostic log), 322
    system information default output rules (hidden log), 323
    system information default output rules (security log), 322
    system information default output rules (trace log), 323
logical
    packet capture display filter configuration (logical expression), 348
    packet capture display filter operator, 347
    packet capture filter configuration (logical expression), 345
    packet capture filter operator, 343
M
maintaining
    information center, 337
    PMM kernel thread, 286
    PMM kernel threads, 287
    PMM Linux, 283
    PMM user processes, 286
    process monitoring and maintenance. See PMM
    user PMM, 285
maintenance
    displaying EPS agent, 402
Management Information Base. Use MIB
managing
    information center security log file, 336
    information center security logs, 335
    PoE PI power management configuration, 149
manifest
    Puppet resources, 389, 392
matching
    NETCONF data filtering (column-based), 215
    NETCONF data filtering (column-based) (conditional match), 216
    NETCONF data filtering (column-based) (full match), 215
    NETCONF data filtering (column-based) (regex match), 216
    NETCONF data filtering (conditional match), 219
    NETCONF data filtering (regex match), 217
    NETCONF data filtering (table-based), 214
    packet capture display filter configuration (proto[…] expression), 348
message
    NETCONF format, 197
    NTP message receiving disable, 118
    NTP message source address, 117
MIB
    Event MIB configuration, 180, 182, 189
    Event MIB event actions, 181
    Event MIB event configuration, 183
    Event MIB monitored object, 180
    Event MIB object owner, 182
    Event MIB trigger test configuration, 185
    Event MIB trigger test configuration (Boolean), 191
    Event MIB trigger test configuration (existence), 189
    Event MIB trigger test configuration (threshold), 187, 194
    SNMP, 155
    SNMP Get operation, 156
    SNMP Set operation, 156
    SNMP view-based access control, 155
mirroring
    flow. See flow mirroring
    port. See port mirroring
mode
    NTP association, 108
    NTP broadcast association, 104, 109
    NTP client/server association, 104, 108
    NTP multicast association, 104, 110
    NTP symmetric active/passive association, 104, 108
    PoE PSE firmware upgrade (full mode), 151
    PoE PSE firmware upgrade (refresh mode), 151
    SNMP access control (rule-based), 156
    SNMP access control (view-based), 156
modifying
    NETCONF configuration, 222, 223
module
    feature module debug, 6
    information center configuration, 321, 326, 338
    information center log suppression for module, 334
    NETCONF configuration data retrieval (all modules), 211
    NETCONF configuration data retrieval (Syslog module), 212
    NETCONF module report event subscription, 235
    NETCONF module report event subscription cancel, 236
monitor terminal
    information center log output, 329
monitoring
    EAA configuration, 270
    EAA environment variable configuration (user-defined), 273
    Event MIB configuration, 180, 182, 189
    Event MIB trigger test configuration (Boolean), 191
    Event MIB trigger test configuration (existence), 189
    Event MIB trigger test configuration (threshold), 194
    local port mirroring configuration (multiple monitoring devices), 302
    NETCONF monitoring event subscription, 234
    NQA client threshold monitoring, 27
    NQA threshold monitoring, 8
    PMM, 284
    PMM kernel thread, 286
    PMM Linux, 283
    PoE PSE power monitoring configuration, 149
    process monitoring and maintenance. See PMM
    user PMM, 285
multicast
    IPv6 NTP multicast association mode, 130
    NTP multicast association mode, 104, 110, 127
    NTP multicast mode authentication, 116
    NTP multicast mode dynamic associations max, 118
multimedia
    SQA configuration, 413
N
NETCONF
    capability exchange, 204
    Chef configuration, 373, 377
    CLI operations, 231, 232
    CLI return, 239
    configuration, 197, 199
    configuration data retrieval (all modules), 211
    configuration data retrieval (Syslog module), 212
    configuration load, 226
    configuration modification, 222, 223
    configuration rollback, 226
    configuration rollback (configuration file-based), 227
    configuration rollback (rollback point-based), 227
    configuration save, 224
    data entry retrieval (interface table), 209
    data filtering, 214
    data filtering (conditional match), 219
    data filtering (regex match), 217
    device configuration, 199
    device configuration information retrieval, 204
    device configuration+state information retrieval, 205
    device management, 199
    event subscription, 233, 237
    FIPS compliance, 199
    information retrieval, 208
    message format, 197
    module report event subscription, 235
    module report event subscription cancel, 236
    monitoring event subscription, 234
    NETCONF over console session establishment, 203
    NETCONF over SOAP session establishment, 202
    NETCONF over SSH session establishment, 203
    NETCONF over Telnet session establishment, 203
    non-default settings retrieval, 207
    over SOAP, 197
    protocols and standards, 199
    Puppet configuration, 388, 391
    running configuration lock/unlock, 220, 221
    running configuration save, 225
    session attribute set, 200
    session establishment, 200
    session establishment restrictions, 200
    session information retrieval, 209, 213
    session termination, 238
    structure, 197
    supported operations, 240
    syslog event subscription, 233
    VCF fabric automated underlay NETCONF username and password setting, 363
    YANG file content retrieval, 208
network
    analyzer configuration, 89
    Chef network framework, 373
    Chef resources, 374, 380
    collector configuration, 86
    eMDI configuration, 409
    eMDI implementation, 404
    Event MIB SNMP notification enable, 188
    Event MIB trigger test configuration (Boolean), 191
    Event MIB trigger test configuration (existence), 189
    Event MIB trigger test configuration (threshold), 194
    feature module debug, 6
    flow mirroring configuration, 309, 314
    flow mirroring traffic behavior, 312
    information center diagnostic log save (log file), 336
    information center duplicate log suppression, 333
    information center interface link up/link down log generation, 334
    information center log output configuration (console), 338
    information center log output configuration (Linux log host), 340
    information center log output configuration (UNIX log host), 339
    information center log storage period, 332
    information center security log file management, 336
    information center security log save (log file), 335
    information center synchronous log output, 333
    information center system log SNMP notification, 334
    information center system log types, 321
    information center trace log file max size, 337
    Layer 2 remote port mirroring (egress port), 306
    Layer 2 remote port mirroring (reflector port configurable), 303
    Layer 2 remote port mirroring configuration, 295
    local port mirroring (multi-monitoring-devices), 294
    local port mirroring (source port mode), 301
    local port mirroring configuration, 292
    local port mirroring configuration (multiple monitoring devices), 302
    local port mirroring group monitor port, 293
    local port mirroring group source port, 293
Network Configuration Protocol. Use NETCONF
Network Time Protocol. Use NTP
    NQA client history record save, 30
    NQA client operation, 9
    NQA client operation (DHCP), 12
    NQA client operation (DLSw), 24
    NQA client operation (DNS), 13
    NQA client operation (FTP), 14
    NQA client operation (HTTP), 15
    NQA client operation (ICMP echo), 10
    NQA client operation (ICMP jitter), 11
    NQA client operation (path jitter), 24
    NQA client operation (SNMP), 18
    NQA client operation (TCP), 18
    NQA client operation (UDP echo), 19
    NQA client operation (UDP jitter), 16
    NQA client operation (UDP tracert), 20
    NQA client operation (voice), 22
    NQA client operation optional parameters, 26
    NQA client operation scheduling, 31
    NQA client statistics collection, 29
    NQA client template, 31
    NQA client threshold monitoring, 27
    NQA client+Track collaboration, 27
    NQA collaboration configuration, 68
    NQA operation configuration (DHCP), 50
    NQA operation configuration (DLSw), 65
    NQA operation configuration (DNS), 51
    NQA operation configuration (FTP), 52
    NQA operation configuration (HTTP), 53
    NQA operation configuration (ICMP echo), 46
    NQA operation configuration (ICMP jitter), 47
    NQA operation configuration (path jitter), 66
    NQA operation configuration (SNMP), 57
    NQA operation configuration (TCP), 58
    NQA operation configuration (UDP echo), 59
    NQA operation configuration (UDP jitter), 54
    NQA operation configuration (UDP tracert), 61
    NQA operation configuration (voice), 62
    NQA server, 9
    NQA template configuration (DNS), 71
    NQA template configuration (FTP), 75
    NQA template configuration (HTTP), 74
    NQA template configuration (HTTPS), 75
    NQA template configuration (ICMP), 70
    NQA template configuration (RADIUS), 76
    NQA template configuration (SSL), 77
    NQA template configuration (TCP half open), 72
    NQA template configuration (TCP), 72
    NQA template configuration (UDP), 73
    NTP association mode, 108
    NTP message receiving disable, 118
    ping network connectivity test, 1
    PMM 3rd party process start, 284
    PMM 3rd party process stop, 284
    port mirroring remote destination group, 297
    port mirroring remote source group, 298
    port mirroring remote source group egress port, 300
    port mirroring remote source group reflector port, 299
    port mirroring remote source group source ports, 299
    Puppet network framework, 388
    Puppet resources, 389, 392
    quality analyzer. See NQA
    RMON alarm configuration, 174, 177
    RMON alarm group sample types, 173
    RMON Ethernet statistics group configuration, 176
    RMON history group configuration, 176
    RMON statistics configuration, 173
    RMON statistics function, 173
    sFlow counter sampling configuration, 318
    sFlow flow sampling configuration, 317
    SNMP common parameter configuration, 158
    SNMPv1 community configuration, 159
    SNMPv1 community configuration by community name, 159
    SNMPv1 community configuration by creating SNMPv1 user, 160
    SNMPv2c community configuration, 159
    SNMPv2c community configuration by community name, 159
    SNMPv2c community configuration by creating SNMPv2c user, 160
    SNMPv3 group and user configuration, 160
    SNMPv3 group and user configuration in FIPS mode, 162
    SNMPv3 group and user configuration in non-FIPS mode, 161
    tracert node failure identification, 4
    VCF fabric automated deployment, 358
    VCF fabric automated underlay network deployment configuration, 362, 364
    VCF fabric Neutron deployment, 357
    VCF fabric topology, 354
network management
    Chef configuration, 373, 377
    CWMP basic functions, 251
    CWMP configuration, 251, 255, 261
    EAA configuration, 270, 277
    EPS agent configuration, 401, 402
    Event MIB configuration, 180, 182, 189
    information center configuration, 321, 326, 338
    iNQA configuration, 86
    NETCONF configuration, 197
    NQA configuration, 7, 8, 46
    NTP configuration, 102, 107, 120
    packet capture configuration, 342, 350
    PMM Linux network, 283
    PoE configuration (uni-PSE device), 152
    port mirroring configuration, 289, 301
    Puppet configuration, 388, 391
    RMON configuration, 171, 176
    sFlow configuration, 316, 318
    SNMP configuration, 155, 167
    SNMPv1 configuration, 167
    SNMPv2c configuration, 167
    SNMPv3 configuration, 168
    SQA configuration, 413
    stopping packet capture configuration, 350
    VCF fabric configuration, 354, 360
network monitoring
    SQA configuration, 413
Neutron
    VCF fabric, 356
    VCF fabric Neutron deployment, 357
NMM
    CWMP ACS attributes, 256
    CWMP ACS attributes (default)(CLI), 257
    CWMP ACS attributes (preferred), 256
    CWMP ACS autoconnect parameters, 260
    CWMP ACS HTTPS SSL client policy, 258
    CWMP basic functions, 251
    CWMP configuration, 251, 255, 261
    CWMP CPE ACS authentication parameters, 258
    CWMP CPE ACS connection interface, 259
    CWMP CPE ACS provision code, 259
    CWMP CPE attributes, 258
    CWMP framework, 251
    CWMP settings display, 261
    device configuration information retrieval, 204
    EAA configuration, 270, 277
    EAA environment variable configuration (user-defined), 273
    EAA event monitor, 270
    EAA event monitor policy configuration (CLI), 278
    EAA event monitor policy configuration (Track), 279
    EAA event monitor policy element, 271
    EAA event monitor policy environment variable, 272
    EAA event source, 270
    EAA monitor policy, 271
    EAA monitor policy configuration, 274
    EAA monitor policy configuration (CLI-defined+environment variables), 281
    EAA monitor policy configuration (Tcl-defined), 277
    EAA monitor policy suspension, 276
    EAA RTM, 270
    EAA settings display, 277
    EPS agent configuration, 401, 402
    EPS agent display, 402
    feature image-based packet capture configuration, 349
    feature module debug, 6
    flow mirroring configuration, 309, 314
    flow mirroring QoS policy application, 313
    flow mirroring traffic behavior, 312
    information center configuration, 321, 326, 338
    information center diagnostic log save (log file), 336
    information center display, 337
    information center duplicate log suppression, 333
    information center interface link up/link down log generation, 334
    information center log default output rules, 322
    information center log destinations, 322
    information center log formats and field descriptions, 323
    information center log levels, 321
    information center log output (console), 328
    information center log output (log host), 330
    information center log output (monitor terminal), 329
    information center log output configuration (console), 338
    information center log output configuration (Linux log host), 340
    information center log output configuration (UNIX log host), 339
    information center log output destinations, 328
    information center log save (log file), 331
    information center log storage period, 332
    information center log suppression for module, 334
    information center maintain, 337
    information center security log file management, 336
    information center security log management, 335
    information center security log save (log file), 335
    information center synchronous log output, 333
    information center system log SNMP notification, 334
    information center system log types, 321
    information center trace log file max size, 337
    iNQA configuration, 86
    IPv6 NTP client/server association mode configuration, 121
    IPv6 NTP multicast association mode configuration, 130
    IPv6 NTP symmetric active/passive association mode configuration, 124
    Layer 2 remote port mirroring (egress port), 306
    Layer 2 remote port mirroring (reflector port configurable), 303
    Layer 2 remote port mirroring configuration, 295
    local port mirroring (multi-monitoring-devices), 294
    local port mirroring (source port mode), 301
    local port mirroring configuration, 292
    local port mirroring configuration (multiple monitoring devices), 302
    local port mirroring configuration restrictions (multi-monitoring-devices), 294
    local port mirroring group, 293
    local port mirroring group monitor port, 293
    local port mirroring group source port, 293
    NETCONF capability exchange, 204
    NETCONF CLI operations, 231, 232
    NETCONF CLI return, 239
    NETCONF configuration, 197, 199
    NETCONF configuration data retrieval (all modules), 211
    NETCONF configuration data retrieval (Syslog module), 212
    NETCONF configuration modification, 222, 223
    NETCONF data entry retrieval (interface table), 209
    NETCONF data filtering, 214
    NETCONF device configuration+state information retrieval, 205
    NETCONF event subscription, 233, 237
    NETCONF information retrieval, 208
    NETCONF module report event subscription, 235
    NETCONF module report event subscription cancel, 236
    NETCONF monitoring event subscription, 234
    NETCONF non-default settings retrieval, 207
    NETCONF over console session establishment, 203
    NETCONF over SOAP session establishment, 202
    NETCONF over SSH session establishment, 203
    NETCONF over Telnet session establishment, 203
    NETCONF protocols and standards, 199
    NETCONF running configuration lock/unlock, 220, 221
    NETCONF session establishment, 200
    NETCONF session information retrieval, 209, 213
    NETCONF session termination, 238
    NETCONF structure, 197
    NETCONF supported operations, 240
    NETCONF syslog event subscription, 233
    NETCONF YANG file content retrieval, 208
    NQA client history record save, 30
    NQA client history record save restrictions, 30
    NQA client operation, 9
    NQA client operation (DHCP), 12
    NQA client operation (DLSw), 24
    NQA client operation (DNS), 13
    NQA client operation (FTP), 14
    NQA client operation (HTTP), 15
    NQA client operation (ICMP echo), 10
    NQA client operation (ICMP jitter), 11
    NQA client operation (path jitter), 24
    NQA client operation (SNMP), 18
    NQA client operation (TCP), 18
    NQA client operation (UDP echo), 19
    NQA client operation (UDP jitter), 16
    NQA client operation (UDP tracert), 20
    NQA client operation (voice), 22
    NQA client operation optional parameter configuration restrictions, 26
    NQA client operation optional parameters, 26
    NQA client operation restrictions (FTP), 14
    NQA client operation restrictions (ICMP jitter), 12
    NQA client operation restrictions (UDP jitter), 16
    NQA client operation restrictions (UDP tracert), 20
    NQA client operation restrictions (voice), 22
    NQA client operation scheduling, 31
    NQA client statistics collection, 29
    NQA client statistics collection restrictions, 29
    NQA client template, 31
    NQA client template (DNS), 33
    NQA client template (FTP), 41
    NQA client template (HTTP), 38
    NQA client template (HTTPS), 39
    NQA client template (ICMP), 32
    NQA client template (RADIUS), 42
    NQA client template (SSL), 43
    NQA client template (TCP half open), 35
    NQA client template (TCP), 34
    NQA client template (UDP), 36
    NQA client template configuration restrictions, 31
    NQA client template optional parameter configuration restrictions, 44
    NQA client template optional parameters, 44
    NQA client threshold monitoring, 27
    NQA client threshold monitoring configuration restrictions, 28
    NQA client+Track collaboration, 27
    NQA client+Track collaboration restrictions, 27
    NQA collaboration configuration, 68
    NQA configuration, 7, 8, 46
    NQA display, 45
    NQA operation configuration (DHCP), 50
    NQA operation configuration (DLSw), 65
    NQA operation configuration (DNS), 51
    NQA operation configuration (FTP), 52
    NQA operation configuration (HTTP), 53
    NQA operation configuration (ICMP echo), 46
    NQA operation configuration (ICMP jitter), 47
    NQA operation configuration (path jitter), 66
    NQA operation configuration (SNMP), 57
    NQA operation configuration (TCP), 58
    NQA operation configuration (UDP echo), 59
    NQA operation configuration (UDP jitter), 54
    NQA operation configuration (UDP tracert), 61
    NQA operation configuration (voice), 62
    NQA server, 9
    NQA server configuration restrictions, 9
    NQA template configuration (DNS), 71
    NQA template configuration (FTP), 75
    NQA template configuration (HTTP), 74
    NQA template configuration (HTTPS), 75
    NQA template configuration (ICMP), 70
    NQA template configuration (RADIUS), 76
    NQA template configuration (SSL), 77
    NQA template configuration (TCP half open), 72
    NQA template configuration (TCP), 72
    NQA template configuration (UDP), 73
    NQA threshold monitoring, 8
    NQA+Track collaboration, 7
    NTP architecture, 103
    NTP association mode, 108
    NTP authentication configuration, 111
    NTP broadcast association mode configuration, 109, 125
    NTP broadcast mode authentication configuration, 114
    NTP broadcast mode+authentication, 135
    NTP client/server association mode configuration, 120
    NTP client/server mode authentication configuration, 111
    NTP client/server mode+authentication, 133
    NTP configuration, 102, 107, 120
    NTP display, 120
    NTP dynamic associations max, 118
    NTP local clock as reference source, 110
    NTP message receiving disable, 118
    NTP message source address specification, 117
    NTP multicast association mode, 110
    NTP multicast association mode configuration, 127
    NTP multicast mode authentication configuration, 116
    NTP optional parameter configuration, 117
    NTP packet DSCP value setting, 119
    NTP protocols and standards, 106, 138
    NTP security, 105
    NTP symmetric active/passive association mode configuration, 123
    NTP symmetric active/passive mode authentication configuration, 113
    packet capture configuration, 342, 350
    packet capture configuration (feature image-based), 350
    packet capture display filter configuration, 346, 348
    packet capture filter configuration, 342, 345
    packet file content display, 350
    ping address reachability determination, 2
    ping command, 1
    ping network connectivity test, 1
    PoE configuration, 143, 145
    PoE configuration (uni-PSE device), 152
    PoE display, 152
    PoE over-temperature protection enable, 151
    PoE PD detection (nonstandard), 147
    PoE PI power management configuration, 149
    PoE PI power max configuration, 148
    PoE power configuration, 148
    PoE profile application, 150
    PoE profile PI configuration, 150
    PoE PSE firmware upgrade, 151
    PoE PSE power max configuration, 148
    PoE PSE power monitoring configuration, 149
    PoE troubleshooting, 153
    port mirroring classification, 290
    port mirroring configuration, 289, 301
    port mirroring display, 301
    port mirroring remote destination group, 297
    port mirroring remote source group, 298
    RMON alarm configuration, 177
    RMON configuration, 171, 176
    RMON Ethernet statistics group configuration, 176
    RMON group, 171
    RMON history group configuration, 176
    RMON protocols and standards, 173
    RMON settings display, 175
    sFlow agent+collector information configuration, 316
    sFlow configuration, 316, 318
    sFlow counter sampling configuration, 318
    sFlow display, 318
    sFlow flow sampling configuration, 317
    sFlow protocols and standards, 316
    SNMP access control mode, 156
    SNMP configuration, 155, 167
    SNMP framework, 155
    SNMP Get operation, 156
    SNMP host notification send, 163
    SNMP logging configuration, 165
    SNMP MIB, 155
    SNMP notification, 162
    SNMP protocol versions, 156
    SNMP settings display, 166
    SNMP view-based MIB access control, 155
    SNMPv1 configuration, 167
    SNMPv2c configuration, 167
    SNMPv3 configuration, 168
    SNTP authentication, 139
    SNTP configuration, 107, 138, 141
    SNTP display, 140
    SNTP enable, 138
    system debugging, 1, 5
    system information default output rules (diagnostic log), 322
    system information default output rules (hidden log), 323
    system information default output rules (security log), 322
    system information default output rules (trace log), 323
    system maintenance, 1
    tracert, 2
    tracert node failure identification, 4
    troubleshooting sFlow, 320
    troubleshooting sFlow remote collector cannot receive packets, 320
    VCF fabric configuration, 360
    VCF fabric topology discovery, 361
NMS
    Event MIB SNMP notification enable, 188
    RMON configuration, 171, 176
    SNMP Notification operation, 156
    SNMP protocol versions, 156
    SNMP Set operation, 156
node
    Event MIB monitored object, 180
non-default
    NETCONF non-default settings retrieval, 207
non-FIPS mode
    SNMPv3 group and user configuration, 161
notifying
    Event MIB SNMP notification enable, 188
    information center system log SNMP notification, 334
    NETCONF syslog event subscription, 233
    SNMP configuration, 155, 167
    SNMP host notification send, 163
    SNMP notification, 162
    SNMP Notification operation, 156
NQA
    client enable, 9
    client history record save, 30
    client history record save restrictions, 30
    client operation, 9
    client operation (DHCP), 12
    client operation (DLSw), 24
    client operation (DNS), 13
    client operation (FTP), 14
    client operation (HTTP), 15
    client operation (ICMP echo), 10
    client operation (ICMP jitter), 11
    client operation (path jitter), 24
    client operation (SNMP), 18
    client operation (TCP), 18
    client operation (UDP echo), 19
    client operation (UDP jitter), 16
    client operation (UDP tracert), 20
    client operation (voice), 22
    client operation optional parameter configuration restrictions, 26
    client operation optional parameters, 26
    client operation restrictions (FTP), 14
    client operation restrictions (ICMP jitter), 12
    client operation restrictions (UDP jitter), 16
    client operation restrictions (UDP tracert), 20
    client operation restrictions (voice), 22
    client operation scheduling, 31
    client operation scheduling restrictions, 31
    client statistics collection, 29
    client statistics collection restrictions, 29
    client template (DNS), 33
    client template (FTP), 41
    client template (HTTP), 38
    client template (HTTPS), 39
    client template (ICMP), 32
    client template (RADIUS), 42
    client template (SSL), 43
    client template (TCP half open), 35
    client template (TCP), 34
    client template (UDP), 36
    client template configuration, 31
    client template configuration restrictions, 31
    client template optional parameter configuration restrictions, 44
    client template optional parameters, 44
    client threshold monitoring, 27
    client threshold monitoring configuration restrictions, 28
    client+Track collaboration, 27
    client+Track collaboration restrictions, 27
    collaboration configuration, 68
    configuration, 7, 8, 46
    display, 45
    how it works, 7
    operation configuration (DHCP), 50
    operation configuration (DLSw), 65
    operation configuration (DNS), 51
    operation configuration (FTP), 52
    operation configuration (HTTP), 53
    operation configuration (ICMP echo), 46
    operation configuration (ICMP jitter), 47
    operation configuration (path jitter), 66
    operation configuration (SNMP), 57
    operation configuration (TCP), 58
    operation configuration (UDP echo), 59
    operation configuration (UDP jitter), 54
    operation configuration (UDP tracert), 61
    operation configuration (voice), 62
    server configuration, 9
    server configuration restrictions, 9
    template configuration (DNS), 71
    template configuration (FTP), 75
    template configuration (HTTP), 74
    template configuration (HTTPS), 75
    template configuration (ICMP), 70
    template configuration (RADIUS), 76
    template configuration (SSL), 77
    template configuration (TCP half open), 72
    template configuration (TCP), 72
    template configuration (UDP), 73
    threshold monitoring, 8
    Track collaboration function, 7
NTP
    access control, 105
    architecture, 103
    association mode configuration, 108
    authentication, 106
    authentication configuration, 111
    broadcast association mode, 104
    broadcast association mode configuration, 109, 125
    broadcast mode authentication configuration, 114
    broadcast mode dynamic associations max, 118
    broadcast mode+authentication, 135
    client/server association mode, 104
    client/server association mode configuration, 108, 120
    client/server mode authentication configuration, 111
    client/server mode dynamic associations max, 118
    client/server mode+authentication, 133
    configuration, 102, 107, 120
    configuration restrictions, 106
    display, 120
    IPv6 client/server association mode configuration, 121
    IPv6 multicast association mode configuration, 130
    IPv6 symmetric active/passive association mode configuration, 124
    local clock as reference source, 110
    message receiving disable, 118
    message source address specification, 117
    multicast association mode, 104
    multicast association mode configuration, 110, 127
    multicast mode authentication configuration, 116
    multicast mode dynamic associations max, 118
    optional parameter configuration, 117
    packet DSCP value setting, 119
    protocols and standards, 106, 138
    security, 105
    SNTP authentication, 139
    SNTP configuration, 107, 138, 141, 141
    SNTP configuration restrictions, 138
    symmetric active/passive association mode, 104
    symmetric active/passive association mode configuration, 108, 123
    symmetric active/passive mode authentication configuration, 113
    symmetric active/passive mode dynamic associations max, 118
O
object
    Event MIB monitored, 180
    Event MIB object owner, 182
operating mechanism
    iNQA, 83
outbound
    port mirroring, 289
outputting
    information center log configuration (console), 338
    information center log configuration (Linux log host), 340
    information center log default output rules, 322
    information center logs configuration (UNIX log host), 339
    information center synchronous log output, 333
    information logs (console), 328
    information logs (log host), 330
    information logs (monitor terminal), 329
    information logs to various destinations, 328
over-temperature protection (PoE), 151
P
packet
    flow mirroring configuration, 309, 314
    flow mirroring QoS policy application, 313
    flow mirroring traffic behavior, 312
    NTP DSCP value setting, 119
    packet capture display filter configuration (packet field expression), 348
    port mirroring configuration, 289, 301
    SNTP configuration, 107, 138, 141, 141
packet capture
    capture filter keywords, 342
    capture filter operator, 343
    configuration, 342, 350
    display filter configuration, 346, 348
    display filter configuration (logical expression), 348
    display filter configuration (packet field expression), 348
    display filter configuration (proto[…] expression), 348
    display filter configuration (relational expression), 348
    display filter keyword, 346
    display filter operator, 347
    feature image-based configuration, 349, 350
    feature image-based file save, 349
    feature image-based packet data display filter, 349
    file content display, 350
    filter configuration, 342, 345
    filter configuration (expr relop expr expression), 345
    filter configuration (logical expression), 345
    filter configuration (proto [ exprsize ] expression), 345
    filter configuration (vlan vlan_id expression), 345
    stopping, 350
parameter
    CWMP CPE ACS authentication, 258
    NQA client history record save, 30
    NQA client operation optional parameters, 26
    NQA client template optional parameters, 44
    NTP dynamic associations max, 118
    NTP local clock as reference source, 110
    NTP message receiving disable, 118
    NTP message source address, 117
    NTP optional parameter configuration, 117
    SNMP common parameter configuration, 158
    SNMPv3 group and user configuration in FIPS mode, 162
path
    NQA client operation (path jitter), 24
    NQA operation configuration, 66
pause
    automated underlay network deployment, 363
PD
    PoE PD detection (nonstandard), 147
performing
    NETCONF CLI operations, 231, 232
PI
    PoE enable, 145
    PoE profile PI configuration, 150
    power management configuration, 149
    power max configuration, 148
    troubleshooting PoE profile, 154
    troubleshooting priority failure, 153
ping
    address reachability determination, 1, 2
    network connectivity test, 1
    system maintenance, 1
PMM
    3rd party process start, 284
    3rd party process stop, 284
    display, 284
    kernel thread deadloop detection, 286
    kernel thread maintain, 286
    kernel thread monitoring, 286
    kernel thread starvation detection, 287
    Linux kernel thread, 283
    Linux network, 283
    Linux user, 283
    monitor, 284
    user PMM display, 285
    user PMM maintain, 285
    user PMM monitor, 285
PoE
    configuration, 143, 145
    configuration (uni-PSE device), 152, 152
    display, 152
    enable for PI, 145
    over-temperature protection enable, 151
    PD detection (nonstandard), 147
    PI power management configuration, 149
    PI power max configuration, 148
    power configuration, 148
    profile application, 150
    profile PI configuration, 150
    PSE firmware upgrade, 151
    PSE power max configuration, 148
    PSE power monitoring configuration, 149
    troubleshoot, 153
    troubleshoot PI critical priority failure, 153
    troubleshoot PoE profile failure, 154
policy
    CWMP ACS HTTPS SSL client policy, 258
    EAA configuration, 270, 277
    EAA environment variable configuration (user-defined), 273
    EAA event monitor policy configuration (CLI), 278
    EAA event monitor policy configuration (Track), 279
    EAA event monitor policy element, 271
    EAA event monitor policy environment variable, 272
    EAA monitor policy, 271
    EAA monitor policy configuration, 274
    EAA monitor policy configuration (CLI-defined+environment variables), 281
    EAA monitor policy configuration (Tcl-defined), 277
    EAA monitor policy suspension, 276
    flow mirroring QoS policy application, 313
port
    IPv6 NTP client/server association mode, 121
    IPv6 NTP multicast association mode, 130
    IPv6 NTP symmetric active/passive association mode, 124
    mirroring. See port mirroring
    NTP association mode, 108
    NTP broadcast association mode, 125
    NTP broadcast mode+authentication, 135
    NTP client/server association mode, 120
    NTP client/server mode+authentication, 133
    NTP configuration, 102, 107, 120
    NTP multicast association mode, 127
    NTP symmetric active/passive association mode, 123
    SNTP configuration, 107, 138, 141, 141
port mirroring
    classification, 290
    configuration, 289, 301
    configuration restrictions, 292
    display, 301
    Layer 2 remote configuration, 295
    Layer 2 remote port mirroring, 290
    Layer 2 remote port mirroring configuration (egress port), 306
    Layer 2 remote port mirroring configuration (reflector port configurable), 303
    Layer 2 remote port mirroring configuration restrictions, 295
    Layer 2 remote port mirroring egress port configuration restrictions, 300
    Layer 2 remote port mirroring reflector port configuration restrictions, 299
    Layer 2 remote port mirroring remote destination group configuration restrictions, 297
    Layer 2 remote port mirroring remote probe VLAN configuration restrictions, 298, 298, 298
    Layer 2 remote port mirroring source port configuration restrictions, 299
    local configuration, 292
    local group creation, 293
    local group monitor port, 293
    local group monitor port configuration restrictions, 293
    local group source port, 293
    local mirroring configuration (source port mode), 301
    local port mirroring, 290
    local port mirroring configuration (multi-monitoring-devices), 294
    local port mirroring configuration (multiple monitoring devices), 302
    local port mirroring configuration restrictions (multi-monitoring-devices), 294
    mirroring source configuration, 293
    monitor port to remote probe VLAN assignment, 298
    remote probe VLAN, 298
    remote destination group creation, 297
    remote destination group monitor port, 297
    remote source group creation, 298
    terminology, 289
portal authentication
    configuration, 413
power
    over Ethernet. Use PoE
    PoE configuration, 148
    PoE configuration (uni-PSE device), 152, 152
    PoE PI max configuration, 148
    PoE PI power management configuration, 149
    PoE PSE max configuration, 148
    sourcing equipment. Use PSE
private
    RMON private alarm group, 172
procedure
    applying flow mirroring QoS policy, 313
    applying flow mirroring QoS policy (global), 313
    applying flow mirroring QoS policy (interface), 313
    applying flow mirroring QoS policy (VLAN), 313
    applying PoE profile, 150
    assigning CWMP ACS attribute (preferred)(CLI), 257
    assigning CWMP ACS attribute (preferred)(DHCP server), 256
    canceling subscription to NETCONF module report event, 236
    configuring a Puppet agent, 390
    configuring border node, 367
    configuring Chef, 377, 377
    configuring Chef client, 376
    configuring Chef server, 376
    configuring Chef workstation, 376
    configuring CWMP, 255, 261
    configuring CWMP ACS attribute, 256
    configuring CWMP ACS attribute (default)(CLI), 257
    configuring CWMP ACS attribute (preferred), 256
    configuring CWMP ACS autoconnect parameters, 260
    configuring CWMP ACS close-wait timer, 261
    configuring CWMP ACS connection retry max number, 260
    configuring CWMP ACS periodic Inform feature, 260
    configuring CWMP CPE ACS authentication parameters, 258
    configuring CWMP CPE ACS connection interface, 259
    configuring CWMP CPE ACS provision code, 259
    configuring CWMP CPE attribute, 258
    configuring EAA environment variable (user-defined), 273
    configuring EAA event monitor policy (CLI), 278
    configuring EAA event monitor policy (Track), 279
    configuring EAA monitor policy, 274
    configuring EAA monitor policy (CLI-defined+environment variables), 281
    configuring EAA monitor policy (Tcl-defined), 277
    configuring EPS agent, 402, 402
    configuring Event MIB, 182
    configuring Event MIB event, 183
    configuring Event MIB trigger test, 185
    configuring Event MIB trigger test (Boolean), 191
    configuring Event MIB trigger test (existence), 189
    configuring Event MIB trigger test (threshold), 187, 194
    configuring feature image-based packet capture, 349
    configuring flow mirroring, 314
    configuring flow mirroring traffic behavior, 312
    configuring flow mirroring traffic class, 311
    configuring information center, 326
    configuring information center log output (console), 338
    configuring information center log output (Linux log host), 340
    configuring information center log output (UNIX log host), 339
    configuring information center log suppression, 333
    configuring information center log suppression for module, 334
    configuring information center trace log file max size, 337
    configuring iNQA, 86
    configuring IPv6 NTP client/server association mode, 121
    configuring IPv6 NTP multicast association mode, 130
    configuring IPv6 NTP symmetric active/passive association mode, 124
    configuring Layer 2 remote port mirroring, 295
    configuring Layer 2 remote port mirroring (egress port), 306
    configuring Layer 2 remote port mirroring (reflector port configurable), 303
    configuring local port mirroring, 292
    configuring local port mirroring (multiple monitor ports), 294
    configuring local port mirroring (multiple monitoring devices), 302
    configuring local port mirroring (source port mode), 301
    configuring local port mirroring group monitor port, 293
    configuring local port mirroring group source ports, 293
    configuring MAC address of VSI interfaces, 368
    configuring master spine node, 363
    configuring mirroring sources, 293
    configuring NETCONF, 199
    configuring NQA, 8
    configuring NQA client history record save, 30
    configuring NQA client operation, 9
    configuring NQA client operation (DHCP), 12
    configuring NQA client operation (DLSw), 24
    configuring NQA client operation (DNS), 13
    configuring NQA client operation (FTP), 14
    configuring NQA client operation (HTTP), 15
    configuring NQA client operation (ICMP echo), 10
    configuring NQA client operation (ICMP jitter), 11
    configuring NQA client operation (path jitter), 24
    configuring NQA client operation (SNMP), 18
    configuring NQA client operation (TCP), 18
    configuring NQA client operation (UDP echo), 19
    configuring NQA client operation (UDP jitter), 16
    configuring NQA client operation (UDP tracert), 20
    configuring NQA client operation (voice), 22
    configuring NQA client operation optional parameters, 26
    configuring NQA client statistics collection, 29
    configuring NQA client template, 31
    configuring NQA client template (DNS), 33
    configuring NQA client template (FTP), 41
    configuring NQA client template (HTTP), 38
    configuring NQA client template (HTTPS), 39
    configuring NQA client template (ICMP), 32
    configuring NQA client template (RADIUS), 42
    configuring NQA client template (SSL), 43
    configuring NQA client template (TCP half open), 35
    configuring NQA client template (TCP), 34
    configuring NQA client template (UDP), 36
    configuring NQA client template optional parameters, 44
    configuring NQA client threshold monitoring, 27
    configuring NQA client+Track collaboration, 27
    configuring NQA collaboration, 68
    configuring NQA operation (DHCP), 50
    configuring NQA operation (DLSw), 65
    configuring NQA operation (DNS), 51
    configuring NQA operation (FTP), 52
    configuring NQA operation (HTTP), 53
    configuring NQA operation (ICMP echo), 46
    configuring NQA operation (ICMP jitter), 47
    configuring NQA operation (path jitter), 66
    configuring NQA operation (SNMP), 57
    configuring NQA operation (TCP), 58
    configuring NQA operation (UDP echo), 59
    configuring NQA operation (UDP jitter), 54
    configuring NQA operation (UDP tracert), 61
    configuring NQA operation (voice), 62
    configuring NQA server, 9
    configuring NQA template (DNS), 71
    configuring NQA template (FTP), 75
    configuring NQA template (HTTP), 74
    configuring NQA template (HTTPS), 75
    configuring NQA template (ICMP), 70
    configuring NQA template (RADIUS), 76
    configuring NQA template (SSL), 77
    configuring NQA template (TCP half open), 72
    configuring NQA template (TCP), 72
    configuring NQA template (UDP), 73
    configuring NTP, 107
    configuring NTP association mode, 108
    configuring NTP broadcast association mode, 109, 125
    configuring NTP broadcast mode authentication, 114
    configuring NTP broadcast mode+authentication, 135
    configuring NTP client/server association mode, 108, 120
    configuring NTP client/server mode authentication, 111
    configuring NTP client/server mode+authentication, 133
    configuring NTP dynamic associations max, 118
    configuring NTP local clock as reference source, 110
    configuring NTP multicast association mode, 110, 127
    configuring NTP multicast mode authentication, 116
    configuring NTP optional parameters, 117
    configuring NTP symmetric active/passive association mode, 108, 123
    configuring NTP symmetric active/passive mode authentication, 113
    configuring packet capture (feature image-based), 350
    configuring PMM kernel thread deadloop detection, 286
    configuring PMM kernel thread starvation detection, 287
    configuring PoE, 145
    configuring PoE (uni-PSE device), 152, 152
    configuring PoE PI power management, 149
    configuring PoE PI power max, 148
    configuring PoE power, 148
    configuring NQA operation (TCP), 58
    configuring PoE profile PI, 150
    configuring PoE PSE power max, 148
    configuring PoE PSE power monitoring, 149
    configuring port mirroring monitor port to remote probe VLAN assignment, 298
    configuring port mirroring remote destination group monitor port, 297
    configuring port mirroring remote probe VLAN, 298
    configuring port mirroring remote source group egress port, 300
    configuring port mirroring remote source group reflector port, 299
    configuring port mirroring remote source group source ports, 299
    configuring Puppet, 391, 391
    configuring RabbitMQ server communication parameters, 365
    configuring resources, 390
    configuring RMON alarm, 174, 177
    configuring RMON Ethernet statistics group, 176
    configuring RMON history group, 176
    configuring RMON statistics, 173
    configuring sFlow, 318, 318
    configuring sFlow agent+collector information, 316
    configuring sFlow counter sampling, 318
    configuring sFlow flow sampling, 317
    configuring SNMP common parameters, 158
    configuring SNMP logging, 165
    configuring SNMP notification, 162
    configuring SNMPv1, 167
    configuring SNMPv1 community, 159, 159
    configuring SNMPv1 community by community name, 159
    configuring SNMPv1 host notification send, 163
    configuring SNMPv2c, 167
    configuring SNMPv2c community, 159, 159
    configuring SNMPv2c community by community name, 159
    configuring SNMPv2c host notification send, 163
    configuring SNMPv3, 168
    configuring SNMPv3 group and user, 160
    configuring SNMPv3 group and user in FIPS mode, 162
    configuring SNMPv3 group and user in non-FIPS mode, 161
    configuring SNMPv3 host notification send, 163
    configuring SNTP, 107, 141, 141
    configuring SNTP authentication, 139
    configuring SQA, 413
    configuring VCF fabric, 360
    configuring VCF fabric automated underlay network deployment, 362, 364
    creating local port mirroring group, 293
    creating port mirroring remote destination group on the destination device, 297
    creating port mirroring remote source group on the source device, 298
    creating RMON Ethernet statistics entry, 173
    creating RMON history control entry, 173
    debugging feature module, 6
    determining ping address reachability, 2
    disabling information center interface link up/link down log generation, 334
    disabling NTP message interface receiving, 118
    displaying CWMP settings, 261
    displaying EAA settings, 277
    displaying eMDI, 411
    displaying Event MIB, 189
    displaying information center, 337
    displaying iNQA analyzer, 91
    displaying iNQA collector, 91
    displaying NMM sFlow, 318
    displaying NQA, 45
    displaying NTP, 120
    displaying packet file content, 350
    displaying PMM, 284
    displaying PMM kernel threads, 287
    displaying PMM user processes, 286
    displaying PoE, 152
    displaying port mirroring, 301
    displaying RMON settings, 175
    displaying SNMP settings, 166
    displaying SNTP, 140
    displaying SQA, 413
    displaying user PMM, 285
    displaying VCF fabric, 369
    enabling CWMP, 256
    enabling Event MIB SNMP notification, 188
    enabling information center, 328
    enabling information center duplicate log suppression, 333
    enabling information center synchronous log output, 333
    enabling information center system log SNMP notification, 334
    enabling L2 agent, 366
    enabling L3 agent, 367
    enabling local proxy ARP, 368
    enabling NQA client, 9
    enabling PoE over-temperature protection, 151
    enabling PoE PD detection (nonstandard), 147
    enabling PoE PI, 145
    enabling SNMP agent, 157
    enabling SNMP notification, 162
    enabling SNMP version, 157
    enabling SNTP, 138
    enabling VCF fabric topology discovery, 361
    establishing NETCONF over console sessions, 203
    establishing NETCONF over SOAP sessions, 202
    establishing NETCONF over SSH sessions, 203
    establishing NETCONF over Telnet sessions, 203
    establishing NETCONF session, 200
    exchanging NETCONF capabilities, 204
    filtering feature image-based packet capture data display, 349
    filtering NETCONF data, 214
    filtering NETCONF data (conditional match), 219
    filtering NETCONF data (regex match), 217
    identifying tracert node failure, 4, 4
    loading NETCONF configuration, 226
    locking NETCONF running configuration, 220, 221
    maintaining information center, 337
    maintaining PMM kernel thread, 286
    maintaining PMM kernel threads, 287
    maintaining PMM user processes, 286
    maintaining user PMM, 285
    managing information center security log, 335
    managing information center security log file, 336
    modifying NETCONF configuration, 222, 223
    monitoring PMM, 284
    monitoring PMM kernel thread, 286
    monitoring user PMM, 285
    outputting information center logs (console), 328
    outputting information center logs (log host), 330
    outputting information center logs (monitor terminal), 329
    outputting information center logs to various destinations, 328
    pausing underlay network deployment, 363
    performing NETCONF CLI operations, 231, 232
    retrieving device configuration information, 204
    retrieving NETCONF configuration data (all modules), 211
    retrieving NETCONF configuration data (Syslog module), 212
    retrieving NETCONF data entry (interface table), 209
    retrieving NETCONF information, 208
    retrieving NETCONF non-default settings, 207
    retrieving NETCONF session information, 209, 213
    retrieving NETCONF YANG file content information, 208
    returning to NETCONF CLI, 239
    rolling back NETCONF configuration, 226
    rolling back NETCONF configuration (configuration file-based), 227
    rolling back NETCONF configuration (rollback point-based), 227
    saving feature image-based packet capture to file, 349
    saving information center diagnostic logs (log file), 336
    saving information center log (log file), 331
    saving information center security logs (log file), 335
    saving NETCONF configuration, 224
    saving NETCONF running configuration, 225
    scheduling CWMP ACS connection initiation, 260
    scheduling NQA client operation, 31
    setting information center log storage period, 332
    setting NETCONF session attribute, 200
    setting NTP packet DSCP value, 119
    setting VCF fabric automated underlay network NETCONF username and password, 363
    shutting down Chef, 377
    shutting down Puppet (on device), 390
    signing a certificate for Puppet agent, 390
    specifying automated underlay network deployment template file, 362
    specifying CWMP ACS HTTPS SSL client policy, 258
    specifying NTP message source address, 117
    specifying overlay network type, 366
    specifying VCF fabric automated underlay network device role, 362
    starting Chef, 376
    starting PMM 3rd party process, 284
    starting Puppet, 390
    stopping PMM 3rd party process, 284
    subscribing to NETCONF module report event, 235
    subscribing to NETCONF monitoring event, 234
    subscribing to NETCONF syslog event, 233
    suspending EAA monitor policy, 276
    terminating NETCONF session, 238
    testing network connectivity with ping, 1
    troubleshooting sFlow remote collector cannot receive packets, 320
    unlocking NETCONF running configuration, 220, 221
    upgrading PSE firmware, 151
process
    monitoring and maintenance. See PMM
profile
    PoE profile application, 150
    PoE profile PI configuration, 150
protocols and standards
    NETCONF, 197, 199
    NTP, 106, 138
    packet capture display filter keyword, 346
    RMON, 173
    sFlow, 316
    SNMP configuration, 155, 167
    SNMP versions, 156
provision code (ACS), 259
PSE
    PoE configuration, 143, 145
    PoE configuration (uni-PSE device), 152, 152
    power max configuration, 148
    power monitoring configuration, 149
Puppet
    configuration, 388, 391, 391
    configuring a Puppet agent, 390
    configuring resources, 390
    network framework, 388
    Puppet agent certificate signing, 390
    resources, 389, 392
    resources (netdev_device), 392
    resources (netdev_interface), 393
    resources (netdev_l2_interface), 394
    resources (netdev_l2vpn), 395
    resources (netdev_lagg), 396
    resources (netdev_vlan), 397
    resources (netdev_vsi), 398
    resources (netdev_vte), 398
    resources (netdev_vxlan), 399
    shutting down (on device), 390
    start, 390
Q
QoS
    flow mirroring configuration, 309, 314
    flow mirroring QoS policy application, 313
quality
    SQA configuration, 413
R
RADIUS
    NQA client template, 42
    NQA template configuration, 76
real-time
    event manager. See RTM
reflector port
    Layer 2 remote port mirroring, 289
    port mirroring remote source group reflector port, 299
regex match
    NETCONF data filtering, 217
    NETCONF data filtering (column-based), 216
regular expression. Use regex
relational
    packet capture display filter configuration (relational expression), 348
remote
    Layer 2 remote port mirroring, 295
    port mirroring destination group, 297
    port mirroring destination group monitor port, 297
    port mirroring destination group remote probe VLAN, 298
    port mirroring monitor port to remote probe VLAN assignment, 298
    port mirroring source group, 298
    port mirroring source group egress port, 300
    port mirroring source group reflector port, 299
    port mirroring source group remote probe VLAN, 298
    port mirroring source group source ports, 299
Remote Network Monitoring. Use RMON
remote probe VLAN
    Layer 2 remote port mirroring, 289
    local port mirroring (multi-monitoring-devices), 294
    port mirroring monitor port to remote probe VLAN assignment, 298
    port mirroring remote destination group, 298
    port mirroring remote source group, 298
reporting
    NETCONF module report event subscription, 235
    NETCONF module report event subscription cancel, 236
resource
    Chef, 374, 380
    Chef netdev_device, 380
    Chef netdev_interface, 380
    Chef netdev_l2_interface, 382
    Chef netdev_l2vpn, 383
    Chef netdev_lagg, 383
    Chef netdev_vlan, 384
    Chef netdev_vsi, 385
    Chef netdev_vte, 386
    Chef netdev_vxlan, 386
    Puppet, 389, 392
    Puppet netdev_device, 392
    Puppet netdev_interface, 393
    Puppet netdev_l2_interface, 394
    Puppet netdev_l2vpn, 395
    Puppet netdev_lagg, 396
    Puppet netdev_vlan, 397
    Puppet netdev_vsi, 398
    Puppet netdev_vte, 398
    Puppet netdev_vxlan, 399
restrictions
    EAA monitor policy configuration, 274
    EAA monitor policy configuration (Tcl), 276
    Layer 2 remote port configuration, 295
    Layer 2 remote port mirroring egress port configuration, 300
    Layer 2 remote port mirroring reflector port configuration, 299
    Layer 2 remote port mirroring remote destination group configuration, 297
    Layer 2 remote port mirroring remote probe VLAN configuration, 298, 298, 298
    Layer 2 remote port mirroring source port configuration, 299
    local port mirroring group monitor port configuration, 293
    NETCONF session establishment, 200
    NQA client history record save, 30
    NQA client operation (FTP), 14
    NQA client operation (ICMP jitter), 12
    NQA client operation (UDP jitter), 16
    NQA client operation (UDP tracert), 20
    NQA client operation (voice), 22
    NQA client operation optional parameter configuration, 26
    NQA client operation scheduling, 31
    NQA client statistics collection, 29
    NQA client template configuration, 31
    NQA client template optional parameter configuration, 44
    NQA client threshold monitoring configuration, 28
    NQA client+Track collaboration, 27
    NQA server configuration, 9
    NTP configuration, 106
    port mirroring configuration, 292
    port mirroring local port mirroring configuration (multi-monitoring-devices), 294
    RMON alarm configuration, 174
    RMON history control entry creation, 173
    SNMPv1 community configuration, 159
    SNMPv2 community configuration, 159
    SNMPv3 group and user configuration, 160
    SNTP configuration, 106
    SNTP configuration restrictions, 138
retrieving
    device configuration information, 204
    NETCONF configuration data (all modules), 211
    NETCONF configuration data (Syslog module), 212
    NETCONF data entry (interface table), 209
    NETCONF device configuration+state information, 205
    NETCONF information, 208
    NETCONF non-default settings, 207
    NETCONF session information, 209, 213
    NETCONF YANG file content, 208
returning
    NETCONF CLI return, 239
RMON
    alarm configuration, 174, 177
    alarm configuration restrictions, 174
    alarm group, 172
    alarm group sample types, 173
    configuration, 171, 176
    Ethernet statistics entry creation, 173
    Ethernet statistics group, 171
    Ethernet statistics group configuration, 176
    event group, 171
    Event MIB configuration, 180, 182, 189
    Event MIB event configuration, 183
    Event MIB trigger test configuration (Boolean), 191
    Event MIB trigger test configuration (existence), 189
    Event MIB trigger test configuration (threshold), 194
    group, 171
    history control entry creation, 173
    history control entry creation restrictions, 173
    history group, 171
    history group configuration, 176
    how it works, 171
    private alarm group, 172
    protocols and standards, 173
    settings display, 175
    statistics configuration, 173
    statistics function, 173
rolling back
    NETCONF configuration, 226
    NETCONF configuration (configuration file-based), 227
    NETCONF configuration (rollback point-based), 227
routing
    IPv6 NTP client/server association mode, 121
    IPv6 NTP multicast association mode, 130
    IPv6 NTP symmetric active/passive association mode, 124
    NTP association mode, 108
    NTP broadcast association mode, 125
    NTP broadcast mode+authentication, 135
    NTP client/server association mode, 120
    NTP client/server mode+authentication, 133
    NTP configuration, 102, 107, 120
    NTP multicast association mode, 127
    NTP symmetric active/passive association mode, 123
    SNTP configuration, 107, 138, 141, 141
RPC
    CWMP RPC methods, 253
RTM
    EAA, 270
    EAA configuration, 270, 277
Ruby
    Chef configuration, 373, 377, 377
    Chef resources, 374
rule
    information center log default output rules, 322
    SNMP access control (rule-based), 156
    system information default output rules (diagnostic log), 322
    system information default output rules (hidden log), 323
    system information default output rules (security log), 322
    system information default output rules (trace log), 323
runtime
    EAA event monitor policy runtime, 272
S
sampling
    Sampled Flow. Use sFlow
    sFlow counter sampling, 318
    sFlow flow sampling configuration, 317
saving
    feature image-based packet capture to file, 349
    information center diagnostic logs (log file), 336
    information center log (log file), 331
    information center security logs (log file), 335
    NETCONF configuration, 224
    NETCONF running configuration, 225
    NQA client history records, 30
scheduling
    CWMP ACS connection initiation, 260
    NQA client operation, 31
security
    eMDI implementation, 404
    information center security log file management, 336
    information center security log management, 335
    information center security log save (log file), 335
    information center security logs, 321
    NTP, 105
    NTP authentication, 106, 111
    NTP broadcast mode authentication, 114
    NTP client/server mode authentication, 111
    NTP multicast mode authentication, 116
    NTP symmetric active/passive mode authentication, 113
    SNTP authentication, 139
server
    Chef server configuration, 376
    NQA configuration, 9
    SNTP configuration, 107, 138, 141, 141
service
    NETCONF configuration data retrieval (all modules), 211
    NETCONF configuration data retrieval (Syslog module), 212
    NETCONF configuration modification, 223
    SQA configuration, 413
service quality
    SQA configuration, 413
session
    NETCONF session attribute, 200
    NETCONF session establishment, 200
    NETCONF session information retrieval, 209, 213
    NETCONF session termination, 238
sessions
    NETCONF over console session establishment, 203
    NETCONF over SOAP session establishment, 202
    NETCONF over SSH session establishment, 203
    NETCONF over Telnet session establishment, 203
set
    VCF fabric automated underlay network deployment NETCONF username and password, 363
set operation
    SNMP, 156
    SNMP logging, 165
setting
    information center log storage period, 332
    NETCONF session attribute, 200
    NTP packet DSCP value, 119
severity level (system information), 321
sFlow
    agent+collector information configuration, 316
    configuration, 316, 318, 318
    counter sampling configuration, 318
    display, 318
    flow sampling configuration, 317
    protocols and standards, 316
    troubleshoot, 320
    troubleshoot remote collector cannot receive packets, 320
shutting down
    Chef, 377
    Puppet (on device), 390
Simple Network Management Protocol. Use SNMP
Simplified NTP. See SNTP
SIP
    SQA configuration, 413, 413
SIP call
    display SQA, 413
SNMP
    access control mode, 156
    agent, 155
    agent enable, 157
    agent notification, 162
    common parameter configuration, 158
    configuration, 155, 167
    Event MIB configuration, 180, 182, 189
    Event MIB display, 189
    Event MIB event configuration, 183
    Event MIB SNMP notification enable, 188
    Event MIB trigger test configuration, 185
    Event MIB trigger test configuration (Boolean), 191
    Event MIB trigger test configuration (existence), 189
    Event MIB trigger test configuration (threshold), 187, 194
    FIPS compliance, 156
    framework, 155
    get operation, 165
    Get operation, 156
    host notification send, 163
    information center system log SNMP notification, 334
    logging configuration, 165
    manager, 155
    MIB, 155, 155
    MIB view-based access control, 155
    notification configuration, 162
    notification enable, 162
    Notification operation, 156
    NQA client operation, 18
    NQA operation configuration, 57
    protocol versions, 156
    RMON configuration, 171, 176
    Set operation, 156
    set operation, 165
    settings display, 166
    SNMPv1 community configuration, 159, 159
    SNMPv1 community configuration by community name, 159
    SNMPv1 community configuration by creating SNMPv1 user, 160
    SNMPv1 configuration, 167
    SNMPv2c community configuration, 159, 159
    SNMPv2c community configuration by community name, 159
    SNMPv2c community configuration by creating SNMPv2c user, 160
    SNMPv2c configuration, 167
    SNMPv3 configuration, 168
    SNMPv3 group and user configuration, 160
    SNMPv3 group and user configuration in FIPS mode, 162
    SNMPv3 group and user configuration in non-FIPS mode, 161
    version enable, 157
SNMPv1
    community configuration, 159, 159
    community configuration restrictions, 159
    configuration, 167
    host notification send, 163
    Notification operation, 156
    protocol version, 156
SNMPv2
451
    community configuration restrictions, 159
SNMPv2c
    community configuration, 159, 159
    configuration, 167
    host notification send, 163
    Notification operation, 156
    protocol version, 156
SNMPv3
    configuration, 168
    Event MIB object owner, 182
    group and user configuration, 160
    group and user configuration in FIPS mode, 162
    group and user configuration in non-FIPS mode, 161
    group and user configuration restrictions, 160
    Notification operation, 156
    notification send, 163
    protocol version, 156
SNTP
    authentication, 139
    configuration, 107, 138, 141, 141
    configuration restrictions, 106, 138
    display, 140
    enable, 138
SOAP
    NETCONF message format, 197
    NETCONF over SOAP session establishment, 202
source
    port mirroring source, 289
    port mirroring source device, 289
specify
    master spine node, 363
    VCF fabric automated underlay network deployment device role, 362
    VCF fabric automated underlay network deployment template file, 362
    VCF fabric overlay network type, 366
specifying
    CWMP ACS HTTPS SSL client policy, 258
    NTP message source address, 117
SQA
    display, 413
SSH
    Chef configuration, 373, 377, 377
    NETCONF over SSH session establishment, 203
    Puppet configuration, 388, 391, 391
SSL
    CWMP ACS HTTPS SSL client policy, 258
    NQA client template (SSL), 43
    NQA template configuration, 77
starting
    Chef, 376
    PMM 3rd party process, 284
    Puppet, 390
starvation detection (Linux kernel thread PMM), 287
statistics
    NQA client statistics collection, 29
    RMON configuration, 171, 176
    RMON Ethernet statistics entry, 173
    RMON Ethernet statistics group, 171
    RMON Ethernet statistics group configuration, 176
    RMON history control entry, 173
    RMON statistics configuration, 173
    RMON statistics function, 173
    sFlow agent+collector information configuration, 316
    sFlow configuration, 316, 318, 318
    sFlow counter sampling configuration, 318
    sFlow flow sampling configuration, 317
stopping
    packet capture, 350
    PMM 3rd party process, 284
storage
    information center log storage period, 332
subscribing
    NETCONF event subscription, 233, 237
    NETCONF module report event subscription, 235
    NETCONF monitoring event subscription, 234
    NETCONF syslog event subscription, 233
suppressing
    information center duplicate log suppression, 333
    information center log suppression for module, 334
suspending
    EAA monitor policy, 276
switch
    module debug, 5
    screen output, 5
symmetric
    IPv6 NTP symmetric active/passive association mode, 124
    NTP symmetric active/passive association mode, 104, 108, 113, 123
    NTP symmetric active/passive mode dynamic associations max, 118
synchronizing
    information center synchronous log output, 333
    NTP configuration, 102, 107, 120
    SNTP configuration, 107, 138, 141, 141
syslog
    NETCONF configuration data retrieval (Syslog module), 212
    NETCONF syslog event subscription, 233
system
    default output rules (diagnostic log), 322
    default output rules (hidden log), 323
    default output rules (security log), 322
    default output rules (trace log), 323
    information center duplicate log suppression, 333
    information center interface link up/link down log generation, 334
    information center log destinations, 322
    information center log levels, 321
    information center log output (console), 328
    information center log output (log host), 330
    information center log output (monitor terminal), 329
    information center log output configuration (console), 338
    information center log output configuration (Linux log host), 340
    information center log output configuration (UNIX log host), 339
    information center log save (log file), 331
    information center log types, 321
    information center security log file management, 336
    information center security log management, 335
    information center security log save (log file), 335
    information center synchronous log output, 333
    information center system log SNMP notification, 334
    information log formats and field descriptions, 323
    log default output rules, 322
system administration
    Chef configuration, 373, 377, 377
    debugging, 1
    feature module debug, 6
    ping, 1
    ping address reachability, 2
    ping command, 1
    ping network connectivity test, 1
    Puppet configuration, 388, 391, 391
    system debugging, 5
    tracert, 1, 2
    tracert node failure identification, 4, 4
system debugging
    module debugging switch, 5
    screen output switch, 5
system information
    information center configuration, 321, 326, 338
T
table
    NETCONF data entry retrieval (interface table), 209
Tcl
    EAA configuration, 270, 277
    EAA monitor policy configuration, 277
TCP
    NQA client operation, 18
    NQA client template, 34
    NQA client template (TCP half open), 35
    NQA operation configuration, 58
    NQA template configuration, 72
    NQA template configuration (half open), 72
Telnet
    NETCONF over Telnet session establishment, 203
temperature
    PoE over-temperature protection enable, 151
template
    NQA client template (DNS), 33
    NQA client template (FTP), 41
    NQA client template (HTTP), 38
    NQA client template (HTTPS), 39
    NQA client template (ICMP), 32
    NQA client template (RADIUS), 42
    NQA client template (SSL), 43
    NQA client template (TCP half open), 35
    NQA client template (TCP), 34
    NQA client template (UDP), 36
    NQA client template configuration, 31
    NQA client template optional parameters, 44
    NQA template configuration (DNS), 71
    NQA template configuration (FTP), 75
    NQA template configuration (HTTP), 74
    NQA template configuration (HTTPS), 75
    NQA template configuration (ICMP), 70
    NQA template configuration (RADIUS), 76
    NQA template configuration (SSL), 77
    NQA template configuration (TCP half open), 72
    NQA template configuration (TCP), 72
    NQA template configuration (UDP), 73
template file
    automated underlay network deployment, 359
    VCF fabric automated underlay network deployment configuration, 362
terminating
    NETCONF session, 238
testing
    Event MIB trigger test configuration, 185
    Event MIB trigger test configuration (Boolean), 191
    Event MIB trigger test configuration (existence), 189
    Event MIB trigger test configuration (threshold), 187, 194
    ping network connectivity test, 1
threshold
    Event MIB trigger test, 181
    Event MIB trigger test configuration, 187, 194
    NQA client threshold monitoring, 8, 27
time
    NTP configuration, 102, 107, 120
    NTP local clock as reference source, 110
    SNTP configuration, 107, 138, 141, 141
timer
    CWMP ACS close-wait timer, 261
topology
    VCF fabric, 354
    VCF fabric topology discovery, 361
traceroute. See tracert
tracert
    IP address retrieval, 2
    node failure detection, 2, 4, 4
    NQA client operation (UDP tracert), 20
    NQA operation configuration (UDP tracert), 61
    system maintenance, 1
tracing
    information center trace log file max size, 337
Track
    EAA event monitor policy configuration, 279
    NQA client+Track collaboration, 27
    NQA collaboration, 7
    NQA collaboration configuration, 68
traffic
    NQA client operation (voice), 22
    RMON configuration, 171, 176
    sFlow agent+collector information configuration, 316
    sFlow configuration, 316, 318, 318
    sFlow counter sampling configuration, 318
    sFlow flow sampling configuration, 317
trapping
    Event MIB SNMP notification enable, 188
    information center system log SNMP notification, 334
    SNMP notification, 162
triggering
    Event MIB trigger test configuration, 185
    Event MIB trigger test configuration (Boolean), 191
    Event MIB trigger test configuration (existence), 189
    Event MIB trigger test configuration (threshold), 187, 194
troubleshooting
    PI critical priority failure, 153
    PoE, 153
    PoE profile failure, 154
    sFlow, 320
    sFlow remote collector cannot receive packets, 320
tunneling
    Chef resources (netdev_vte), 386
    Puppet resources (netdev_vte), 398
U
UDP
    IPv6 NTP client/server association mode, 121
    IPv6 NTP multicast association mode, 130
    IPv6 NTP symmetric active/passive association mode, 124
    NQA client operation (UDP echo), 19
    NQA client operation (UDP jitter), 16
    NQA client operation (UDP tracert), 20
    NQA client template, 36
    NQA operation configuration (UDP echo), 59
    NQA operation configuration (UDP jitter), 54
    NQA operation configuration (UDP tracert), 61
    NQA template configuration, 73
    NTP association mode, 108
    NTP broadcast association mode, 125
    NTP broadcast mode+authentication, 135
    NTP client/server association mode, 120
    NTP client/server mode+authentication, 133
    NTP configuration, 102, 107, 120
    NTP multicast association mode, 127
    NTP symmetric active/passive association mode, 123
    sFlow configuration, 316, 318, 318
UNIX
    information center log host output configuration, 339
unlocking
    NETCONF running configuration, 220
upgrading
    PoE PSE firmware, 151
user
    PMM Linux user, 283
user process
    display, 286
    maintain, 286
V
variable
    EAA environment variable configuration (user-defined), 273
    EAA event monitor policy environment (user-defined), 273
    EAA event monitor policy environment system-defined (event-specific), 272
    EAA event monitor policy environment system-defined (public), 272
    EAA event monitor policy environment variable, 272
    EAA monitor policy configuration (CLI-defined+environment variables), 281
VCF fabric
    automated deployment, 358
    automated deployment process, 359
    automated underlay network deployment configuration, 362, 364
    automated underlay network deployment device role configuration, 362
    automated underlay network deployment NETCONF username and password setting, 363
    automated underlay network deployment template file configuration, 362
    configuration, 354, 360
    display, 369
    local proxy ARP, 368
    MAC address of VSI interfaces, 368
    master spine node configuration, 363
    Neutron components, 356
    Neutron deployment, 357
    overlay network border node configuration, 367
    overlay network L2 agent, 366
    overlay network L3 agent, 367
    overlay network type specifying, 366
    pausing automated underlay network deployment, 363
    RabbitMQ server communication parameters configuration, 365
    topology, 354
    topology discovery enable, 361
VCF fabric topology
    automated underlay network deployment NETCONF username and password setting, 363
view
    SNMP access control (view-based), 156
virtual
    Virtual Converged Framework. Use VCF
VLAN
    Chef resources, 380
    Chef resources (netdev_l2_interface), 382
    Chef resources (netdev_vlan), 384
    flow mirroring configuration, 309, 314
    flow mirroring QoS policy application, 313
    Layer 2 remote port mirroring configuration, 295
    local port mirroring configuration, 292
    local port mirroring group monitor port, 293
    local port mirroring group source port, 293
    packet capture filter configuration (vlan vlan_id expression), 345
    port mirroring configuration, 289, 301
    port mirroring remote probe VLAN, 289
    Puppet resources, 392
    Puppet resources (netdev_l2_interface), 394
    Puppet resources (netdev_vlan), 397
    VCF fabric configuration, 354, 360
voice
    NQA client operation, 22
    NQA operation configuration, 62
VPN
    Chef resources (netdev_l2vpn), 383
    Puppet resources (netdev_l2vpn), 395
VSI
    Chef resources (netdev_vsi), 385
    Puppet resources (netdev_vsi), 398
VTE
    Chef resources (netdev_vte), 386
    Puppet resources (netdev_vte), 398
VXLAN
    Chef resources (netdev_vxlan), 386
    Puppet resources (netdev_vxlan), 399
    VCF fabric configuration, 354, 360
W
workstation
    Chef workstation configuration, 376
X
XML
    NETCONF capability exchange, 204
    NETCONF configuration, 197, 199
    NETCONF data filtering, 214
    NETCONF data filtering (conditional match), 219
    NETCONF data filtering (regex match), 217
    NETCONF message format, 197
    NETCONF structure, 197
XSD
    NETCONF message format, 197
Y
YANG
    NETCONF YANG file content retrieval, 208