Security Chassis Cluster
Modified: 2017-11-16
Juniper Networks assumes no responsibility for any inaccuracies in this document. Juniper Networks reserves the right to change, modify,
transfer, or otherwise revise this publication without notice.
Junos OS® Chassis Cluster Feature Guide for SRX Series Devices
Copyright © 2017 Juniper Networks, Inc. All rights reserved.
The information in this document is current as of the date on the title page.
Juniper Networks hardware and software products are Year 2000 compliant. Junos OS has no known time-related limitations through the
year 2038. However, the NTP application is known to have some difficulty in the year 2036.
The Juniper Networks product that is the subject of this technical documentation consists of (or is intended for use with) Juniper Networks
software. Use of such software is subject to the terms and conditions of the End User License Agreement (“EULA”) posted at
http://www.juniper.net/support/eula/. By downloading, installing or using such software, you agree to the terms and conditions of that
EULA.
Part 1 Overview
Chapter 1 Introduction to Chassis Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Chassis Cluster Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
High Availability Using Chassis Clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
How High Availability Is Achieved by Chassis Cluster . . . . . . . . . . . . . . . . . . . . 3
Chassis Cluster Active/Active and Active/Passive Modes . . . . . . . . . . . . . . . . . 4
Chassis Cluster Functionality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
IPv6 Clustering Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
IPsec and Chassis Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Chassis Cluster Supported Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Chassis Cluster Supported Features (SRX300, SRX320, SRX340, SRX345,
SRX550M, and SRX1500) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Chassis Cluster-Supported Features (SRX300, SRX320, SRX340, SRX345,
SRX550M, and SRX1500) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Chassis Cluster Supported Features (SRX5800, SRX5600, and
SRX5400) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Chassis Cluster-Supported Features (SRX5800, SRX5600, and
SRX5400) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Chassis Cluster Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Chapter 2 Understanding Chassis Cluster License Requirements . . . . . . . . . . . . . . . . . 55
Understanding Chassis Cluster Licensing Requirements . . . . . . . . . . . . . . . . . . . . 55
Installing Licenses on the Devices in a Chassis Cluster . . . . . . . . . . . . . . . . . . . . . 56
Verifying Licenses for an SRX Series Device in a Chassis Cluster . . . . . . . . . . . . . . 58
Chapter 3 Planning Your Chassis Cluster Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Preparing Your Equipment for Chassis Cluster Formation . . . . . . . . . . . . . . . . . . . 61
SRX Series Chassis Cluster Configuration Overview . . . . . . . . . . . . . . . . . . . . . . . 62
Part 1 Overview
Chapter 1 Introduction to Chassis Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Table 3: Features Supported on SRX300, SRX320, SRX340, SRX345, SRX550M,
and SRX1500 in a Chassis Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Table 4: Chassis Cluster Feature Support on SRX Series Devices . . . . . . . . . . . . . 23
Table 5: Features Supported on SRX5800, SRX5600, and SRX5400 Devices
in a Chassis Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Table 6: Chassis Cluster Feature Support on SRX5800, SRX5600, and SRX5400
Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Chapter 3 Planning Your Chassis Cluster Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Table 7: Slot Numbering Offsets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Table 43: show chassis cluster control-plane statistics Output Fields . . . . . . . . . 511
Table 44: show chassis cluster data-plane interfaces Output Fields . . . . . . . . . . 513
Table 45: show chassis cluster data-plane statistics Output Fields . . . . . . . . . . . 515
Table 46: show chassis cluster ethernet-switching interfaces Output Fields . . . 517
Table 47: show chassis cluster ethernet-switching status Output Fields . . . . . . 518
Table 48: show chassis cluster information Output Fields . . . . . . . . . . . . . . . . . 520
Table 49: show chassis cluster information configuration-synchronization Output
Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525
Table 50: show chassis cluster information issu Output Fields . . . . . . . . . . . . . . 527
Table 51: show chassis cluster interfaces Output Fields . . . . . . . . . . . . . . . . . . . . 529
Table 52: show chassis cluster ip-monitoring status Output Fields . . . . . . . . . . 534
Table 53: show chassis cluster ip-monitoring status redundancy group Reason
Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
Table 54: show chassis cluster statistics Output Fields . . . . . . . . . . . . . . . . . . . . 537
Table 55: show chassis cluster status Output Fields . . . . . . . . . . . . . . . . . . . . . . 541
Table 56: show chassis environment Output Fields . . . . . . . . . . . . . . . . . . . . . . . 544
Table 57: show chassis environment cb Output Fields . . . . . . . . . . . . . . . . . . . . 548
Table 58: show chassis ethernet-switch Output Fields . . . . . . . . . . . . . . . . . . . . 551
Table 59: show chassis fabric plane Output Fields . . . . . . . . . . . . . . . . . . . . . . . 555
Table 60: show chassis fabric plane-location Output Fields . . . . . . . . . . . . . . . . 561
Table 61: show chassis fabric summary Output Fields . . . . . . . . . . . . . . . . . . . . . 563
Table 62: show chassis hardware Output Fields . . . . . . . . . . . . . . . . . . . . . . . . . 566
Table 63: show chassis routing-engine Output Fields . . . . . . . . . . . . . . . . . . . . . 577
Table 64: show configuration chassis cluster traceoptions Output Fields . . . . . 580
Table 65: show interfaces (Gigabit Ethernet) Output Fields . . . . . . . . . . . . . . . . 584
Table 66: Gigabit Ethernet IQ PIC Traffic and MAC Statistics by Interface
Type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 593
Table 67: show system ntp threshold Output Fields . . . . . . . . . . . . . . . . . . . . . . 597
Table 68: show security macsec connections Output Fields . . . . . . . . . . . . . . . 598
Table 69: show security macsec statistics Output Fields . . . . . . . . . . . . . . . . . . . 601
Table 70: show security mka statistics Output Fields . . . . . . . . . . . . . . . . . . . . . 604
Table 71: show security mka sessions Output Fields . . . . . . . . . . . . . . . . . . . . . . 606
Table 72: show security internal-security-association Output Fields . . . . . . . . . 608
Table 73: show system license Output Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . 609
If the information in the latest release notes differs from the information in the
documentation, follow the product Release Notes.
Juniper Networks Books publishes books by Juniper Networks engineers and subject
matter experts. These books go beyond the technical documentation to explore the
nuances of network architecture, deployment, and administration. The current list can
be viewed at http://www.juniper.net/books.
Supported Platforms
For the features described in this document, the following platforms are supported:
• SRX Series
• vSRX
If you want to use the examples in this manual, you can use the load merge or the load
merge relative command. These commands cause the software to merge the incoming
configuration into the current candidate configuration. The example does not become
active until you commit the candidate configuration.
If the example configuration contains the top level of the hierarchy (or multiple
hierarchies), the example is a full example. In this case, use the load merge command.
If the example configuration does not start at the top level of the hierarchy, the example
is a snippet. In this case, use the load merge relative command. These procedures are
described in the following sections.
1. From the HTML or PDF version of the manual, copy a configuration example into a
text file, save the file with a name, and copy the file to a directory on your routing
platform.
For example, copy the following configuration to a file and name the file ex-script.conf.
Copy the ex-script.conf file to the /var/tmp directory on your routing platform.
system {
    scripts {
        commit {
            file ex-script.xsl;
        }
    }
}
interfaces {
    fxp0 {
        disable;
        unit 0 {
            family inet {
                address 10.0.0.1/24;
            }
        }
    }
}
2. Merge the contents of the file into your routing platform configuration by issuing the
load merge configuration mode command:
[edit]
user@host# load merge /var/tmp/ex-script.conf
load complete
Merging a Snippet
To merge a snippet, follow these steps:
1. From the HTML or PDF version of the manual, copy a configuration snippet into a text
file, save the file with a name, and copy the file to a directory on your routing platform.
For example, copy the following snippet to a file and name the file
ex-script-snippet.conf. Copy the ex-script-snippet.conf file to the /var/tmp directory
on your routing platform.
commit {
    file ex-script-snippet.xsl;
}
2. Move to the hierarchy level that is relevant for this snippet by issuing the following
configuration mode command:
[edit]
user@host# edit system scripts
[edit system scripts]
3. Merge the contents of the file into your routing platform configuration by issuing the
load merge relative configuration mode command:

[edit system scripts]
user@host# load merge relative /var/tmp/ex-script-snippet.conf
load complete
For more information about the load command, see CLI Explorer.
Documentation Conventions
Caution: Indicates a situation that might result in loss of data or hardware damage.
Laser warning: Alerts you to the risk of personal injury from a laser.
Table 2 on page xxiv defines the text and syntax conventions used in this guide.
Bold text like this: Represents text that you type. Example: To enter configuration mode, type the configure command:

  user@host> configure

Fixed-width text like this: Represents output that appears on the terminal screen. Example:

  user@host> show chassis alarms
  No alarms currently active

Italic text like this: Introduces or emphasizes important new terms; identifies guide names. Example: A policy term is a named structure that defines match conditions and actions.

Italic text like this: Represents variables (options for which you substitute a value) in commands or configuration statements. Example: Configure the machine's domain name:

  [edit]
  root@# set system domain-name domain-name

Text like this: Represents names of configuration statements, commands, files, and directories; configuration hierarchy levels; or labels on routing platform components. Examples: To configure a stub area, include the stub statement at the [edit protocols ospf area area-id] hierarchy level. The console port is labeled CONSOLE.

< > (angle brackets): Encloses optional keywords or variables. Example: stub <default-metric metric>;

# (pound sign): Indicates a comment specified on the same line as the configuration statement to which it applies. Example: rsvp { # Required for dynamic MPLS only

[ ] (square brackets): Encloses a variable for which you can substitute one or more values. Example: community name members [ community-ids ]

GUI Conventions

Bold text like this: Represents graphical user interface (GUI) items you click or select. Examples: In the Logical Interfaces box, select All Interfaces. To cancel the configuration, click Cancel.

> (bold right angle bracket): Separates levels in a hierarchy of menu selections. Example: In the configuration editor hierarchy, select Protocols>Ospf.
Documentation Feedback
• Online feedback rating system—On any page of the Juniper Networks TechLibrary site
at http://www.juniper.net/techpubs/index.html, simply click the stars to rate the content,
and use the pop-up form to provide us with information about your experience.
Alternatively, you can use the online feedback form at
http://www.juniper.net/techpubs/feedback/.
Technical product support is available through the Juniper Networks Technical Assistance
Center (JTAC). If you are a customer with an active J-Care or Partner Support Service
support contract, or are covered under warranty, and need post-sales technical support,
you can access our tools and resources online or open a case with JTAC.
• JTAC hours of operation—The JTAC centers have resources available 24 hours a day,
7 days a week, 365 days a year.
• Find solutions and answer questions using our Knowledge Base: http://kb.juniper.net/
To verify service entitlement by product serial number, use our Serial Number Entitlement
(SNE) Tool: https://entitlementsearch.juniper.net/entitlementsearch/
Overview
• Introduction to Chassis Cluster on page 3
• Understanding Chassis Cluster License Requirements on page 55
• Planning Your Chassis Cluster Configuration on page 61
When configured as a chassis cluster, the two nodes back up each other, with one node
acting as the primary device and the other as the secondary device, ensuring stateful
failover of processes and services in the event of system or hardware failure. If the primary
device fails, the secondary device takes over processing of traffic.
• The devices must be running the same version of the Junos operating system (Junos
OS).
• All Services Processing Cards (SPCs), network processing cards (NPCs), and input/output
cards (IOCs) on applicable SRX Series devices must have the same slot placement and
hardware revision.
• The control ports on the respective nodes are connected to form a control plane that
synchronizes the configuration and kernel state to facilitate the high availability of
interfaces and services.
• The data plane on the respective nodes is connected over the fabric ports to form a
unified data plane. The fabric link allows for the management of cross-node flow
processing and for the management of session redundancy.
The data plane software operates in active/active mode. In a chassis cluster, session
information is updated as traffic traverses either device, and this information is transmitted
between the nodes over the fabric link to guarantee that established sessions are not
dropped when a failover occurs. In active/active mode, it is possible for traffic to ingress
the cluster on one node and egress from the other node.
• Resilient system architecture, with a single active control plane for the entire cluster
and multiple Packet Forwarding Engines. This architecture presents a single device
view of the cluster.
• Monitoring of physical interfaces, and failover if the failure parameters cross a configured
threshold.
• Support for Generic Routing Encapsulation (GRE) tunnels used to route encapsulated
IPv4/IPv6 traffic by means of an internal interface, gr-0/0/0. This interface is created
by Junos OS at system bootup and is used only for processing GRE tunnels. See the
Interfaces Feature Guide for Security Devices.
At any given instant, a cluster node can be in one of the following states: hold, primary,
secondary-hold, secondary, ineligible, or disabled. A state transition can be triggered
by events such as interface monitoring, SPU monitoring, failures, and manual failovers.
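The current state of each node appears in the output of the show chassis cluster status operational command. The output below is a representative sketch with example values, not a capture from a specific device:

```
{primary:node0}
user@host> show chassis cluster status
Cluster ID: 1
Node                  Priority          Status    Preempt  Manual failover

Redundancy group: 0 , Failover count: 1
    node0                 100         primary        no       no
    node1                 1           secondary      no       no
```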
The internal IPsec SA requires authorization for RSH on both the SPU and the Routing
Engine. For Telnet, authorization is required only on the SPU, because Telnet access to
the Routing Engine requires a password.
You set up the internal IPsec SA using the security internal-security-association CLI
command. You can configure the security internal-security-association statement on one
node and then enable it to activate secure login; the statement does not need to be
configured on each node. When you commit the configuration, both nodes are
synchronized.
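As a sketch, the internal SA might be configured as follows. The hierarchy and algorithm shown here are assumptions based on typical SRX documentation, and the key value is a placeholder, not a usable key:

```
{primary:node0}[edit]
user@host# set security ipsec internal security-association manual encryption algorithm 3des-cbc
user@host# set security ipsec internal security-association manual encryption key ascii-text "placeholder-key"
user@host# commit
```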
When secure login is configured, the IPsec-based rlogin (for starting a terminal session
on a remote host) and rcmd (remote command) commands are enforced so an attacker
cannot gain privileged access or observe traffic that contains administrator commands
and outputs.
Chassis Cluster Supported Features (SRX300, SRX320, SRX340, SRX345, SRX550M, and
SRX1500)
Table 3 on page 6 lists the features that are supported on SRX300, SRX320, SRX340,
SRX345, SRX550M, and SRX1500 devices in a chassis cluster.
Table 3: Features Supported on SRX300, SRX320, SRX340, SRX345, SRX550M, and SRX1500
in a Chassis Cluster

(A/B = Active/Backup; A/A = Active/Active)

Feature                                              A/B   A/B Failover   A/A   A/A Failover
Simple filters                                       No    No             No    No
DHCPv6 (2)                                           Yes   Yes            Yes   Yes
Ping MPLS                                            No    No             No    No
Package dynamic VPN client (3)                       –     –              –     –
40/100-Gigabit Ethernet interface (MPC slots)        –     –              –     –
Promiscuous mode on Ethernet interface               No    No             No    No
Packet-based processing                              No    No             No    No
Selective stateless packet-based services            No    No             No    No
Message-length filtering                             No    No             No    No
Message-rate limiting                                No    No             No    No
Message-type filtering                               No    No             No    No
Policy-based inspection                              No    No             No    No
Sequence-number and GTP-U validation                 No    No             No    No
Stateful inspection                                  No    No             No    No
Traffic logging                                      No    No             No    No
Tunnel cleanup                                       No    No             No    No
DSCP marking                                         No    No             No    No
Jumbo frames                                         No    No             No    No
Q-in-Q tunneling                                     No    No             No    No
System log archival                                  Yes   Yes            Yes   Yes
Class of service                                     No    No             No    No
Antivirus–Sophos                                     Yes   No             No    No
ISSU                                                 No    No             No    No
1. When the application ID is identified before session failover, the same action taken
before the failover remains effective after the failover. That is, the action is published
to the AppSecure service modules and takes place based on the application ID of the
traffic. If the application is still in the process of being identified when a failover occurs,
the application ID is not identified and the session information is lost. The application
identification process is applied to new sessions created on the new primary node.

2. DHCPv6 is supported on SRX Series devices running Junos OS Release 12.1 and later
releases.

3. The package dynamic VPN client is supported on SRX Series devices until Junos OS
Release 12.3X48.
Table 4 on page 23 lists the chassis cluster features that are supported on SRX300,
SRX320, SRX340, SRX345, SRX550M, and SRX1500 devices.
Table 4: Chassis Cluster Feature Support on SRX Series Devices

Features                                                                          Supported
IP monitoring                                                                     Yes
HA monitoring                                                                     Yes
Point-to-Point Protocol over Ethernet (PPPoE) over redundant Ethernet interface   Yes
SPU monitoring                                                                    No
Synchronization–configuration                                                     Yes
Synchronization–policies                                                          Yes
WAN interfaces                                                                    No
Table 5 on page 26 lists the features that are supported on SRX5800, SRX5600, and
SRX5400 devices in a chassis cluster.
Table 5: Features Supported on SRX5800, SRX5600, and SRX5400 Devices in a Chassis
Cluster

(A/B = Active/Backup; A/A = Active/Active)

Feature                                                A/B   A/B Failover   A/A   A/A Failover
CX111 3G adapter support                               No    No             No    No
IEEE 802.3af / 802.3at support                         No    No             No    No
Chassis cluster SPC insert                             Not supported for the SRX5000 line
Simple filters                                         –     –              –     –
DHCPv6 (2)                                             Yes   Yes            Yes   Yes
J-Flow version 9                                       No    No             No    No
Ping MPLS                                              No    No             No    No
LACP (port priority), Layer 2 mode                     No    No             No    No
Static LAG (switching)                                 No    No             No    No
Switching mode                                         No    No             No    No
Packet-based processing                                No    No             No    No
Selective stateless packet-based services              No    No             No    No
IPsec AH protocol (3)                                  Yes   Yes            Yes   Yes
Dynamic IPsec VPNs                                     No    No             No    No
Flexible Ethernet services                             No    No             No    No
LLDP and LLDP-MED                                      No    No             No    No
Q-in-Q tunneling                                       No    No             No    No
Spanning Tree Protocol                                 No    No             No    No
Multicast VPN membership discovery with BGP            No    No             No    No
P2MP OAM to P2MP LSP ping                              No    No             No    No
Reliable multicast VPN routing information exchange    No    No             No    No
Antivirus–Express                                      –     –              –     –
Antivirus–Full                                         –     –              –     –
Stateful active/active cluster mode                    No    No             No    No
Web filtering–Surf-control                             –     –              –     –
Dual-root partitioning                                 No    No             No    No
J-Web user interface                                   No    No             No    No
Session and Resource Control (SRC) application         No    No             No    No
1. When the application ID is identified before session failover, the same action taken
before the failover remains effective after the failover. That is, the action is published
to the AppSecure service modules and takes place based on the application ID of the
traffic. If the application is still in the process of being identified when a failover occurs,
the application ID is not identified and the session information is lost. The application
identification process is applied to new sessions created on the new primary node.

2. DHCPv6 is supported on SRX Series devices running Junos OS Release 12.1 and later
releases.

3. IPsec in active/active chassis cluster mode on SRX5000 line devices does not support
Z-mode traffic; Z-mode traffic must be avoided. This limitation applies to Junos OS
Release 12.3X48 and later.
Table 6 on page 48 lists the chassis cluster features that are supported on SRX5800,
SRX5600, and SRX5400 devices.
Table 6: Chassis Cluster Feature Support on SRX5800, SRX5600, and SRX5400 Devices
Features                         SRX5000 Line
IP monitoring                    Yes
HA monitoring                    Yes
Synchronization–configuration    Yes
Synchronization–policies         Yes
WAN interfaces                   No
The SRX Series devices have the following chassis cluster limitations:
Chassis Cluster
• On all SRX Series devices in a chassis cluster, flow monitoring for version 5 and version
8 is supported. However, flow monitoring for version 9 is not supported.
• When an SRX Series device is operating in chassis cluster mode and encounters an
IA-chip access issue in an SPC or an I/O card (IOC), a minor FPC alarm is activated to
trigger a redundancy group failover.
• On SRX5400, SRX5600, and SRX5800 devices, screen statistics data can be gathered
on the primary device only.
The product of the heartbeat-threshold and heartbeat-interval values defines the time
before failover. The default values (heartbeat-threshold of 3 beats and
heartbeat-interval of 1000 milliseconds) produce a wait time of 3 seconds.
To change the wait time, modify the option values so that the product equals the
desired setting. For example, setting the heartbeat-threshold to 8 and maintaining the
default value for the heartbeat-interval (1000 milliseconds) yields a wait time of 8
seconds. Likewise, setting the heartbeat-threshold to 4 and the heartbeat-interval to
2000 milliseconds also yields a wait time of 8 seconds.
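The two 8-second examples above can be expressed as follows; only one of the two would be committed:

```
{primary:node0}[edit]
user@host# set chassis cluster heartbeat-threshold 8
# default heartbeat-interval of 1000 ms: 8 x 1000 ms = 8 seconds

{primary:node0}[edit]
user@host# set chassis cluster heartbeat-threshold 4
user@host# set chassis cluster heartbeat-interval 2000
# 4 x 2000 ms = 8 seconds
```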
• If you use packet capture on reth interfaces, two files are created, one for ingress
packets and the other for egress packets based on the reth interface name. These files
can be merged outside of the device using tools such as Wireshark or Mergecap.
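For example, assuming Wireshark's mergecap utility is installed and using example capture filenames:

```
mergecap -w reth0-merged.pcap reth0-ingress.pcap reth0-egress.pcap
```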
• If you use port mirroring on reth interfaces, the reth interface cannot be configured as
the output interface; you must use a physical interface as the output interface. If you
configure the reth interface as an output interface using the set forwarding-options
port-mirroring family inet output command, an error message is displayed.
• Packet-based services such as MPLS and CLNS are not supported.
• On all SRX Series devices, the packet-based forwarding for MPLS and ISO protocol
families is not supported.
• On SRX Series devices in a chassis cluster, when two logical systems are configured,
the scaling limit crosses 13,000, which is very close to the standard scaling limit of
15,000, and a convergence time of 5 minutes results. This issue occurs because
multicast route learning takes more time when the number of routes is increased.
• On SRX5400, SRX5600, and SRX5800 devices in a chassis cluster, if the primary node
running the LACP process (lacpd) undergoes a graceful or ungraceful restart, the lacpd
on the new primary node might take a few seconds to start or reset interfaces and
state machines to recover unexpected synchronous results. Also, during failover, when
the system is processing traffic packets or internal high-priority packets (deleting
sessions or reestablishing tasks), medium-priority LACP packets from the peer (switch)
are pushed off in the waiting queues, causing further delay.
• For SRX300, SRX320, SRX340, SRX345, and SRX550M devices, the reboot parameter
is not available, because the devices in a cluster are automatically rebooted following
an in-band cluster upgrade (ICU).
Interfaces
• On the lsq-0/0/0 interface, the link services protocols MLPPP, MLFR, and CRTP are not supported.
Layer 2 Switching
• On SRX Series device failover, access points on the Layer 2 switch reboot and all
wireless clients lose connectivity for 4 to 6 minutes.
Monitoring
• The maximum number of monitoring IPs that can be configured per cluster is 64 for
SRX300, SRX320, SRX340, SRX345, SRX550M, and SRX1500 devices.
• On SRX300, SRX320, SRX340, SRX345, SRX550M, and SRX1500 devices, logs cannot
be sent to NSM when logging is configured in stream mode. This is because the security
log does not support configuring a source IP address for the fxp0 interface, and a
security log destination in stream mode cannot be routed through the fxp0 interface.
As a result, you cannot configure the security log server in the same subnet as the fxp0
interface and route the log server through the fxp0 interface.
Starting with Junos OS Release 12.1X45-D10, sampling features such as flow monitoring,
packet capture, and port mirroring are supported on reth interfaces.
Some Junos OS software features require a license to activate the feature. To enable a
licensed feature, you need to purchase, install, manage, and verify a license key that
corresponds to each licensed feature.
No separate license is required for chassis cluster. However, to configure and use a
licensed feature in a chassis cluster setup, you must purchase one license per feature
per device, and the license must be installed on both nodes of the chassis cluster. Each
license is tied to one software feature pack, and that license is valid for only one device.
For a chassis cluster, you must install licenses that are unique to each device; licenses
cannot be shared between the devices. Both devices that are to form a chassis cluster
must have valid, identical feature licenses installed. If both devices do not have an
identical set of licenses, then after a failover, a feature that is not licensed on both
devices might not work, or the configuration might not synchronize during chassis cluster
formation.
Licensing is usually ordered when the device is purchased, and this information is bound
to the chassis serial number. For example, Intrusion Detection and Prevention (IDP) is a
licensed feature and the license for this specific feature is tied to the serial number of
the device.
For information about how to purchase software licenses, contact your Juniper Networks
sales representative at http://www.juniper.net/in/en/contact-us/.
You can add a license key from a file or a URL, from a terminal, or from the J-Web user
interface. Use the filename option to activate a perpetual license directly on the device.
Use the url option to send a subscription-based license key entitlement (such as unified
threat management [UTM]) to the Juniper Networks licensing server for authorization.
If authorized, the server downloads the license to the device and activates it.
• Set the chassis cluster node ID and the cluster ID. See “Example: Setting the Chassis
Cluster Node ID and Cluster ID for SRX Series Devices” on page 92.
• Ensure that your SRX Series device has a connection to the Internet (if a particular
feature requires Internet access, or if licenses are to be renewed automatically over the
Internet). For instructions on establishing basic connectivity, see the Getting Started
Guide or Quick Start Guide for your device.
To install licenses on the primary node of an SRX Series device in a chassis cluster:
1. Run the show chassis cluster status command and identify which node is primary for
redundancy group 0 on your SRX Series device.
{primary:node0}
user@host> show chassis cluster status redundancy-group 0
Cluster ID: 9
Node Priority Status Preempt Manual failover
The output of this command indicates that node 0 is primary and node 1 is secondary.
2. From CLI operational mode, enter one of the following CLI commands:
• To add a license key from a file or a URL, enter the following command, specifying
the filename or the URL where the key is located:
• To add a license key from the terminal, enter the following command:
3. When prompted, enter the license key, separating multiple license keys with a blank
line.
If the license key you enter is invalid, an error appears in the CLI output when you press
Ctrl+d to exit license entry mode.
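The commands in Step 2 follow the standard Junos OS license-installation syntax; a
minimal sketch is shown below (the file path and URL are placeholders):

{primary:node0}
user@host> request system license add /var/tmp/license-key.txt
user@host> request system license add http://license-server.example.com/key.txt
user@host> request system license add terminal

The first form reads a license key from a local file, the second retrieves it from a URL,
and the third accepts a key pasted at the terminal (press Ctrl+d when finished).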
To install licenses on the secondary node of an SRX Series device in a chassis cluster:
{primary:node0}
NOTE: Initiating a failover to the secondary node is not required if you are
installing licenses manually on the device. However, if you are installing
the license directly from the Internet, you must initiate a failover.
NOTE: You must install the updated license on both nodes of the chassis
cluster before the existing license expires.
TIP: If you are not using a specific feature or license, you can delete the
license from both devices in a chassis cluster. You need to connect to each
node separately to delete the licenses. For details, see Example: Deleting a
License Key.
Related • Verifying Licenses for an SRX Series Device in a Chassis Cluster on page 58
Documentation
• Understanding Chassis Cluster Licensing Requirements on page 55
Purpose Verify that the licenses installed on both devices in a chassis cluster setup are identical
by using the show system license command to view license usage.
Licenses installed:
License identifier: JUNOS363684
License version: 2
Valid for device: JN111A654AGB
Features:
services-offload - services offload mode
permanent
{secondary-hold:node1}
user@host> show system license
License usage:
Licenses Licenses Licenses Expiry
Feature name used installed needed
idp-sig 0 1 0 permanent
logical-system 1 26 0 permanent
services-offload 0 1 0 permanent
Licenses installed:
License identifier: JUNOS209661
License version: 2
Valid for device: JN111AB4DAGB
Features:
idp-sig - IDP Signature
permanent
Meaning Use the License version and Features fields to confirm that the licenses installed on
both nodes are identical.
To form a chassis cluster, a pair of the same kind of supported SRX Series devices is
combined to act as a single system that enforces the same overall security.
The following device-specific matches are required to form a chassis cluster:
• SRX3400 and SRX3600—The placement and type of SPCs, I/O cards (IOCs), and
Network Processing Cards (NPCs) must match in the two devices.
• SRX1500—Has dedicated slots for each kind of card; these slots cannot be interchanged.
• SRX300, SRX320, SRX340, SRX345, and SRX550M—Although the devices must be of
the same type, they can contain different Physical Interface Modules (PIMs).
In addition, the SRX Series devices must meet the following requirements:
• Junos OS requirements: Both devices must be running the same Junos OS version.
• Licensing requirements: Licenses are unique to each device and cannot be shared
between the devices. Both devices (which are going to form the chassis cluster) must
have identical feature licenses and license keys installed. If both devices do not have
an identical set of licenses, then after a failover, that particular licensed feature might
not work or the configuration might not synchronize during chassis cluster formation.
When a device joins a cluster, it becomes a node of that cluster. With the exception of
unique node settings and management IP addresses, nodes in a cluster share the same
configuration.
The following message is displayed when you try to set a cluster ID greater than 15
and the fabric and control link interfaces are not connected back-to-back or are not
connected on separate private VLANs:
{primary:node1}
user@host> set chassis cluster cluster-id 254 node 1 reboot
For cluster-ids greater than 15 and when deploying more than one cluster in a
single Layer 2 BROADCAST domain, it is mandatory that fabric and control links
are either connected back-to-back or are connected on separate private VLANS.
NOTE:
For SRX210 Services Gateways, the base and enhanced versions of a model
can be used to form a cluster. For example:
port should be removed. For more information, see “Understanding SRX Series Chassis
Cluster Slot Numbering and Physical Port and Logical Interface Naming” on page 79.
• Confirm that hardware and software are the same on both devices.
NOTE: For SRX5000 line chassis clusters, the placement and type of SPCs
must match in the two devices.
Figure 1 on page 64 shows a chassis cluster flow diagram for SRX300, SRX320, SRX340,
SRX345, SRX550M, and SRX1500 devices.
This section provides an overview of the basic steps to create an SRX Series chassis
cluster:
1. Physically connect a pair of the same kind of supported SRX Series devices together.
For more information, see “Connecting SRX Series Devices to Create a Chassis Cluster”
on page 71.
a. Create the fabric link between two nodes in a cluster by connecting any pair of
Ethernet interfaces. For most SRX Series devices, the only requirement is that both
interfaces be Gigabit Ethernet interfaces (or 10-Gigabit Ethernet interfaces). For
SRX300, SRX320, SRX340, SRX345, and SRX550M devices, connect a pair of
Gigabit Ethernet interfaces. For SRX1500 devices, the fabric child links must be of a
similar type.
When using dual fabric link functionality, connect the two pairs of Ethernet
interfaces that you will use on each device. See “Understanding Chassis Cluster
Dual Fabric Links” on page 247.
b. Configure the control ports (SRX5000 line only). See “Example: Configuring Chassis
Cluster Control Ports” on page 118.
2. Connect the first device to be initialized in the cluster to the console port. This is the
node that forms the cluster. For connection instructions, see the Getting Started Guide
for your device.
b. Identify the node by giving it its own node ID and then reboot the system.
See “Example: Setting the Chassis Cluster Node ID and Cluster ID for SRX Series
Devices” on page 92.
4. Connect to the console port on the other device and use CLI operational mode
commands to enable clustering:
a. Identify the cluster that the device is joining by setting the same cluster ID you set
on the first node.
b. Identify the node by giving it its own node ID and then reboot the system.
5. Configure the management interfaces on the cluster. See “Example: Configuring the
Chassis Cluster Management Interface” on page 96.
d. Example: Configuring Chassis Cluster Redundant Ethernet Interfaces for IPv4 and
IPv6 Addresses on page 133
7. Initiate manual failover. See “Initiating a Chassis Cluster Manual Redundancy Group
Failover” on page 242.
9. Verify the configuration. See “Verifying a Chassis Cluster Configuration” on page 163.
Note:
• When using dual fabric link functionality, connect the two pairs of Ethernet interfaces
that you will use on each device. See “Understanding Chassis Cluster Dual Fabric Links”
on page 247.
• When using dual control link functionality (SRX5600 and SRX5800 devices only),
connect the two pairs of control ports that you will use on each device.
See “Connecting Dual Control Links for SRX Series Devices in a Chassis Cluster” on
page 170.
For SRX5600 and SRX5800 devices, control ports must be on corresponding slots in
the two devices. Table 7 on page 67 shows the slot numbering offsets:
• For SRX3400 and SRX3600 devices, the control ports are dedicated Gigabit Ethernet
ports.
You must use the following ports to form the control link on the following SRX Series
devices:
• For SRX300 devices, connect the ge-0/0/1 on node 0 to the ge-1/0/1 on node 1.
• For SRX320 devices, connect the ge-0/0/1 on node 0 to the ge-3/0/1 on node 1.
• For SRX340 and SRX345 devices, connect the ge-0/0/1 on node 0 to the ge-5/0/1 on
node 1.
• For SRX550M devices, connect the ge-0/0/1 on node 0 to the ge-9/0/1 on node 1.
• For SRX300 and SRX320 devices, connect any interface except ge-0/0/0 and ge-0/0/1.
• For SRX340 and SRX345 devices, connect any interface except fxp0 and ge-0/0/1.
Figure 3 on page 72, Figure 4 on page 72, Figure 5 on page 72, Figure 6 on page 72,
Figure 7 on page 72, and Figure 8 on page 73 show pairs of SRX Series devices with the
fabric links and control links connected.
For SRX1500 devices, the connection that serves as the control link must be between
the built-in control ports on each device.
NOTE: You can connect two control links (SRX1400, SRX4600, SRX5000
and SRX3000 lines only) and two fabric links between the two devices in
the cluster to reduce the chance of control link and fabric link failure. See
“Understanding Chassis Cluster Dual Control Links” on page 169 and
“Understanding Chassis Cluster Dual Fabric Links” on page 247.
Figure 9 on page 73, Figure 10 on page 73 and Figure 11 on page 74 show pairs of SRX
Series devices with the fabric links and control links connected.
Figure 12 on page 74, Figure 13 on page 75, and Figure 14 on page 75 show pairs of SRX
Series devices with the fabric links and control links connected.
NOTE: SRX5000 line devices do not have built-in ports, so the control link
for these gateways must be the control ports on their Services Processing
Cards (SPCs) with a slot numbering offset of 3 for SRX5400, offset of 6 for
SRX5600 devices and 12 for SRX5800 devices.
When you connect a single control link on SRX5000 line devices, the control
link ports are a one-to-one mapping with the Routing Engine slot. If your
Routing Engine is in slot 0, you must use control port 0 to link the Routing
Engines.
NOTE: Dual control links are not supported on an SRX5400 device due to
the limited number of slots.
Figure 15 on page 76, Figure 16 on page 76 and Figure 17 on page 76 show pairs of SRX
Series devices with the fabric links and control links connected.
NOTE: For dual control links on SRX3000 line devices, the Routing Engine
must be in slot 0 and the SRX Clustering Module (SCM) in slot 1. The opposite
configuration (SCM in slot 0 and Routing Engine in slot 1) is not supported.
Figure 18 on page 77, Figure 19 on page 77, Figure 20 on page 77, Figure 21 on page 77,
Figure 22 on page 77, Figure 23 on page 78 and Figure 24 on page 78 all show pairs of
SRX Series devices with the fabric links and control links connected.
The fabric link connection for the SRX100 must be a pair of Fast Ethernet interfaces,
and for the SRX210 it must be a pair of either Fast Ethernet or Gigabit Ethernet interfaces.
On all other SRX Series devices, the fabric link connection can be any pair of either
Gigabit Ethernet or 10-Gigabit Ethernet interfaces.
• Understanding SRX Series Chassis Cluster Slot Numbering and Physical Port and
Logical Interface Naming on page 79
• Example: Setting the Chassis Cluster Node ID and Cluster ID for SRX Series
Devices on page 92
Understanding SRX Series Chassis Cluster Slot Numbering and Physical Port and
Logical Interface Naming
Normally, on SRX Series devices, the built-in interfaces are numbered as follows:
NOTE: See the hardware documentation for your particular model (SRX
Series Services Gateways) for details about SRX Series devices. See Interfaces
Feature Guide for Security Devices for a full discussion of interface naming
conventions.
Understanding SRX Series Chassis Cluster Slot Numbering and Physical Port and Logical
Interface Naming (SRX300, SRX320, SRX340, SRX345, SRX550M, and SRX1500)
For chassis clustering, all SRX Series devices have a built-in management interface
named fxp0. For most SRX Series devices, the fxp0 interface is a dedicated port.
For SRX340 and SRX345 devices, the fxp0 interface is a dedicated port. For SRX300
and SRX320 devices, after you enable chassis clustering and reboot the system, the
built-in interface named ge-0/0/0 is repurposed as the management interface and is
automatically renamed fxp0.
For SRX300, SRX320, SRX340, and SRX345 devices, after you enable chassis clustering
and reboot the system, the built-in interface named ge-0/0/1 is repurposed as the control
interface and is automatically renamed fxp1.
For SRX550M devices, control interfaces are dedicated Gigabit Ethernet ports.
After the devices are connected as a cluster, the slot numbering on one device changes
and thus the interface numbering will change. The slot number for each slot in both nodes
is determined using the following formula:
cluster slot number = (node ID * maximum slots per node) + local slot number
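As a worked example of this formula, consider the SRX340 and SRX345, which have 5
PIM slots per node: local slot 0 on node 1 becomes cluster slot

(1 x 5) + 0 = 5

so an interface such as ge-0/0/1 on node 1 is referred to as ge-5/0/1 after the cluster
forms. Node 0 keeps its original numbering, because (0 x 5) + local slot number equals
the local slot number.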
In chassis cluster mode, all FPC related configuration is performed under edit chassis
node node-id fpc hierarchy. In non-cluster mode, the FPC related configuration is performed
under edit chassis fpc hierarchy.
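For example (the slot number here is only illustrative), a statement entered at the
following hierarchy level on a standalone device:

[edit chassis fpc 1]

would instead be entered under the node hierarchy when the device operates in chassis
cluster mode:

[edit chassis node 0 fpc 1]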
Table 8 on page 80 shows the slot numbering, as well as the physical port and logical
interface numbering, for both of the SRX Series devices that become node 0 and node
1 of the chassis cluster after the cluster is formed.
Table 8: SRX Series Chassis Cluster Slot Numbering and Physical Port and Logical Interface
Naming

(Columns: Model, Chassis, Maximum Slots Per Node, Slot Numbering in a Cluster, and the
Management, Control, and Fabric Physical Port/Logical Interface.)

SRX340 and SRX345—Node 0: 5 (PIM slots), numbered 0—4 in a cluster; management
interface fxp0, control interface ge-0/0/1 (logical interface em0), fabric link on any
Ethernet port (fab0 on node 0, fab1 on node 1).
After you enable chassis clustering, the two chassis joined together cease to exist as
individuals and now represent a single system. As a single system, the cluster now has
twice as many slots. (See Figure 25 on page 81, Figure 26 on page 82, Figure 27 on page 82,
Figure 28 on page 82, Figure 29 on page 82, and Figure 30 on page 82.)
Understanding SRX Series Chassis Cluster Slot Numbering and Physical Port and Logical
Interface Naming (SRX4600)
The SRX4600 devices use dedicated HA control and fabric ports. The dedicated HA
interfaces on the SRX4600 support 10-Gigabit Ethernet.
Table 9 on page 83 shows the slot numbering, as well as the physical port and logical
interface numbering, for both of the SRX Series devices that become node 0 and node
1 of the chassis cluster after the cluster is formed.
Table 9: SRX Series Chassis Cluster Slot Numbering, and Physical Port and Logical Interface
Naming (SRX4600 Devices)

(Columns: Model, Chassis, Maximum Slots Per Node, Slot Numbering in a Cluster, and the
Management, Control, and Fabric Physical Port/Logical Interface.)

40-Gigabit Ethernet port (xe)
Table 10: SRX Series Chassis Cluster Fabric Interface Details for SRX4600

(Columns: Interfaces, Used as Fabric Port?, Supports Z-Mode Traffic?, Supports MACsec?)
NOTE: Mixing fabric port types is not supported. That is, you cannot use
one 10-Gigabit Ethernet interface and one 40-Gigabit Ethernet interface
for the fabric links. The dedicated fabric link supports only 10-Gigabit
Ethernet interfaces.
Understanding SRX Series Chassis Cluster Slot Numbering and Physical Port and Logical
Interface Naming (SRX4100 and SRX4200)
The SRX4100 and SRX4200 devices use the following HA ports:
Supported fabric interface types for SRX4100 and SRX4200 devices are 10-Gigabit
Ethernet (xe) interfaces (SFP+ slots).
Table 11 on page 84 shows the slot numbering, as well as the physical port and logical
interface numbering, for both of the SRX Series devices that become node 0 and node
1 of the chassis cluster after the cluster is formed.
Table 11: SRX Series Chassis Cluster Slot Numbering, and Physical Port and Logical Interface
Naming (SRX4100 and SRX4200 Devices)

(Columns: Model, Chassis, Maximum Slots Per Node, Slot Numbering in a Cluster, and the
Management, Control, and Fabric Physical Port/Logical Interface.)
Understanding SRX Series Chassis Cluster Slot Numbering and Physical Port and Logical
Interface Naming (SRX5800, SRX5600, SRX5400)
For chassis clustering, all SRX Series devices have a built-in management interface
named fxp0. For most SRX Series devices, the fxp0 interface is a dedicated port.
Table 12 on page 85 shows the slot numbering, as well as the physical port and logical
interface numbering, for both of the SRX Series devices that become node 0 and node
1 of the chassis cluster after the cluster is formed.
Table 12: SRX Series Chassis Cluster Slot Numbering, and Physical Port and Logical Interface
Naming (SRX5000-Line Devices)

SRX5800—Node 0: 12 (FPC slots), numbered 0—11 in a cluster; management on a dedicated
Gigabit Ethernet port, control port on an SPC, fabric link on any Ethernet port.

SRX5600—Node 0: 6 (FPC slots), numbered 0—5 in a cluster; management on a dedicated
Gigabit Ethernet port, control port on an SPC, fabric link on any Ethernet port.

SRX5400—Node 0: 3 (FPC slots), numbered 0—2 in a cluster; management on a dedicated
Gigabit Ethernet port, control port on an SPC, fabric link on any Ethernet port.
NOTE: See the hardware documentation for your particular model (SRX
Series Services Gateways) for details about SRX Series devices. See Interfaces
Feature Guide for Security Devices for a full discussion of interface naming
conventions.
Figure 32 on page 86, Figure 33 on page 86, and Figure 34 on page 86 show the slot
numbering for both of the SRX Series devices that become node 0 and node 1 of the
chassis cluster after the cluster is formed.
In chassis cluster mode, all FPC related configuration is performed under edit chassis
node node-id fpc hierarchy. In non-cluster mode, the FPC related configuration is performed
under edit chassis fpc hierarchy.
cluster slot number = (node ID * maximum slots per node) + local slot number
In chassis cluster mode, the interfaces on the secondary node are renumbered internally.
The node 1 renumbers its interfaces by adding the total number of system FPCs to the
original FPC number of the interface. For example, see Table 13 on page 87 for interface
renumbering on the SRX Series devices (SRX4100 and SRX4200).
You can use these port modules to add from 4 to 16 Ethernet ports to your SRX Series
device. Port numbering for these modules is
slot/port module/port
where slot is the number of the slot in the device in which the Flex IOC is installed; port
module is 0 for the upper slot in the Flex IOC or 1 for the lower slot when the card is vertical,
as in an SRX5800 device; and port is the number of the port on the port module. When
the card is horizontal, as in an SRX5400 or SRX5600 device, port module is 0 for the
left-hand slot or 1 for the right-hand slot.
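For example (the slot and port numbers here are only illustrative), a port numbered 5
on a module installed in the upper slot (port module 0) of a Flex IOC in FPC slot 2 of
an SRX5800 device would be numbered 2/0/5.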
SRX5400 devices support only SRX5K-MPC cards. The SRX5K-MPC cards also have
two slots to accept the following port modules:
See the hardware guide for your specific SRX Series model (SRX Series Services Gateways).
Understanding SRX Series Chassis Cluster Slot Numbering and Physical Port and Logical
Interface Naming (SRX3600, SRX3400, and SRX1400)
Table 15 on page 88 shows the slot numbering, as well as the physical port and logical
interface numbering, for both of the SRX Series devices that become node 0 and node
1 of the chassis cluster after the cluster is formed.
Table 15: SRX Series Chassis Cluster Slot Numbering, and Physical Port and Logical Interface
Naming

(Columns: Model, Chassis, Maximum Slots Per Node, Slot Numbering in a Cluster, and the
Management, Control, and Fabric Physical Port/Logical Interface.)
Information about chassis cluster slot numbering is also provided in Figure 35 on page 90
and Figure 36 on page 90.
Understanding SRX Series Chassis Cluster Slot Numbering and Physical Port and Logical
Interface Naming (SRX650, SRX550, SRX240, SRX210, SRX110, and SRX100)
Information about chassis cluster slot numbering is also provided in Figure 37 on page 90,
Figure 38 on page 91, Figure 39 on page 91, Figure 40 on page 91, Figure 41 on page 91,
and Figure 42 on page 91.
The factory default configuration for SRX100, SRX210, and SRX220 devices
automatically enables Layer 2 Ethernet switching. Because Layer 2 Ethernet
switching is not supported in chassis cluster mode, if you use the factory
default configuration for these devices, you must delete the Ethernet
switching configuration before you enable chassis clustering. See Disabling
Switching on SRX100, SRX210, and SRX220 Devices Before Enabling Chassis
Clustering.
For SRX100, SRX210, and SRX220 devices, after you enable chassis clustering and reboot
the system, the built-in interface named fe-0/0/6 is repurposed as the management
interface and is automatically renamed fxp0.
For SRX240, SRX550, and SRX650 devices, control interfaces are dedicated Gigabit
Ethernet ports. For SRX100, SRX210, and SRX220 devices, after you enable chassis
clustering and reboot the system, the built-in interface named fe-0/0/7 is repurposed
as the control interface and is automatically renamed fxp1.
In chassis cluster mode, the interfaces on the secondary node are renumbered internally.
For example, the management interface port on the front panel of each SRX210 device
is still labeled fe-0/0/6, but internally, the node 1 port is referred to as fe-2/0/6.
Related • Example: Configuring Chassis Clustering on SRX Series Devices on page 141
Documentation
Example: Setting the Chassis Cluster Node ID and Cluster ID for SRX Series Devices
This example shows how to set the chassis cluster node ID and chassis cluster ID, which
you must configure after connecting two devices together. A chassis cluster ID identifies
the cluster to which the devices belong, and a chassis cluster node ID identifies a unique
node within the cluster. After wiring the two devices together, you use CLI operational
mode commands to enable chassis clustering by assigning a cluster ID and node ID on
each chassis in the cluster. The cluster ID is the same on both nodes.
• Requirements on page 92
• Overview on page 93
• Configuration on page 93
• Verification on page 94
Requirements
Before you begin, ensure that you can connect to each device through the console port.
Ensure that the devices are running the same version of the Junos operating system
(Junos OS) and that the SRX Series devices are the same model.
Overview
The system uses the chassis cluster ID and chassis cluster node ID to apply the correct
configuration for each node (for example, when you use the apply-groups command to
configure the chassis cluster management interface). The chassis cluster ID and node
ID statements are written to the EPROM, and the statements take effect when the system
is rebooted.
In this example, you configure a chassis cluster ID of 1. You also configure a chassis cluster
node ID of 0 for the first node, which allows redundancy groups to be primary on this
node when priority settings for both nodes are the same, and a chassis cluster node ID
of 1 for the other node.
Configuration
Step-by-Step To specify the chassis cluster node ID and cluster ID, you need to set two devices to
Procedure cluster mode and reboot the devices. You must enter the following operational mode
commands on both devices:
To do this, you connect to the console port on the primary device, give
it a node ID, and identify the cluster it will belong to, and then reboot
the system. You then connect the console port to the other device, give
it a node ID, and assign it the same cluster ID you gave to the first node,
and then reboot the system. In both instances, you can cause the system
to boot automatically by including the reboot parameter in the CLI
command line. (For further explanation of primary and secondary nodes,
see “Understanding Chassis Cluster Redundancy Groups” on page 121.)
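A minimal sketch of these operational-mode commands, using the cluster ID of 1 and
the node IDs from this example:

On the first device:
user@host> set chassis cluster cluster-id 1 node 0 reboot

On the second device:
user@host> set chassis cluster cluster-id 1 node 1 reboot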
Verification
Action From operational mode, enter the show chassis cluster status command.
{primary:node0}[edit]
user@host> show chassis cluster status
Cluster ID: 1
Node Priority Status Preempt Manual failover
Meaning The sample output shows that devices in the chassis cluster are communicating properly,
with one device functioning as the primary node and the other as the secondary node.
Most SRX Series devices contain an fxp0 interface. The fxp0 interfaces function like
standard management interfaces on SRX Series devices and allow network access to
each node in the cluster.
Management interfaces are the primary interfaces for accessing the device remotely.
Typically, a management interface is not connected to the in-band network, but is
connected instead to the device's internal network. Through a management interface
you can access the device over the network using utilities such as ssh and telnet and
configure the device from anywhere, regardless of its physical location. SNMP can use
the management interface to gather statistics from the device. A management interface
enables authorized users and management systems to connect to the device over the
network.
Some SRX Series devices have a dedicated management port on the front panel. For
other types of platforms, you can configure a management interface on one of the network
interfaces. This interface can be dedicated to management or shared with other traffic.
Before users can access the management interface, you must configure it. Information
required to set up the management interface includes its IP address and prefix. In many
types of Junos OS devices (or recommended configurations), it is not possible to route
traffic between the management interface and the other ports. Therefore, you must
select an IP address in a separate (logical) network, with a separate prefix (netmask).
For most SRX Series chassis clusters, the fxp0 interface is a dedicated port. SRX340 and
SRX345 devices contain an fxp0 interface. SRX300 and SRX320 devices do not have a
dedicated port for fxp0. The fxp0 interface is repurposed from a built-in interface. The
fxp0 interface is created when the system reboots the devices after you designate one
node as the primary device and the other as the secondary device.
We recommend giving each node in a chassis cluster a unique IP address for the fxp0
interface of each node. This practice allows independent node management.
NOTE: Some SRX Series devices, such as the SRX100 and SRX200 line
devices, do not have a dedicated port for fxp0. For SRX100 and SRX210
devices, the fxp0 interface is repurposed from a built-in interface.
Related • Understanding SRX Series Chassis Cluster Slot Numbering and Physical Port and
Documentation Logical Interface Naming on page 79
This example shows how to provide network management access to a chassis cluster.
• Requirements on page 96
• Overview on page 96
• Configuration on page 97
• Verification on page 100
Requirements
Before you begin, set the chassis cluster node ID and cluster ID. See “Example: Setting
the Chassis Cluster Node ID and Cluster ID” on page 92.
Overview
You must assign a unique IP address to each node in the cluster to provide network
management access. This configuration is not replicated across the two nodes.
NOTE: If you try to access the nodes in a cluster over the network before you
configure the fxp0 interface, you will lose access to the cluster.
• Node 0 name—node0-router
• Node 1 name—node1-router
Configuration
CLI Quick To quickly configure this example, copy the following commands, paste them into a text
Configuration file, remove any line breaks, change any details necessary to match your network
configuration, copy and paste the commands into the CLI at the [edit] hierarchy level,
and then enter commit from configuration mode.
{primary:node0}[edit]
user@host#
set groups node0 system host-name node0-router
set groups node0 interfaces fxp0 unit 0 family inet address 10.1.1.1/24
set groups node1 system host-name node1-router
set groups node1 interfaces fxp0 unit 0 family inet address 10.1.1.2/24
{primary:node0}[edit]
user@host# set groups node0 system host-name node0-router
user@host# set groups node0 interfaces fxp0 unit 0 family inet address 10.1.1.1/24
{primary:node0}[edit]
user@host# set groups node1 system host-name node1-router
user@host# set groups node1 interfaces fxp0 unit 0 family inet address 10.1.1.2/24
{primary:node0}[edit]
user@host# commit
Results From configuration mode, confirm your configuration by entering the show groups and
show apply-groups commands. If the output does not display the intended configuration,
repeat the configuration instructions in this example to correct it.
{primary:node0}[edit]
user@host# show groups
node0 {
system {
host-name node0-router;
}
interfaces {
fxp0 {
unit 0 {
family inet {
address 10.1.1.1/24;
}
}
}
}
}
node1 {
system {
host-name node1-router;
}
interfaces {
fxp0 {
unit 0 {
family inet {
address 10.1.1.2/24;
}
}
}
}
}
{primary:node0}[edit]
user@host# show apply-groups
## Last changed: 2010-09-16 11:08:29 UTC
apply-groups "${node}";
If you are done configuring the device, enter commit from configuration mode.
Action To verify that the configuration is working properly, enter the show interfaces terse,
show configuration groups node0 interfaces, and show configuration groups node1
interfaces commands.
{primary:node0} [edit]
user@host> show interfaces terse | match fxp0
fxp0 up up
fxp0.0 up up inet 10.1.1.1/24
{primary:node0} [edit]
user@host> show configuration groups node0 interfaces
fxp0 {
unit 0 {
family inet {
address 10.1.1.1/24;
}
}
}
{primary:node0} [edit]
user@host> show configuration groups node1 interfaces
fxp0 {
unit 0 {
family inet {
address 10.1.1.2/24;
}
}
}
Meaning The output displays the management interface information and status.
CLI Quick To quickly configure this example, copy the following commands, paste them into a text
Configuration file, remove any line breaks, change any details necessary to match your network
configuration, copy and paste the commands into the CLI at the [edit] hierarchy level,
and then enter commit from configuration mode.
{primary:node0}[edit]
user@host#
set groups node0 system host-name node0-router
set groups node0 interfaces fxp0 unit 0 family inet6 address 2001:db8:1::2/32
set groups node1 system host-name node1-router
set groups node1 interfaces fxp0 unit 0 family inet6 address 2001:db8:1::3/32
{primary:node0}[edit]
user@host# set groups node0 system host-name node0-router
user@host# set groups node0 interfaces fxp0 unit 0 family inet6 address
2001:db8:1::2/32
{primary:node0}[edit]
user@host# set groups node1 system host-name node1-router
user@host# set groups node1 interfaces fxp0 unit 0 family inet6 address
2001:db8:1::3/32
{primary:node0}[edit]
user@host# commit
Results From configuration mode, confirm your configuration by entering the show groups and
show apply-groups commands. If the output does not display the intended configuration,
repeat the configuration instructions in this example to correct it.
{primary:node0}[edit]
user@host# show groups
node0 {
system {
host-name node0-router;
}
interfaces {
fxp0 {
unit 0 {
family inet6 {
address 2001:db8:1::2/32;
}
}
}
}
}
node1 {
system {
host-name node1-router;
}
interfaces {
fxp0 {
unit 0 {
family inet6 {
address 2001:db8:1::3/32;
}
}
}
}
}
{primary:node0}[edit]
user@host# show apply-groups
## Last changed: 2010-09-16 11:08:29 UTC
apply-groups "${node}";
If you are done configuring the device, enter commit from configuration mode.
Verification
Action To verify the configuration is working properly, enter the show interfaces terse and show
configuration groups node0 interfaces commands.
{primary:node0} [edit]
user@host> show interfaces terse | match fxp0
fxp0 up up
fxp0.0 up up inet6 2001:db8:1::2/32
{primary:node0} [edit]
user@host> show configuration groups node0 interfaces
fxp0 {
unit 0 {
family inet6 {
address 2001:db8:1::2/32;
}
}
}
{primary:node0} [edit]
user@host> show configuration groups node1 interfaces
fxp0 {
unit 0 {
family inet6 {
address 2001:db8:1::3/32;
}
}
}
Meaning The output displays the management interface information along with its status.
The fabric is a physical connection between two nodes of a cluster and is formed by
connecting a pair of Ethernet interfaces back-to-back (one from each node).
Unlike for the control link, whose interfaces are determined by the system, you specify
the physical interfaces to be used for the fabric data link in the configuration.
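For example, the fabric data link is defined by naming one physical interface from each node. The interface names shown here are placeholders; use interfaces appropriate to your platform (fab0 is the fabric interface for node 0 and fab1 is the fabric interface for node 1):

{primary:node0}[edit]
user@host# set interfaces fab0 fabric-options member-interfaces ge-0/0/1
user@host# set interfaces fab1 fabric-options member-interfaces ge-7/0/1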
The fabric is the data link between the nodes and is used to forward traffic between the
chassis. Traffic arriving on a node that needs to be processed on the other is forwarded
over the fabric data link. Similarly, traffic processed on a node that needs to exit through
an interface on the other node is forwarded over the fabric.
The data link is referred to as the fabric interface. It is used by the cluster's Packet
Forwarding Engines to transmit transit traffic and to synchronize the data plane software’s
dynamic runtime state. The fabric provides for synchronization of session state objects
created by operations such as authentication, Network Address Translation (NAT),
Application Layer Gateways (ALGs), and IP Security (IPsec) sessions.
When the system creates the fabric interface, the software assigns it an internally derived
IP address to be used for packet transmission.
After the fabric configuration is committed, do not reset either device to the
factory default configuration.
• Supported Fabric Interface Types for SRX Series Devices (SRX300 Series, SRX550M,
SRX1500, SRX4100/SRX4200, and SRX5000 Series) on page 104
• Supported Fabric Interface Types for SRX Series Devices (SRX650, SRX550, SRX240,
SRX210, and SRX100 Devices) on page 105
• Jumbo Frame Support on page 105
• Understanding Fabric Interfaces on SRX5000 Line Devices for IOC2 and IOC3 on page 105
• Understanding Session RTOs on page 106
• Understanding Data Forwarding on page 107
• Understanding Fabric Data Link Failure and Recovery on page 107
Supported Fabric Interface Types for SRX Series Devices (SRX300 Series, SRX550M, SRX1500,
SRX4100/SRX4200, and SRX5000 Series)
For SRX Series chassis clusters, the fabric link can be any pair of Ethernet interfaces
spanning the cluster. Examples:
• For SRX300, SRX320, SRX340, and SRX345 devices, the fabric link can be any pair of
Gigabit Ethernet interfaces.
• For SRX Series chassis clusters made up of SRX550M devices, SFP interfaces on
Mini-PIMs cannot be used as the fabric link.
• For SRX1500 devices, the fabric link can be any pair of Ethernet interfaces spanning the
cluster; the fabric link can be any pair of Gigabit Ethernet interfaces or any pair of
10-Gigabit Ethernet interfaces.
• Supported fabric interface types for SRX4100 and SRX4200 devices are 10-Gigabit
Ethernet (xe) (10-Gigabit Ethernet Interface SFP+ slots).
• Supported fabric interface types for SRX4600 devices are 40-Gigabit Ethernet (xe)
(40-Gigabit Ethernet Interface QSFP slots).
• Supported fabric interface types for SRX5000 line devices are:
• Fast Ethernet
• Gigabit Ethernet
• 10-Gigabit Ethernet
• 40-Gigabit Ethernet
• 100-Gigabit Ethernet
For details about port and interface usage for management, control, and fabric links, see
“Understanding SRX Series Chassis Cluster Slot Numbering and Physical Port and Logical
Interface Naming” on page 79.
Supported Fabric Interface Types for SRX Series Devices (SRX650, SRX550, SRX240, SRX210,
and SRX100 Devices)
For SRX100, SRX210, SRX220, SRX240, SRX550, and SRX650 devices, the fabric link
can be any pair of Gigabit Ethernet interfaces or Fast Ethernet interfaces (as applicable).
Interfaces on SRX210 devices are Fast Ethernet or Gigabit Ethernet (the paired interfaces
must be of a similar type) and all interfaces on SRX100 devices are Fast Ethernet
interfaces.
Table 16 on page 105 shows the fabric interface types that are supported for SRX Series
devices.
Table 16: Supported Fabric Interface Types for SRX Series Devices (SRX650 and SRX550,
SRX240, SRX220, SRX210, and SRX100)
Understanding Fabric Interfaces on SRX5000 Line Devices for IOC2 and IOC3
Starting with Junos OS Release 15.1X49-D10, the SRX5K-MPC3-100G10G (IOC3) and
the SRX5K-MPC3-40G10G (IOC3) are introduced.
The SRX5K-MPC (IOC2) is a Modular Port Concentrator (MPC) that is supported on the
SRX5400, SRX5600, and SRX5800. This interface card accepts Modular Interface Cards
(MICs), which add Ethernet ports to your services gateway to provide the physical
connections to various network media types. The MPCs and MICs support fabric links for
chassis clusters. The SRX5K-MPC provides 10-Gigabit Ethernet (with the 10x10GE MIC),
40-Gigabit Ethernet, 100-Gigabit Ethernet, and Gigabit Ethernet (with the 20x1GE MIC) ports as fabric ports.
On SRX5400 devices, only SRX5K-MPCs (IOC2) are supported.
The two types of IOC3 Modular Port Concentrators (MPCs), which have different built-in
MICs, are the 24x10GE + 6x40GE MPC and the 2x100GE + 4x10GE MPC.
Due to power and thermal constraints, not all four PICs on the 24x10GE + 6x40GE MPC
can be powered on at the same time; a maximum of two PICs can be powered on simultaneously.
Use the set chassis fpc <slot> pic <pic> power off command to power off the PICs you do
not want to use, thereby choosing the PICs that remain powered on.
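For example, to leave only PICs 0 and 1 powered on for the FPC in slot 3, you might power off PICs 2 and 3. The slot and PIC numbers here are illustrative only:

{primary:node0}[edit]
user@host# set chassis fpc 3 pic 2 power off
user@host# set chassis fpc 3 pic 3 power off
user@host# commit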
WARNING:
On SRX5400, SRX5600, and SRX5800 devices in a chassis cluster, when
the PICs containing fabric links on the SRX5K-MPC3-40G10G (IOC3) are
powered off to turn on alternate PICs, always ensure that:
• The new fabric links are configured on the new PICs that are turned on. At
least one fabric link must be present and online to ensure minimal RTO
loss.
• If no alternate fabric links are configured on the PICs that are turned on,
RTO synchronous communication between the two nodes stops and the
chassis cluster session state will not back up, because the fabric link is
missing. You can view the CLI output for this scenario indicating a bad
chassis cluster state by using the show chassis cluster interfaces command.
To provide for session (or flow) redundancy, the data plane software synchronizes its
state by sending special payload packets called runtime objects (RTOs) from one node
to the other across the fabric data link. By transmitting information about a session
between the nodes, RTOs ensure the consistency and stability of sessions if a failover
were to occur, and thus they enable the system to continue to process traffic belonging
to existing sessions. To ensure that session information is always synchronized between
the two nodes, the data plane software gives RTOs transmission priority over transit
traffic.
The data plane software creates RTOs for UDP and TCP sessions and tracks state
changes. It also synchronizes traffic for IPv4 pass-through protocols such as Generic
Routing Encapsulation (GRE) and IPsec.
The data plane software also creates RTOs for other session-related events, for example:
• RTOs for creating and deleting temporary openings in the firewall (pinholes) and
child session pinholes
A chassis cluster can receive traffic on an interface on one node and send it out to an
interface on the other node. (In active/active mode, the ingress interface for traffic might
exist on one node and its egress interface on the other.)
• When packets are processed on one node, but need to be forwarded out an egress
interface on the other node
• When packets arrive on an interface on one node, but must be processed on the other
node
If the ingress and egress interfaces for a packet are on one node, but the packet must
be processed on the other node because its session was established there, it must
traverse the data link twice. This can be the case for some complex media sessions,
such as voice-over-IP (VoIP) sessions.
The fabric data link is vital to the chassis cluster. If the link is unavailable, traffic forwarding
and RTO synchronization are affected, which can result in loss of traffic and unpredictable
system behavior.
To guard against this, Junos OS uses fabric monitoring to check whether the fabric link
(or both fabric links, in the case of a dual fabric link configuration) is alive by
periodically transmitting probes over the fabric links. Junos OS determines that a fabric
fault has occurred if a fabric probe is not received but the fabric interface is active;
when it detects a fabric fault, the RG1+ status of the secondary node changes to ineligible.
To recover from this state, both fabric links must come back online and start exchanging
probes. As soon as this happens, all the FPCs on the previously ineligible node are reset;
they then come back online and rejoin the cluster.
NOTE: If you make any changes to the configuration while the secondary
node is disabled, execute the commit command to synchronize the
configuration after you reboot the node. If you did not make configuration
changes, the configuration file remains synchronized with that of the primary
node.
Starting with Junos OS Release 12.1X47-D10 and Junos OS Release 17.3R1, recovery of
the fabric link and synchronization take place automatically.
When both the primary and secondary nodes are healthy (that is, there are no failures)
and the fabric link goes down, RG1+ redundancy group(s) on the secondary node becomes
ineligible. When one of the nodes is unhealthy (that is, there is a failure), RG1+ redundancy
group(s) on this node (either the primary or secondary node) becomes ineligible. When
both nodes are unhealthy and the fabric link goes down, RG1+ redundancy group(s) on
the secondary node becomes ineligible. When the fabric link comes up, the node on which
RG1+ became ineligible performs a cold synchronization on all Services Processing Units
and transitions to active standby.
NOTE:
• If RG0 is primary on an unhealthy node, then RG0 will fail over from an
unhealthy to a healthy node. For example, if node 0 is primary for RG0+
and node 0 becomes unhealthy, then RG1+ on node 0 will transition to
ineligible after 66 seconds of a fabric link failure and RG0+ fails over to
node 1, which is the healthy node.
Use the show chassis cluster interfaces CLI command to verify the status of the fabric
link.
Starting with Junos OS Release 12.1X47-D10 and Junos OS Release 17.3R1,
the fabric monitoring feature is enabled by default on SRX5800, SRX5600,
and SRX5400 devices.
Starting with Junos OS Release 12.1X47-D10 and Junos OS Release 17.3R1,
recovery of the fabric link and synchronization take place automatically.
This example shows how to configure the chassis cluster fabric. The fabric is the
back-to-back data connection between the nodes in a cluster. Traffic on one node that
needs to be processed on the other node or to exit through an interface on the other node
passes over the fabric. Session state information also passes over the fabric.
Requirements
Before you begin, set the chassis cluster ID and chassis cluster node ID. See “Example:
Setting the Chassis Cluster Node ID and Cluster ID for SRX Series Devices” on page 92.
Overview
On most SRX Series devices in a chassis cluster, you can configure any pair of Gigabit
Ethernet interfaces or any pair of 10-Gigabit Ethernet interfaces to serve as the fabric
between nodes.
You cannot configure filters, policies, or services on the fabric interface. Fragmentation
is not supported on the fabric link. The MTU size is 8980 bytes. We recommend that no
interface in the cluster exceed this MTU size. Jumbo frame support on the member links
is enabled by default.
Only the same type of interfaces can be configured as fabric children, and you must
configure an equal number of child links for fab0 and fab1.
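For example, a dual fabric link configuration with two child links per fabric interface might look as follows. The interface names are illustrative; note that fab0 and fab1 each have the same number of child links and that all child links are of the same interface type:

{primary:node0}[edit]
user@host# set interfaces fab0 fabric-options member-interfaces ge-0/0/1
user@host# set interfaces fab0 fabric-options member-interfaces ge-0/0/2
user@host# set interfaces fab1 fabric-options member-interfaces ge-7/0/1
user@host# set interfaces fab1 fabric-options member-interfaces ge-7/0/2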
NOTE: If you are connecting each of the fabric links through a switch, you
must enable the jumbo frame feature on the corresponding switch ports. If
both of the fabric links are connected through the same switch, the
RTO-and-probes pair must be in one virtual LAN (VLAN) and the data pair
must be in another VLAN. Here too, the jumbo frame feature must be enabled
on the corresponding switch ports.
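On an EX Series switch carrying the fabric links, the corresponding ports might be configured along these lines. The interface names, VLAN names, and MTU value are examples only; consult your switch documentation for the exact syntax that applies to your platform and software release:

set interfaces ge-0/0/10 mtu 9216
set interfaces ge-0/0/11 mtu 9216
set vlans fab-rto vlan-id 100
set vlans fab-data vlan-id 101
set interfaces ge-0/0/10 unit 0 family ethernet-switching vlan members fab-rto
set interfaces ge-0/0/11 unit 0 family ethernet-switching vlan members fab-data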
Configuration
CLI Quick To quickly configure this example, copy the following commands, paste them into a text
Configuration file, remove any line breaks, change any details necessary to match your network
configuration, copy and paste the commands into the CLI at the [edit] hierarchy level,
and then enter commit from configuration mode.
{primary:node0}[edit]
set interfaces fab0 fabric-options member-interfaces ge-0/0/1
set interfaces fab1 fabric-options member-interfaces ge-7/0/1
{primary:node0}[edit]
user@host# set interfaces fab0 fabric-options member-interfaces ge-0/0/1
{primary:node0}[edit]
user@host# set interfaces fab1 fabric-options member-interfaces ge-7/0/1
Results From configuration mode, confirm your configuration by entering the show interfaces
command. If the output does not display the intended configuration, repeat the
configuration instructions in this example to correct it.
For brevity, this show command output includes only the configuration that is relevant
to this example. Any other configuration on the system has been replaced with ellipses
(...).
{primary:node0}[edit]
user@host# show interfaces
...
fab0 {
fabric-options {
member-interfaces {
ge-0/0/1;
}
}
}
fab1 {
fabric-options {
member-interfaces {
ge-7/0/1;
}
}
}
If you are done configuring the device, enter commit from configuration mode.
Verification
Action From operational mode, enter the show interfaces terse | match fab command.
{primary:node0}
Action From the CLI, enter the show chassis cluster data-plane interfaces command:
{primary:node1}
user@host> show chassis cluster data-plane interfaces
fab0:
Name Status
ge-2/1/9 up
ge-2/2/5 up
fab1:
Name Status
ge-8/1/9 up
ge-8/2/5 up
Action From the CLI, enter the show chassis cluster data-plane statistics command:
{primary:node1}
user@host> show chassis cluster data-plane statistics
Services Synchronized:
Service name RTOs sent RTOs received
Translation context 0 0
Incoming NAT 0 0
Resource manager 0 0
Session create 0 0
Session close 0 0
Session change 0 0
Gate create 0 0
Session ageout refresh requests 0 0
Session ageout refresh replies 0 0
IPSec VPN 0 0
Firewall user authentication 0 0
MGCP ALG 0 0
H323 ALG 0 0
SIP ALG 0 0
SCCP ALG 0 0
PPTP ALG 0 0
RTSP ALG 0 0
To clear displayed chassis cluster data plane statistics, enter the clear chassis cluster
data-plane statistics command from the CLI:
{primary:node1}
user@host> clear chassis cluster data-plane statistics
• Understanding Chassis Cluster Control Plane and Control Links on page 115
• Verifying Chassis Cluster Control Plane Statistics on page 117
• Clearing Chassis Cluster Control Plane Statistics on page 118
• Example: Configuring Chassis Cluster Control Ports on page 118
The control plane software, which operates in active or backup mode, is an integral part
of Junos OS that is active on the primary node of a cluster. It achieves redundancy by
communicating state, configuration, and other information to the inactive Routing Engine
on the secondary node. If the master Routing Engine fails, the secondary one is ready to
assume control.
• Runs on the Routing Engine and oversees the entire chassis cluster system, including
interfaces on both nodes
• Manages system and data plane resources, including the Packet Forwarding Engine
(PFE) on each node
• Manages routing state, Address Resolution Protocol (ARP) processing, and Dynamic
Host Configuration Protocol (DHCP) processing
• On the primary node (where the Routing Engine is active), control information flows
from the Routing Engine to the local Packet Forwarding Engine.
• Control information flows across the control link to the secondary node's Routing
Engine and Packet Forwarding Engine.
The control plane software running on the master Routing Engine maintains state for
the entire cluster, and only processes running on its node can update state information.
The master Routing Engine synchronizes state for the secondary node and also processes
all host traffic.
The control link relies on a proprietary protocol to transmit session state, configuration,
and liveliness signals across the nodes.
NOTE: For a single control link in a chassis cluster, the same control port
should be used for the control link connection and for configuration on both
nodes. For example, if port 0 is configured as a control port on node 0, then
port 0 should be configured as a control port on node 1 with a cable connection
between the two ports. For dual control links, control port 0 on node 0 should
be connected to control port 0 on node 1, and control port 1 on node 0 should be
connected to control port 1 on node 1. Cross connections, that is, connecting
port 0 on one node to port 1 on the other node and vice versa, do not work.
• On SRX5400, SRX5600, and SRX5800 devices, by default, all control ports are
disabled. Each SPC in a device has two control ports, and each device can have multiple
SPCs plugged into it. To set up the control link in a chassis cluster with SRX5600 or
SRX5800 devices, you connect and configure the control ports that you will use on
each device (fpc<n> and fpc<n>) and then initialize the device in cluster mode.
• For SRX4600 devices, dedicated chassis cluster (HA) control ports and fabric ports
are available. No control link configuration is needed for SRX4600 devices; however,
you need to configure fabric link explicitly for chassis cluster deployments.
• For SRX4100 and SRX4200 devices, there are dedicated chassis cluster (HA) control
ports available. No control link configuration is needed for SRX4100 and SRX4200
devices. For more information about all SRX4100 and SRX4200 ports including
dedicated control and fabric link ports, see “Understanding SRX Series Chassis Cluster
Slot Numbering and Physical Port and Logical Interface Naming” on page 79.
NOTE: For SRX4100 and SRX4200 devices, when devices are not in cluster
mode, dedicated HA ports cannot be used as revenue ports or traffic ports.
• For SRX300, SRX320, SRX340, SRX345, and SRX550M devices, the control link uses
the ge-0/0/1 interface.
• For SRX240, SRX550, and SRX650 devices, the control link uses the ge-0/0/1
interface.
• For SRX220 devices, the control link uses the ge-0/0/7 interface.
• For SRX100 and SRX210 devices, the control link uses the fe-0/0/7 interface.
For details about port and interface usage for management, control, and fabric links, see
“Understanding SRX Series Chassis Cluster Slot Numbering and Physical Port and Logical
Interface Naming” on page 79.
Action From the CLI, enter the show chassis cluster control-plane statistics command:
{primary:node1}
user@host> show chassis cluster control-plane statistics
To clear displayed chassis cluster control plane statistics, enter the clear chassis cluster
control-plane statistics command from the CLI:
{primary:node1}
user@host> clear chassis cluster control-plane statistics
This example shows how to configure chassis cluster control ports on SRX5400,
SRX5600, and SRX5800 devices. You need to configure the control ports that you will
use on each device to set up the control link.
Requirements
Before you begin:
• Understand chassis cluster control links. See “Understanding Chassis Cluster Control
Plane and Control Links” on page 115.
• Physically connect the control ports on the devices. See “Connecting SRX Series Devices
to Create a Chassis Cluster” on page 71.
Overview
By default, all control ports on SRX5400, SRX5600, and SRX5800 devices are disabled.
After connecting the control ports, configuring the control ports, and establishing the
chassis cluster, the control link is set up.
This example configures control ports with the following FPCs and ports as the control
link:
• FPC 4, port 0
• FPC 10, port 0
Configuration
CLI Quick To quickly configure this section of the example, copy the following commands, paste
Configuration them into a text file, remove any line breaks, change any details necessary to match your
network configuration, copy and paste the commands into the CLI at the [edit] hierarchy
level, and then enter commit from configuration mode.
{primary:node0}[edit]
set chassis cluster control-ports fpc 4 port 0
set chassis cluster control-ports fpc 10 port 0
Step-by-Step To configure control ports for use as the control link for the chassis cluster:
Procedure
• Specify the control ports.
{primary:node0}[edit]
user@host# set chassis cluster control-ports fpc 4 port 0
{primary:node0}[edit]
user@host# set chassis cluster control-ports fpc 10 port 0
Results From configuration mode, confirm your configuration by entering the show chassis cluster
command. If the output does not display the intended configuration, repeat the
configuration instructions in this example to correct it.
For brevity, this show command output includes only the configuration that is relevant
to this example. Any other configuration on the system has been replaced with ellipses
(...).
{primary:node0}[edit]
user@host# show chassis cluster
...
control-ports {
fpc 4 port 0;
fpc 10 port 0;
}
...
If you are done configuring the device, enter commit from configuration mode.
Verification
Action From operational mode, enter the show chassis cluster status command.
{primary:node0}
user@host> show chassis cluster status
Cluster ID: 1
Node Priority Status Preempt Manual failover
Meaning Use the show chassis cluster status command to confirm that the devices in the chassis
cluster are communicating with each other. The chassis cluster is functioning properly,
as one device is the primary node and the other is the secondary node.
Related • Understanding Chassis Cluster Control Plane and Control Links on page 115
Documentation
• Understanding SRX Series Chassis Cluster Slot Numbering and Physical Port and
Logical Interface Naming on page 79
Chassis clustering provides high availability of interfaces and services through redundancy
groups and primacy within groups.
Redundancy groups are independent units of failover. Each redundancy group fails over
from one node to the other independent of other redundancy groups. When a redundancy
group fails over, all its objects fail over together.
Three things determine the primacy of a redundancy group: the priority configured for
the node, the node ID (in case of tied priorities), and the order in which the node comes
up. If a lower priority node comes up first, then it will assume the primacy for a redundancy
group (and will stay as primary if preempt is not enabled). If preempt is added to a
redundancy group configuration, the device with the higher priority in the group can initiate
a failover to become master. By default, preemption is disabled. For more information
on preemption, see preempt (Chassis Cluster).
A chassis cluster can include many redundancy groups, some of which might be primary
on one node and some of which might be primary on the other. Alternatively, all
redundancy groups can be primary on a single node. One redundancy group's primacy
does not affect another redundancy group's primacy. You can create up to 128 redundancy
groups.
You can configure redundancy groups to suit your deployment. You configure a redundancy
group to be primary on one node and backup on the other node. You specify the node on
which the group is primary by setting priorities for both nodes within a redundancy group
configuration. The node with the higher priority takes precedence, and the redundancy
group's objects on it are active.
If a redundancy group is configured so that both nodes have the same priority, the node
with the lowest node ID number always takes precedence, and the redundancy group is
primary on it. In a two-node cluster, node 0 always takes precedence in a priority tie.
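For example, if both nodes are configured with the same priority for redundancy group 1, node 0 becomes the primary node for that group:

{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 node 0 priority 100
user@host# set chassis cluster redundancy-group 1 node 1 priority 100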
The redundancy group 0 configuration specifies the priority for each node. The following
priority scheme determines redundancy group 0 primacy. Note that the three-second
value is the interval if the default heartbeat-threshold and heartbeat-interval values are
used.
• The node that comes up first (at least three seconds prior to the other node) is the
primary node.
• If both nodes come up at the same time (or within three seconds of each other):
• The node with the higher configured priority is the primary node.
• If there is a tie (either because the same value was configured or because default
settings were used), the node with the lower node ID (node 0) is the primary node.
You cannot enable preemption for redundancy group 0. If you want to change the primary
node for redundancy group 0, you must do a manual failover.
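A manual failover is initiated from operational mode. For example, to make node 1 primary for redundancy group 0 (the target node number depends on your cluster state):

{primary:node0}
user@host> request chassis cluster failover redundancy-group 0 node 1

After the failover completes, clear the manual failover flag so that normal failover behavior resumes:

{primary:node1}
user@host> request chassis cluster failover reset redundancy-group 0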
Each redundancy group x contains one or more redundant Ethernet interfaces. A redundant
Ethernet interface is a pseudo interface that contains at minimum a pair of physical
Gigabit Ethernet interfaces or a pair of Fast Ethernet interfaces. If a redundancy group is
active on node 0, then the child links of all the associated redundant Ethernet interfaces
on node 0 are active. If the redundancy group fails over to node 1, then the child links of
all redundant Ethernet interfaces on node 1 become active.
The following priority scheme determines redundancy group x primacy, provided preempt
is not configured. If preempt is configured, the node with the higher priority is the primary
node. Note that the three-second value is the interval if the default heartbeat-threshold
and heartbeat-interval values are used.
• The node that comes up first (at least three seconds prior to the other node) is the
primary node.
• If both nodes come up at the same time (or within three seconds of each other):
• The node with the higher configured priority is the primary node.
• If there is a tie (either because the same value was configured or because default
settings were used), the node with the lower node ID (node 0) is the primary node.
On SRX Series chassis clusters, you can configure multiple redundancy groups to
load-share traffic across the cluster. For example, you can configure some redundancy
groups x to be primary on one node and some redundancy groups x to be primary on the
other node. You can also configure a redundancy group x in a one-to-one relationship
with a single redundant Ethernet interface to control which interface traffic flows through.
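For example, a redundant Ethernet interface is tied to a redundancy group in the interface configuration; the interface and group numbers here are illustrative:

{primary:node0}[edit]
user@host# set interfaces reth1 redundant-ether-options redundancy-group 1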
The traffic for a redundancy group is processed on the node where the redundancy group
is active. Because more than one redundancy group can be configured, it is possible that
the traffic from some redundancy groups is processed on one node while the traffic for
other redundancy groups is processed on the other node (depending on where the
redundancy group is active). Multiple redundancy groups make it possible for traffic to
arrive over an ingress interface of one redundancy group and over an egress interface
that belongs to another redundancy group. In this situation, the ingress and egress
interfaces might not be active on the same node. When this happens, the traffic is
forwarded over the fabric link to the appropriate node.
When you configure a redundancy group x, you must specify a priority for each node to
determine the node on which the redundancy group x is primary. The node with the higher
priority is selected as primary. The primacy of a redundancy group x can fail over from
one node to the other. When a redundancy group x fails over to the other node, its
redundant Ethernet interfaces on that node are active and their interfaces are passing
traffic.
Table 17 on page 124 gives an example of redundancy group x in an SRX Series chassis
cluster and indicates the node on which the group is primary. It shows the redundant
Ethernet interfaces and their interfaces configured for redundancy group x.
NOTE: Some devices have both Gigabit Ethernet ports and Fast Ethernet
ports.
Requirements
Before you begin:
1. Set the chassis cluster node ID and cluster ID. See “Example: Setting the Chassis
Cluster Node ID and Cluster ID” on page 92.
2. Configure the chassis cluster management interface. See “Example: Configuring the
Chassis Cluster Management Interface” on page 96.
3. Configure the chassis cluster fabric. See “Example: Configuring the Chassis Cluster
Fabric Interfaces” on page 109.
Overview
A chassis cluster redundancy group is an abstract entity that includes and manages a
collection of objects. Each redundancy group acts as an independent unit of failover and
is primary on only one node at a time.
In this example, you create two chassis cluster redundancy groups, 0 and 1. For
redundancy group 1, the preempt option is enabled, and the number of gratuitous ARP
requests that an interface can send to notify other network devices of its presence
after the redundancy group it belongs to has failed over is set to 4.
Configuration
CLI Quick To quickly configure this example, copy the following commands, paste them into a text
Configuration file, remove any line breaks, change any details necessary to match your network
configuration, copy and paste the commands into the CLI at the [edit] hierarchy level,
and then enter commit from configuration mode.
[edit]
set chassis cluster redundancy-group 0 node 0 priority 100
set chassis cluster redundancy-group 0 node 1 priority 1
set chassis cluster redundancy-group 1 node 0 priority 100
set chassis cluster redundancy-group 1 node 1 priority 1
set chassis cluster redundancy-group 1 preempt
set chassis cluster redundancy-group 1 gratuitous-arp-count 4
Step-by-Step To configure redundancy groups:
Procedure
1. Specify a priority for each node in redundancy groups 0 and 1.
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 0 node 0 priority 100
user@host# set chassis cluster redundancy-group 0 node 1 priority 1
user@host# set chassis cluster redundancy-group 1 node 0 priority 100
user@host# set chassis cluster redundancy-group 1 node 1 priority 1
2. Configure the node with the higher priority to preempt the device with the lower
priority and become primary for the redundancy group.
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 preempt
You cannot enable preemption for redundancy group 0. If you want to change the
primary node for redundancy group 0, you must do a manual failover.
3. Specify the number of gratuitous ARP requests that an interface can send to notify
other network devices of its presence after the redundancy group it belongs to has
failed over.
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 gratuitous-arp-count 4
Results From configuration mode, confirm your configuration by entering the show chassis cluster
command. If the output does not display the intended configuration, repeat the
configuration instructions in this example to correct it.
{primary:node0}[edit]
user@host# show chassis cluster
chassis {
cluster {
redundancy-group 0 {
node 0 priority 100;
node 1 priority 1;
}
redundancy-group 1 {
node 0 priority 100;
node 1 priority 1;
preempt;
gratuitous-arp-count 4;
}
}
}
If you are done configuring the device, enter commit from configuration mode.
Verification
Action From operational mode, enter the show chassis cluster status redundancy-group command.
{primary:node0}
user@host> show chassis cluster status redundancy-group 1
Cluster ID: 1
Node Priority Status Preempt Manual failover
If two or more child interfaces from each node are assigned to the redundant Ethernet interface, a redundant Ethernet
interface link aggregation group can be formed. A single redundant Ethernet interface
might include a Fast Ethernet interface from node 0 and a Fast Ethernet interface from
node 1 or a Gigabit Ethernet interface from node 0 and a Gigabit Ethernet interface from
node 1.
A redundant Ethernet interface's child interface is associated with the redundant Ethernet
interface as part of the child interface configuration. The redundant Ethernet interface
child interface inherits most of its configuration from its parent.
The maximum number of redundant Ethernet interfaces that you can configure varies,
depending on the device type you are using, as shown in Table 18 on page 130 and
Table 19 on page 131. Note that the number of redundant Ethernet interfaces configured
determines the number of redundancy groups that can be configured.
Device                              Maximum Number of Redundant Ethernet Interfaces
SRX4600                             128
SRX4100, SRX4200                    128
SRX5400, SRX5600, SRX5800           128
SRX300, SRX320, SRX340, SRX345      128
SRX550M                             58
SRX1500                             128
SRX100                              8
SRX210                              8
SRX220                              8
SRX240                              24
SRX550                              58
SRX650                              68
A redundant Ethernet interface inherits its failover properties from the redundancy group
x that it belongs to. A redundant Ethernet interface remains active as long as its primary
child interface is available or active. For example, if reth0 is associated with redundancy
group 1 and redundancy group 1 is active on node 0, then reth0 is up as long as the node
0 child of reth0 is up.
Point-to-Point Protocol over Ethernet (PPPoE) over redundant Ethernet (reth) interfaces is supported on SRX100, SRX210, SRX220, SRX240, SRX550, SRX650, SRX300, SRX320, SRX340, SRX345, and SRX550M devices in chassis cluster mode. This feature allows an existing PPPoE session to continue without starting a new PPPoE session in the event of a failover.
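In this arrangement the PPPoE logical interface (pp0) runs over the reth interface so that the session survives a failover. A minimal sketch, assuming reth1.0 is the underlying interface (the unit number and reconnect timer here are illustrative, not from this example):
{primary:node0}[edit]
user@host# set interfaces pp0 unit 0 pppoe-options underlying-interface reth1.0
user@host# set interfaces pp0 unit 0 pppoe-options auto-reconnect 120
user@host# set interfaces pp0 unit 0 family inet negotiate-address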
For example, the following shows the configuration of a standalone interface, followed by the equivalent chassis cluster configuration in which the child interface ge-2/0/2 is associated with reth2 and the address configuration resides on the reth2 parent:
ge-2/0/2 {
unit 0 {
family inet {
address 1.1.1.1/24;
}
}
}
interfaces {
ge-2/0/2 {
gigether-options {
redundant-parent reth2;
}
}
reth2 {
redundant-ether-options {
redundancy-group 1;
}
unit 0 {
family inet {
address 1.1.1.1/24;
}
}
}
}
Related Documentation
• Example: Configuring Chassis Cluster Redundant Ethernet Interfaces for IPv4 and IPv6 Addresses on page 133
Example: Configuring Chassis Cluster Redundant Ethernet Interfaces for IPv4 and IPv6
Addresses
This example shows how to configure chassis cluster redundant Ethernet interfaces. A
redundant Ethernet interface is a pseudointerface that contains two or more physical
interfaces, with at least one from each node of the cluster.
Requirements
Before you begin:
• Understand how to set the chassis cluster node ID and cluster ID. See “Example: Setting
the Chassis Cluster Node ID and Cluster ID” on page 92.
• Understand how to set the chassis cluster fabric. See “Example: Configuring the Chassis
Cluster Fabric Interfaces” on page 109.
• Understand how to set the chassis cluster node redundancy groups. See “Example:
Configuring Chassis Cluster Redundancy Groups” on page 125.
Overview
After physical interfaces have been assigned to the redundant Ethernet interface, you
set the configuration that pertains to them at the level of the redundant Ethernet interface,
and each of the child interfaces inherits the configuration.
If multiple child interfaces are present, then the speed of all the child interfaces must be
the same.
Configuration
CLI Quick Configuration
To quickly configure this example, copy the following commands, paste them into a text file, remove any line breaks, change any details necessary to match your network configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
set interfaces ge-0/0/0 gigether-options redundant-parent reth1
set interfaces ge-7/0/0 gigether-options redundant-parent reth1
set interfaces fe-1/0/0 fastether-options redundant-parent reth2
set interfaces fe-8/0/0 fastether-options redundant-parent reth2
set interfaces reth1 redundant-ether-options redundancy-group 1
set interfaces reth1 unit 0 family inet mtu 1500
set interfaces reth1 unit 0 family inet address 10.1.1.3/24
set security zones security-zone Trust interfaces reth1.0
To quickly configure this example, copy the following commands, paste them into a text
file, remove any line breaks, change any details necessary to match your network
configuration, copy and paste the commands into the CLI at the [edit] hierarchy level,
and then enter commit from configuration mode.
set interfaces ge-0/0/0 gigether-options redundant-parent reth1
set interfaces ge-7/0/0 gigether-options redundant-parent reth1
set interfaces fe-1/0/0 fastether-options redundant-parent reth2
set interfaces fe-8/0/0 fastether-options redundant-parent reth2
set interfaces reth2 redundant-ether-options redundancy-group 1
set interfaces reth2 unit 0 family inet6 mtu 1500
set interfaces reth2 unit 0 family inet6 address 2010:2010:201::2/64
set security zones security-zone Trust interfaces reth2.0
{primary:node0}[edit]
user@host# set interfaces ge-0/0/0 gigether-options redundant-parent reth1
user@host# set interfaces ge-7/0/0 gigether-options redundant-parent reth1
{primary:node0}[edit]
user@host# set interfaces fe-1/0/0 fastether-options redundant-parent reth2
user@host# set interfaces fe-8/0/0 fastether-options redundant-parent reth2
{primary:node0}[edit]
user@host# set interfaces reth1 redundant-ether-options redundancy-group 1
{primary:node0}[edit]
user@host# set interfaces reth1 unit 0 family inet mtu 1500
NOTE: The maximum transmission unit (MTU) set on the reth interface
can be different from the MTU on the child interface.
{primary:node0}[edit]
user@host# set interfaces reth1 unit 0 family inet address 10.1.1.3/24
{primary:node0}[edit]
user@host# set security zones security-zone Trust interfaces reth1.0
{primary:node0}[edit]
user@host# set interfaces ge-0/0/0 gigether-options redundant-parent reth1
user@host# set interfaces ge-7/0/0 gigether-options redundant-parent reth1
{primary:node0}[edit]
user@host# set interfaces fe-1/0/0 fastether-options redundant-parent reth2
user@host# set interfaces fe-8/0/0 fastether-options redundant-parent reth2
{primary:node0}[edit]
user@host# set interfaces reth2 redundant-ether-options redundancy-group 1
{primary:node0}[edit]
user@host# set interfaces reth2 unit 0 family inet6 mtu 1500
{primary:node0}[edit]
user@host# set interfaces reth2 unit 0 family inet6 address 2010:2010:201::2/64
{primary:node0}[edit]
user@host# set security zones security-zone Trust interfaces reth2.0
Results From configuration mode, confirm your configuration by entering the show interfaces command. If the output does not display the intended configuration, repeat the configuration instructions in this example to correct it.
For brevity, this show command output includes only the configuration that is relevant
to this example. Any other configuration on the system has been replaced with ellipses
(...).
{primary:node0}[edit]
user@host# show interfaces
interfaces {
...
fe-1/0/0 {
fastether-options {
redundant-parent reth2;
}
}
fe-8/0/0 {
fastether-options {
redundant-parent reth2;
}
}
ge-0/0/0 {
gigether-options {
redundant-parent reth1;
}
}
ge-7/0/0 {
gigether-options {
redundant-parent reth1;
}
}
reth1 {
redundant-ether-options {
redundancy-group 1;
}
unit 0 {
family inet {
mtu 1500;
address 10.1.1.3/24;
}
}
}
reth2 {
redundant-ether-options {
redundancy-group 1;
}
unit 0 {
family inet6 {
mtu 1500;
address 2010:2010:201::2/64;
}
}
}
...
}
If you are done configuring the device, enter commit from configuration mode.
Verification
Confirm that the configuration is working properly.
Purpose Verify the configuration of the chassis cluster redundant Ethernet interfaces.
Action From operational mode, enter the show interfaces | match reth1 command:
{primary:node0}
user@host> show interfaces | match reth1
ge-0/0/0.0 up down aenet --> reth1.0
ge-7/0/0.0 up down aenet --> reth1.0
reth1 up down
reth1.0 up down inet 10.1.1.3/24
Purpose Verify information about the control interface in a chassis cluster configuration.
Action From operational mode, enter the show chassis cluster interfaces command:
{primary:node0}
user@host> show chassis cluster interfaces
Control interfaces:
Index Interface Monitored-Status Internal-SA
0 em0 Down Disabled
1 em1 Down Disabled
Fabric interfaces:
Name Child-interface Status
(Physical/Monitored)
fab0
fab0
Redundant-pseudo-interface Information:
Name Status Redundancy-group
reth1 Up 1
Example: Configuring the Number of Redundant Ethernet Interfaces for a Chassis Cluster
This example shows how to specify the number of redundant Ethernet interfaces for a chassis cluster. You must configure the redundant Ethernet interface count so that the redundant Ethernet interfaces that you configure are recognized.
Requirements
Before you begin, set the chassis cluster ID and chassis cluster node ID. See “Example:
Setting the Chassis Cluster Node ID and Cluster ID” on page 92.
Overview
Before you configure redundant Ethernet interfaces for a chassis cluster, you must specify
the number of redundant Ethernet interfaces for the chassis cluster.
In this example, you set the number of redundant Ethernet interfaces for a chassis cluster
to 2.
Configuration
Step-by-Step Procedure
To set the number of redundant Ethernet interfaces for a chassis cluster:
1. Specify the number of redundant Ethernet interfaces:
{primary:node0}[edit]
user@host# set chassis cluster reth-count 2
[edit]
user@host# commit
Verification
Action To verify the configuration, enter the show configuration chassis cluster command.
Related Documentation
• Example: Configuring Chassis Cluster Redundant Ethernet Interfaces for IPv4 and IPv6 Addresses on page 133
Example: Setting Up Chassis Clustering on an SRX Series Device
This example shows how to set up chassis clustering on an SRX Series device (using the SRX1500 as an example).
Requirements
Before you begin:
• Physically connect the two devices and ensure that they are the same models. For
example, on the SRX1500 Services Gateway, connect the dedicated control ports on
node 0 and node 1.
• Set the two devices to cluster mode and reboot the devices. You must enter the
following operational mode commands on both devices, for example:
• On node 0:
user@host> set chassis cluster cluster-id 1 node 0 reboot
• On node 1:
user@host> set chassis cluster cluster-id 1 node 1 reboot
The cluster-id is the same on both devices, but the node ID must be different because
one device is node 0 and the other device is node 1. The range for the cluster-id is 0
through 255 and setting it to 0 is equivalent to disabling cluster mode.
• After clustering occurs for the devices, continuing with the SRX1500 Services Gateway
example, the ge-0/0/0 interface on node 1 changes to ge-7/0/0.
NOTE:
After the reboot, the following interfaces are assigned and repurposed to
form a cluster:
• For SRX300 and SRX320 devices, ge-0/0/0 becomes fxp0 and is used
for individual management of the chassis cluster.
See “Understanding SRX Series Chassis Cluster Slot Numbering and Physical Port and
Logical Interface Naming” on page 79 for complete mapping of the SRX Series devices.
From this point forward, configuration of the cluster is synchronized between the node
members and the two separate devices function as one device.
Overview
This example shows how to set up chassis clustering on an SRX Series device, using the SRX1500 device as an example.
Node 1 renumbers its interfaces by adding the chassis's total number of FPCs to the original FPC number of each interface. See Table 20 on page 143 for interface renumbering on the SRX Series device.
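As an illustration of this renumbering rule (not Junos functionality; the helper name below is ours), node 1's name for an interface can be computed from the node 0 name and the chassis's total FPC count:

```python
def renumber(interface: str, total_fpcs: int) -> str:
    """Return the node 1 name of a node 0 interface in a chassis cluster.

    Per the renumbering rule, node 1 adds the chassis's total number of
    FPCs to the interface's FPC number; the media type, PIC number, and
    port number are unchanged.
    """
    media, rest = interface.split("-", 1)
    fpc, pic, port = (int(n) for n in rest.split("/"))
    return f"{media}-{fpc + total_fpcs}/{pic}/{port}"

# On an SRX1500, whose node 1 interfaces start at FPC 7,
# node 1's copy of ge-0/0/0 becomes ge-7/0/0:
print(renumber("ge-0/0/0", 7))  # ge-7/0/0
```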
After clustering is enabled, the system creates the fxp0, fxp1, and em0 interfaces. The physical interfaces that fxp0, fxp1, and em0 map to depend on the device and are not user defined. However, the fab interface is user defined.
Figure: Chassis cluster topology. EX Series switches connect to both nodes in each zone. In the UNTRUST zone, reth1.0 (203.0.113.233/24) comprises ge-0/0/5 and ge-7/0/5; in the TRUST zone, reth0.0 (198.51.100.1/24) comprises ge-0/0/4 and ge-7/0/4.
Configuration
CLI Quick Configuration
To quickly configure a chassis cluster on an SRX1500 Services Gateway, copy the following commands and paste them into the CLI:
On {primary:node0}
[edit]
set groups node0 system host-name srx1500-1
set groups node0 interfaces fxp0 unit 0 family inet address 192.16.35.46/24
If you are configuring an SRX300, SRX320, SRX340, SRX345, or SRX550M device, see Table 21 on page 144 for the command and interface settings for your device and substitute these commands into your CLI.
Table 21: SRX Series Services Gateways Interface Settings (SRX300, SRX320, SRX340/SRX345, SRX550M)

Each entry completes the command set chassis cluster redundancy-group 1 interface-monitor <interface> weight 255 for the corresponding device:

SRX300      SRX320      SRX340/SRX345   SRX550M
ge-0/0/3    ge-0/0/3    ge-0/0/3        ge-1/0/0
ge-0/0/4    ge-0/0/4    ge-0/0/4        ge-10/0/0
ge-1/0/3    ge-3/0/3    ge-5/0/3        ge-1/0/1
ge-1/0/4    ge-3/0/4    ge-5/0/4        ge-10/0/1
Table 22: SRX Series Services Gateways Interface Settings (SRX100, SRX210, SRX220, SRX240, SRX550)

Each entry completes the command set chassis cluster redundancy-group 1 interface-monitor <interface> weight 255 for the corresponding device:

SRX100      SRX210      SRX220      SRX240      SRX550
fe-0/0/0    fe-0/0/3    ge-0/0/0    ge-0/0/5    ge-1/0/0
fe-0/0/2    fe-0/0/2    ge-3/0/0    ge-5/0/5    ge-10/0/0
fe-1/0/0    fe-2/0/3    ge-0/0/1    ge-0/0/6    ge-1/0/1
fe-1/0/2    fe-2/0/2    ge-3/0/1    ge-5/0/6    ge-10/0/1
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For instructions on how to do that, see Using the CLI Editor in Configuration Mode in the CLI User Guide.
NOTE: Perform Steps 1 through 5 on the primary device (node 0). They are
automatically copied over to the secondary device (node 1) when you execute
a commit command. The configurations are synchronized because the control
link and fab link interfaces are activated. To verify the configurations, use the
show interfaces terse command and review the output.
1. Set up hostnames and management IP addresses for each device using configuration
groups. These configurations are specific to each device and are unique to its specific
node.
user@host# set groups node0 system host-name srx1500-1
user@host# set groups node0 interfaces fxp0 unit 0 family inet address 192.16.35.46/24
user@host# set groups node1 system host-name srx1500-2
user@host# set groups node1 interfaces fxp0 unit 0 family inet address
192.16.35.47/24
Set the default route and backup router for each node.
user@host# set groups node0 system backup-router <backup next-hop from fxp0>
destination <management network/mask>
user@host# set groups node1 system backup-router <backup next-hop from fxp0>
destination <management network/mask>
Set the apply-group command so that the individual configurations for each node set by the previous commands are applied only to that node.
{primary:node0}[edit]
user@host# set apply-groups "${node}"
2. Define the interfaces used for the fab connection (data plane links for RTO sync) by using physical ports ge-0/0/1 from each node. These interfaces must be connected back-to-back, or through a Layer 2 infrastructure. (Node 1's ge-0/0/1 is named ge-7/0/1 after renumbering.)
{primary:node0}[edit]
user@host# set interfaces fab0 fabric-options member-interfaces ge-0/0/1
user@host# set interfaces fab1 fabric-options member-interfaces ge-7/0/1
3. Set up redundancy group 0 for the Routing Engine failover properties, and set up redundancy group 1 (all interfaces are in one redundancy group in this example) to define the failover properties for the redundant Ethernet interfaces.
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 0 node 0 priority 100
user@host# set chassis cluster redundancy-group 0 node 1 priority 1
user@host# set chassis cluster redundancy-group 1 node 0 priority 100
user@host# set chassis cluster redundancy-group 1 node 1 priority 1
4. Set up interface monitoring to monitor the health of the interfaces and trigger redundancy group failover.
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-0/0/3 weight 255
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-0/0/2 weight 255
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-7/0/2 weight 255
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-7/0/3 weight 255
5. Set up the redundant Ethernet (reth) interfaces and assign the redundant interfaces to zones.
{primary:node0}[edit]
user@host# set chassis cluster reth-count 2
user@host# set interfaces ge-0/0/3 gigether-options redundant-parent reth0
user@host# set interfaces ge-7/0/3 gigether-options redundant-parent reth0
user@host# set interfaces ge-0/0/2 gigether-options redundant-parent reth1
user@host# set interfaces ge-7/0/2 gigether-options redundant-parent reth1
user@host# set interfaces reth0 redundant-ether-options redundancy-group 1
user@host# set interfaces reth0 unit 0 family inet address 10.16.8.1/24
user@host# set interfaces reth1 redundant-ether-options redundancy-group 1
user@host# set interfaces reth1 unit 0 family inet address 1.2.0.233/24
user@host# set security zones security-zone Trust interfaces reth0.0
user@host# set security zones security-zone Untrust interfaces reth1.0
Results From operational mode, confirm your configuration by entering the show configuration
command. If the output does not display the intended configuration, repeat the
configuration instructions in this example to correct it.
For brevity, this show command output includes only the configuration that is relevant
to this example. Any other configuration on the system has been replaced with ellipses
(...).
groups {
...
}
apply-groups "${node}";
chassis {
cluster {
reth-count 2;
redundancy-group 0 {
node 0 priority 100;
node 1 priority 1;
}
redundancy-group 1 {
node 0 priority 100;
node 1 priority 1;
interface-monitor {
ge-0/0/3 weight 255;
ge-0/0/2 weight 255;
ge-7/0/2 weight 255;
ge-7/0/3 weight 255;
}
}
}
}
interfaces {
ge-0/0/2 {
gigether-options {
redundant-parent reth1;
}
}
ge-0/0/3 {
gigether-options {
redundant-parent reth0;
}
}
ge-7/0/2 {
gigether-options {
redundant-parent reth1;
}
}
ge-7/0/3 {
gigether-options {
redundant-parent reth0;
}
}
fab0 {
fabric-options {
member-interfaces {
ge-0/0/1;
}
}
}
fab1 {
fabric-options {
member-interfaces {
ge-7/0/1;
}
}
}
reth0 {
redundant-ether-options {
redundancy-group 1;
}
unit 0 {
family inet {
address 10.16.8.1/24;
}
}
}
reth1 {
redundant-ether-options {
redundancy-group 1;
}
unit 0 {
family inet {
address 1.2.0.233/24;
}
}
}
}
...
security {
zones {
security-zone Untrust {
interfaces {
reth1.0;
}
}
security-zone Trust {
interfaces {
reth0.0;
}
}
}
policies {
from-zone Trust to-zone Untrust {
policy 1 {
match {
source-address any;
destination-address any;
application any;
}
then {
permit;
}
}
}
}
}
If you are done configuring the device, enter commit from configuration mode.
Verification
Confirm that the configuration is working properly.
Purpose Verify the chassis cluster status, failover status, and redundancy group information.
Action From operational mode, enter the show chassis cluster status command.
{primary:node0}
user@host> show chassis cluster status
Cluster ID: 1
Node Priority Status Preempt Manual failover
Action From operational mode, enter the show chassis cluster interfaces command.
{primary:node0}
user@host> show chassis cluster interfaces
Control link name: em0
Redundant-ethernet Information:
Name Status Redundancy-group
reth0 Up 1
reth1 Up 1
Interface Monitoring:
Interface Weight Status Redundancy-group
ge-7/0/3 255 Up 1
ge-7/0/2 255 Up 1
ge-0/0/2 255 Up 1
ge-0/0/3 255 Up 1
Purpose Verify information about the statistics of the different objects being synchronized, the
fabric and control interface hellos, and the status of the monitored interfaces in the
cluster.
Action From operational mode, enter the show chassis cluster statistics command.
{primary:node0}
user@host> show chassis cluster statistics
Purpose Verify information about chassis cluster control plane statistics (heartbeats sent and
received) and the fabric link statistics (probes sent and received).
Action From operational mode, enter the show chassis cluster control-plane statistics command.
{primary:node0}
user@host> show chassis cluster control-plane statistics
Purpose Verify information about the number of RTOs sent and received for services.
Action From operational mode, enter the show chassis cluster data-plane statistics command.
{primary:node0}
user@host> show chassis cluster data-plane statistics
Services Synchronized:
Service name RTOs sent RTOs received
Translation context 0 0
Incoming NAT 0 0
Resource manager 6 0
Session create 161 0
Session close 148 0
Session change 0 0
Gate create 0 0
Session ageout refresh requests 0 0
Session ageout refresh replies 0 0
IPSec VPN 0 0
Firewall user authentication 0 0
MGCP ALG 0 0
H323 ALG 0 0
SIP ALG 0 0
SCCP ALG 0 0
PPTP ALG 0 0
RPC ALG 0 0
RTSP ALG 0 0
RAS ALG 0 0
MAC address learning 0 0
GPRS GTP 0 0
Purpose Verify the state and priority of both nodes in a cluster and information about whether
the primary node has been preempted or whether there has been a manual failover.
Action From operational mode, enter the show chassis cluster status redundancy-group command.
{primary:node0}
user@host> show chassis cluster status redundancy-group 1
Cluster ID: 1
Node Priority Status Preempt Manual failover
Purpose Use these logs to identify any chassis cluster issues. You should run these logs on both
nodes.
Example: Configuring Eight-Queue CoS on Redundant Ethernet Interfaces in a Chassis Cluster
This example shows how to enable eight-queue CoS on redundant Ethernet interfaces on SRX Series devices in a chassis cluster. This example is applicable to the SRX5800, SRX5600, SRX5400, SRX4200, and SRX4100.
Requirements
This example uses the following hardware and software components:
Overview
The SRX Series devices support eight queues, but only four queues are enabled by default.
Use the set chassis fpc x pic y max-queues-per-interface 8 command to enable eight queues explicitly at the chassis level. The values of x and y depend on the location of the IOC and the PIC number where the interface is located on the device on which CoS needs to be implemented. To find the IOC location, use the show chassis fpc pic-status or show chassis hardware command.
You must restart the chassis control for the configuration to take effect.
NOTE: On SRX Series devices, eight QoS queues are supported per ae
interface.
Figure 44 on page 156 shows how to configure eight-queue CoS on redundant Ethernet
interfaces on SRX Series devices in a chassis cluster.
Figure 44: Eight-queue CoS topology. Node 0 (SRX5600, Device 1) and node 1 (SRX5600, Device 2) are connected by a control link and a fabric link. Interfaces ge-5/1/14 and ge-11/1/14 form reth0; ge-5/1/15 and ge-11/1/15 form reth1.
Configuration
CLI Quick Configuration
To quickly configure this example, copy the following commands, paste them into a text file, remove any line breaks, change any details necessary to match your network configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For instructions on how to do that, see Using the CLI Editor in Configuration Mode in the CLI User Guide.
[edit chassis]
user@host# set fpc 5 pic 1 max-queues-per-interface 8
[edit interfaces]
user@host# set ge-5/1/14 gigether-options redundant-parent reth0
user@host# set ge-11/1/14 gigether-options redundant-parent reth0
user@host# set ge-5/1/15 gigether-options redundant-parent reth1
user@host# set ge-11/1/15 gigether-options redundant-parent reth1
user@host# set reth0 redundant-ether-options redundancy-group 1
user@host# set reth0 vlan-tagging
user@host# set reth0 unit 0 vlan-id 1350
user@host# set reth0 unit 0 family inet address 192.0.2.1/24
user@host# set reth1 hierarchical-scheduler
user@host# set reth1 vlan-tagging
user@host# set reth1 redundant-ether-options redundancy-group 2
user@host# set reth1 unit 0 vlan-id 1351
user@host# set reth1 unit 0 family inet address 192.0.2.2/24
user@host# set reth1 unit 1 vlan-id 1352
[edit class-of-service]
user@host# set classifiers inet-precedence inet_prec_4 forwarding-class q0
loss-priority low code-points 000
user@host# set classifiers inet-precedence inet_prec_4 forwarding-class q2
loss-priority low code-points 010
user@host# set classifiers inet-precedence inet_prec_4 forwarding-class q3
loss-priority low code-points 011
user@host# set classifiers inet-precedence inet_prec_4 forwarding-class q1
loss-priority low code-points 001
user@host# set classifiers inet-precedence inet_prec_4 forwarding-class q4
loss-priority low code-points 100
user@host# set classifiers inet-precedence inet_prec_4 forwarding-class q5
loss-priority low code-points 101
user@host# set classifiers inet-precedence inet_prec_4 forwarding-class q6
loss-priority low code-points 110
user@host# set classifiers inet-precedence inet_prec_4 forwarding-class q7
loss-priority low code-points 111
[edit class-of-service]
user@host# set forwarding-classes queue 0 q0
user@host# set forwarding-classes queue 1 q1
user@host# set forwarding-classes queue 2 q2
user@host# set forwarding-classes queue 3 q3
user@host# set forwarding-classes queue 4 q4
user@host# set forwarding-classes queue 5 q5
user@host# set forwarding-classes queue 6 q6
user@host# set forwarding-classes queue 7 q7
[edit class-of-service]
user@host# set traffic-control-profiles 1 scheduler-map sched_map
user@host# set traffic-control-profiles 1 shaping-rate 200m
[edit class-of-service]
user@host# set interfaces reth0 unit 0 classifiers inet-precedence inet_prec_4
[edit class-of-service]
user@host# set interfaces reth1 unit 0 output-traffic-control-profile 1
[edit class-of-service]
user@host# set scheduler-maps sched_map forwarding-class q0 scheduler S0
user@host# set scheduler-maps sched_map forwarding-class q1 scheduler S1
user@host# set scheduler-maps sched_map forwarding-class q2 scheduler S2
user@host# set scheduler-maps sched_map forwarding-class q3 scheduler S3
user@host# set scheduler-maps sched_map forwarding-class q4 scheduler S4
user@host# set scheduler-maps sched_map forwarding-class q5 scheduler S5
user@host# set scheduler-maps sched_map forwarding-class q6 scheduler S6
user@host# set scheduler-maps sched_map forwarding-class q7 scheduler S7
user@host# set schedulers S0 transmit-rate percent 20
user@host# set schedulers S1 transmit-rate percent 5
user@host# set schedulers S2 transmit-rate percent 5
user@host# set schedulers S3 transmit-rate percent 10
user@host# set schedulers S4 transmit-rate percent 10
user@host# set schedulers S5 transmit-rate percent 10
user@host# set schedulers S6 transmit-rate percent 10
user@host# set schedulers S7 transmit-rate percent 30
Results From configuration mode, confirm your configuration by entering the show class-of-service
command. If the output does not display the intended configuration, repeat the
configuration instructions in this example to correct it.
For brevity, this show command output includes only the configuration that is relevant
to this example. Any other configuration on the system has been replaced with ellipses
(...).
[edit]
user@host# show class-of-service
classifiers {
inet-precedence inet_prec_4 {
forwarding-class q0 {
loss-priority low code-points 000;
}
forwarding-class q2 {
loss-priority low code-points 010;
}
forwarding-class q3 {
loss-priority low code-points 011;
}
forwarding-class q1 {
loss-priority low code-points 001;
}
forwarding-class q4 {
loss-priority low code-points 100;
}
forwarding-class q5 {
loss-priority low code-points 101;
}
forwarding-class q6 {
loss-priority low code-points 110;
}
forwarding-class q7 {
loss-priority low code-points 111;
}
}
}
...
schedulers {
S0 {
transmit-rate percent 20;
}
S1 {
transmit-rate percent 5;
}
S2 {
transmit-rate percent 5;
}
S3 {
transmit-rate percent 10;
}
S4 {
transmit-rate percent 10;
}
S5 {
transmit-rate percent 10;
}
S6 {
transmit-rate percent 10;
}
S7 {
transmit-rate percent 30;
}
}
If you are done configuring the device, enter commit from configuration mode.
To restart chassis control, enter the restart chassis-control command from operational mode.
NOTE: When you execute the restart chassis-control command, all the FRU cards on the box are reset, which impacts traffic. Change the number of queues only during a scheduled downtime. It takes 5 to 10 minutes for the cards to come online after the restart chassis-control command is executed.
Verification
Related Documentation
• Understanding Chassis Cluster Control Plane and Control Links on page 115
• Preparing Your Equipment for Chassis Cluster Formation on page 61
• Understanding SRX Series Chassis Cluster Slot Numbering and Physical Port and
Logical Interface Naming on page 79
Action From the CLI, enter the show chassis cluster ? command:
{primary:node1}
user@host> show chassis cluster ?
Possible completions:
interfaces Display chassis-cluster interfaces
statistics Display chassis-cluster traffic statistics
status Display chassis-cluster status
Action From the CLI, enter the show chassis cluster statistics command:
{primary:node1}
user@host> show chassis cluster statistics
To clear displayed information about chassis cluster services and interfaces, enter the
clear chassis cluster statistics command from the CLI:
{primary:node1}
user@host> clear chassis cluster statistics
Dual control links, where two pairs of control link interfaces are connected between each device in a cluster, are supported on the SRX4600, SRX5600, and SRX5800 Services Gateways. Having two control links helps to avoid a possible single point of failure.
For the SRX5600 and SRX5800 Services Gateways, this functionality requires a second
Routing Engine, as well as a second Switch Control Board (SCB) to house the Routing
Engine, to be installed on each device in the cluster. The purpose of the second Routing
Engine is only to initialize the switch on the SCB.
NOTE: For the SRX5400 Services Gateways, dual control is not supported
due to limited slots.
NOTE: For the SRX3000 line, this functionality requires an SRX Clustering
Module (SCM) to be installed on each device in the cluster. Although the
SCM fits in the Routing Engine slot, it is not a Routing Engine. SRX3000 line
devices do not support a second Routing Engine. The purpose of the SCM is
to initialize the second control link.
NOTE: For the SRX5000 line, the second Routing Engine must be running
Junos OS Release 10.0 or later.
The second Routing Engine, to be installed on SRX5000 line devices only, does not
provide backup functionality. It does not need to be upgraded, even when there is a
software upgrade of the master Routing Engine on the same node. Note the following
conditions:
• You cannot run the CLI or enter configuration mode on the second Routing Engine.
• You do not need to set the chassis ID and cluster ID on the second Routing Engine.
• You need only a console connection to the second Routing Engine. (A console
connection is not needed unless you want to check that the second Routing Engine
booted up or to upgrade a software image.)
• You cannot log in to the second Routing Engine from the master Routing Engine.
• Example: Configuring Chassis Cluster Control Ports for Dual Control Links on page 171
Connecting Dual Control Links for SRX Series Devices in a Chassis Cluster
For SRX5600 and SRX5800 devices, you can connect two control links between the
two devices, effectively reducing the chance of control link failure.
Dual control links are not supported on SRX5400 due to the limited number of slots.
For SRX5600 and SRX5800 devices, connect two pairs of the same type of Ethernet
ports. For each device, you can use ports on the same Services Processing Card (SPC),
but we recommend that they be on two different SPCs to provide high availability.
Figure 45 on page 171 shows a pair of SRX5800 devices with dual control links connected.
In this example, control port 0 and control port 1 are connected on different SPCs.
NOTE: For SRX5600 and SRX5800 devices, you must connect control port
0 on one node to control port 0 on the other node and, likewise, control port
1 to control port 1. If you connect control port 0 to control port 1, the nodes
cannot receive heartbeat packets across the control links.
Example: Configuring Chassis Cluster Control Ports for Dual Control Links
This example shows how to configure chassis cluster control ports for use as dual control
links on SRX5600 and SRX5800 devices. You need to configure the control ports that
you will use on each device to set up the control links.
NOTE: Dual control links are not supported on an SRX5400 device due to
the limited number of slots.
Requirements
Before you begin:
• Understand chassis cluster control links. See “Understanding Chassis Cluster Control
Plane and Control Links” on page 115.
• Physically connect the control ports on the devices. See “Connecting SRX Series Devices
to Create a Chassis Cluster” on page 71.
Overview
By default, all control ports on SRX5600 and SRX5800 devices are disabled. The control
links are set up after you connect the control ports, configure them, and establish the
chassis cluster.
This example configures control ports with the following FPCs and ports as the dual
control links:
• FPC 4, port 0
• FPC 6, port 1
Configuration
CLI Quick Configuration
To quickly configure this section of the example, copy the following commands, paste
them into a text file, remove any line breaks, change any details necessary to match your
network configuration, copy and paste the commands into the CLI at the [edit] hierarchy
level, and then enter commit from configuration mode.
{primary:node0}[edit]
set chassis cluster control-ports fpc 4 port 0
set chassis cluster control-ports fpc 10 port 0
set chassis cluster control-ports fpc 6 port 1
set chassis cluster control-ports fpc 12 port 1
Step-by-Step Procedure
To configure control ports for use as dual control links for the chassis cluster:
• Specify the control ports.
{primary:node0}[edit]
user@host# set chassis cluster control-ports fpc 4 port 0
{primary:node0}[edit]
user@host# set chassis cluster control-ports fpc 10 port 0
{primary:node0}[edit]
user@host# set chassis cluster control-ports fpc 6 port 1
{primary:node0}[edit]
user@host# set chassis cluster control-ports fpc 12 port 1
Results From configuration mode, confirm your configuration by entering the show chassis cluster
command. If the output does not display the intended configuration, repeat the
configuration instructions in this example to correct it.
For brevity, this show command output includes only the configuration that is relevant
to this example. Any other configuration on the system has been replaced with ellipses
(...).
{primary:node0}[edit]
user@host# show chassis cluster
...
control-ports {
fpc 4 port 0;
fpc 6 port 1;
fpc 10 port 0;
fpc 12 port 1;
}
...
If you are done configuring the device, enter commit from configuration mode.
Verification
Action From operational mode, enter the show chassis cluster status command.
{primary:node0}
user@host> show chassis cluster status
Cluster ID: 1
Node Priority Status Preempt Manual failover
Meaning Use the show chassis cluster status command to confirm that the devices in the chassis
cluster are communicating with each other. The chassis cluster is functioning properly,
as one device is the primary node and the other is the secondary node.
Upgrading the Second Routing Engine When Using Chassis Cluster Dual Control Links
on SRX5600 and SRX5800 Devices
For SRX5600 and SRX5800 devices, a second Routing Engine is required for each device
in a cluster if you are using dual control links. The second Routing Engine does not provide
backup functionality; its purpose is only to initialize the switch on the Switch Control
Board (SCB). The second Routing Engine must be running Junos OS Release 12.1X47-D35,
12.3X48-D30, 15.1X49-D40 or later. For more information, see knowledge base article
KB30371.
NOTE: For the SRX5400 Services Gateways, dual control links are not supported
due to the limited number of slots.
Because you cannot run the CLI or enter configuration mode on the second Routing
Engine, you cannot upgrade the Junos OS image with the usual upgrade commands.
Instead, use the master Routing Engine to create a bootable USB storage device, which
you can then use to install a software image on the second Routing Engine.
1. Use FTP to copy the installation media into the /var/tmp directory of the master
Routing Engine.
2. Insert a USB storage device into the USB port on the master Routing Engine.
From the CLI, start the shell, change to the /var/tmp directory, and become the root
user:
start shell
cd /var/tmp
su
Password: [enter the root password]
Write the image that you copied to the master Routing Engine in Step 1 onto the USB
storage device. When the image has been written, exit the root shell:
exit
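The actual write command is elided above. On Junos, this step is typically performed with dd from the root shell; a hedged sketch, in which the image filename and the USB device node (/dev/da1) are placeholders that you must verify on your system before writing (see knowledge base article KB30371 for the exact procedure):

```
# Write the installation image to the USB storage device.
# Both the image filename and /dev/da1 are placeholders -- confirm the
# correct device node first, because dd overwrites the target device.
dd if=/var/tmp/junos-install-image.tgz of=/dev/da1 bs=64k
```

Writing to the wrong device node destroys its contents, so double-check the target before running the command.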
7. After the software image is written to the USB storage device, remove the device and
insert it into the USB port on the second Routing Engine.
8. Move the console connection from the master Routing Engine to the second Routing
Engine, if you do not already have a connection.
9. Reboot the second Routing Engine. Issue the following command (for Junos OS
Release 15.1X49-D65 and earlier):
# reboot
• When the following system output appears, remove the USB storage device and
press Enter:
Related Documentation
• Understanding Chassis Cluster Control Plane and Control Links on page 115
• Example: Configuring Chassis Cluster Control Ports for Dual Control Links on page 171
For a redundancy group to fail over automatically to another node, its interfaces must be
monitored. When you configure a redundancy group, you can specify a set of interfaces
that the redundancy group is to monitor for status (or “health”) to determine whether
the interface is up or down. A monitored interface can be a child interface of any of the
redundancy group's redundant Ethernet interfaces. When you configure an interface for
a redundancy group to monitor, you assign it a weight.
Every redundancy group has a threshold tolerance value initially set to 255. When an
interface monitored by a redundancy group becomes unavailable, its weight is subtracted
from the redundancy group's threshold. When a redundancy group's threshold reaches
0, it fails over to the other node. For example, if redundancy group 1 was primary on node
0, on the threshold-crossing event, redundancy group 1 becomes primary on node 1. In
this case, all the child interfaces of redundancy group 1's redundant Ethernet interfaces
begin handling traffic.
A redundancy group failover occurs because the cumulative weight of the redundancy
group's monitored interfaces has brought its threshold value to 0. When the monitored
interfaces of a redundancy group on both nodes reach their thresholds at the same time,
the redundancy group is primary on the node with the lower node ID, in this case node 0.
NOTE:
• If you want to dampen the failovers occurring because of interface
monitoring failures, use the hold-down-interval statement.
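The hold-down-interval statement mentioned in the note sets a minimum time that must elapse between failovers of a redundancy group; a minimal sketch, in which the 300-second value is a placeholder you would tune for your network:

```
{primary:node0}[edit]
set chassis cluster redundancy-group 1 hold-down-interval 300
```

With this statement, even if monitored-interface failures repeatedly drive the threshold to 0, back-to-back failovers are dampened to at most one per interval.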
Requirements
Before you begin, create a redundancy group. See “Example: Configuring Chassis Cluster
Redundancy Groups” on page 125.
Overview
To retrieve the remaining redundancy group threshold after a monitoring interface is
down, you can configure your system to monitor the health of the interfaces belonging
to a redundancy group. When you assign a weight to an interface to be monitored, the
system monitors the interface for availability. If a physical interface fails, the weight is
deducted from the corresponding redundancy group's threshold. Every redundancy group
has a threshold of 255. If the threshold hits 0, a failover is triggered, even if the redundancy
group is in manual failover mode and the preempt option is not enabled.
In this example, you track the remaining threshold of a redundancy group as its monitored
interfaces change state. You configure two interfaces from each node and map them to
Redundancy Group 1 (RG1), each with a different weight: 130 and 140 for the node 0
interfaces, and 150 and 120 for the node 1 interfaces. You also configure one interface
from each node and map the interfaces to Redundancy Group 2 (RG2), each with the
default weight of 255.
Figure 46 on page 180 illustrates the network topology used in this example.
Configuration
CLI Quick Configuration
To quickly configure this example, copy the following commands, paste them into a text
file, remove any line breaks, change any details necessary to match your network
configuration, copy and paste the commands into the CLI at the [edit] hierarchy level,
and then enter commit from configuration mode.
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration
hierarchy. For instructions on how to do that, see Using the CLI Editor in Configuration
Mode in the Junos OS CLI User Guide.
3. Set up redundancy group 0 for the Routing Engine failover properties, and set up
RG1 and RG2 (all interfaces are in one redundancy group in this example) to define
the failover properties for the redundant Ethernet interfaces.
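The commands for this step are not shown; a sketch consistent with the node priorities in the Results section of this example:

```
{primary:node0}[edit]
set chassis cluster redundancy-group 0 node 0 priority 254
set chassis cluster redundancy-group 0 node 1 priority 1
set chassis cluster redundancy-group 1 node 0 priority 200
set chassis cluster redundancy-group 1 node 1 priority 100
set chassis cluster redundancy-group 2 node 0 priority 200
set chassis cluster redundancy-group 2 node 1 priority 100
```

Node 0 gets the higher priority in each group, so it is primary for all redundancy groups until monitored-interface failures force a failover.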
4. Set up interface monitoring to monitor the health of the interfaces and trigger
redundancy group failover.
NOTE: Redundancy group failover occurs only after the group's threshold reaches zero.
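The monitoring commands for this step are not shown; a sketch consistent with the interface weights in the Results section of this example:

```
{primary:node0}[edit]
set chassis cluster redundancy-group 1 interface-monitor ge-0/0/1 weight 130
set chassis cluster redundancy-group 1 interface-monitor ge-0/0/2 weight 140
set chassis cluster redundancy-group 1 interface-monitor ge-8/0/1 weight 150
set chassis cluster redundancy-group 1 interface-monitor ge-8/0/2 weight 120
set chassis cluster redundancy-group 2 interface-monitor ge-0/0/3 weight 255
set chassis cluster redundancy-group 2 interface-monitor ge-8/0/3 weight 255
```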
5. Set up the redundant Ethernet (reth) interfaces and assign them to a zone.
[edit interfaces]
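The commands for this step are elided after the [edit interfaces] banner; a sketch consistent with the Results output, entered at the [edit interfaces] hierarchy level:

```
set ge-0/0/1 gigether-options redundant-parent reth0
set ge-8/0/1 gigether-options redundant-parent reth0
set ge-0/0/2 gigether-options redundant-parent reth1
set ge-8/0/2 gigether-options redundant-parent reth1
set ge-0/0/3 gigether-options redundant-parent reth2
set ge-8/0/3 gigether-options redundant-parent reth2
set reth0 redundant-ether-options redundancy-group 1
set reth0 unit 0 family inet address 10.1.1.1/24
set reth1 redundant-ether-options redundancy-group 1
set reth1 unit 0 family inet address 10.2.2.2/24
set reth2 redundant-ether-options redundancy-group 2
set reth2 unit 0 family inet address 10.3.3.3/24
```

The zone assignment, made at the [edit security zones] hierarchy level, is omitted here because the zone names are not shown in this example.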
Results From configuration mode, confirm your configuration by entering the show chassis and
show interfaces commands. If the output does not display the intended configuration,
repeat the configuration instructions in this example to correct it.
[edit]
user@host# show chassis
cluster {
traceoptions {
flag all;
}
reth-count 3;
node 0; ## Warning: 'node' is deprecated
node 1; ## Warning: 'node' is deprecated
redundancy-group 0 {
node 0 priority 254;
node 1 priority 1;
}
redundancy-group 1 {
node 0 priority 200;
node 1 priority 100;
interface-monitor {
ge-0/0/1 weight 130;
ge-0/0/2 weight 140;
ge-8/0/1 weight 150;
ge-8/0/2 weight 120;
}
}
redundancy-group 2 {
node 0 priority 200;
node 1 priority 100;
interface-monitor {
ge-0/0/3 weight 255;
ge-8/0/3 weight 255;
}
}
}
[edit]
user@host# show interfaces
ge-0/0/1 {
gigether-options {
redundant-parent reth0;
}
}
ge-0/0/2 {
gigether-options {
redundant-parent reth1;
}
}
ge-0/0/3 {
gigether-options {
redundant-parent reth2;
}
}
ge-8/0/1 {
gigether-options {
redundant-parent reth0;
}
}
ge-8/0/2 {
gigether-options {
redundant-parent reth1;
}
}
ge-8/0/3 {
gigether-options {
redundant-parent reth2;
}
}
reth0 {
redundant-ether-options {
redundancy-group 1;
}
unit 0 {
family inet {
address 10.1.1.1/24;
}
}
}
reth1 {
redundant-ether-options {
redundancy-group 1;
}
unit 0 {
family inet {
address 10.2.2.2/24;
}
}
}
reth2 {
redundant-ether-options {
redundancy-group 2;
}
unit 0 {
family inet {
address 10.3.3.3/24;
}
}
}
If you are done configuring the device, enter commit from configuration mode.
Verification
The following sections walk you through the process of verifying and (in some cases)
troubleshooting the interface status. The process shows you how to check the status of
each interface in the redundancy group, check the interfaces again after they have been
disabled, and look for details about each interface, until you have cycled through all
interfaces in the redundancy group.
In this example, you verify how the remaining threshold of a redundancy group changes
as monitored interfaces go down. Two interfaces from each node are mapped to RG1,
each with a different weight: 130 and 140 for the node 0 interfaces, and 150 and 120 for
the node 1 interfaces. One interface from each node is mapped to RG2, each with the
default weight of 255.
Purpose Verify the chassis cluster status, failover status, and redundancy group information.
Action From operational mode, enter the show chassis cluster status command.
{primary:node0}
user@host> show chassis cluster status
Monitor Failure codes:
CS Cold Sync monitoring FL Fabric Connection monitoring
GR GRES monitoring HW Hardware monitoring
IF Interface monitoring IP IP monitoring
LB Loopback monitoring MB Mbuf monitoring
NH Nexthop monitoring NP NPC monitoring
SP SPU monitoring SM Schedule monitoring
CF Config Sync monitoring
Cluster ID: 2
Node Priority Status Preempt Manual Monitor-failures
Meaning Use the show chassis cluster status command to confirm that devices in the chassis
cluster are communicating properly, with one device functioning as the primary node
and the other as the secondary node.
Purpose Verify information about the statistics of the different objects being synchronized, the
fabric and control interface hellos, and the status of the monitoring interfaces in the
cluster.
Action From operational mode, enter the show chassis cluster interfaces command.
{primary:node0}
user@host> show chassis cluster interfaces
Control interfaces:
Index Interface Monitored-Status Internal-SA
0 em0 Up Disabled
1 em1 Down Disabled
Fabric interfaces:
Name Child-interface Status
(Physical/Monitored)
fab0 ge-0/0/0 Up / Up
fab0
fab1 ge-8/0/0 Up / Up
fab1
Redundant-ethernet Information:
Name Status Redundancy-group
reth0 Up 1
reth1 Up 1
reth2 Up 2
Redundant-pseudo-interface Information:
Name Status Redundancy-group
lo0 Up 0
Interface Monitoring:
Interface Weight Status Redundancy-group
ge-8/0/2 120 Up 1
ge-8/0/1 150 Up 1
ge-0/0/2 140 Up 1
ge-0/0/1 130 Up 1
ge-8/0/3 255 Up 2
ge-0/0/3 255 Up 2
Meaning The sample output confirms that the monitoring interfaces are up and that the weight of
each monitored interface is displayed as configured. These values do not change when
an interface goes up or down; only the redundancy group's threshold changes, which
you can view by using the show chassis cluster information command.
Purpose Verify information about the statistics of the different objects being synchronized, the
fabric and control interface hellos, and the status of the monitoring interfaces in the
cluster.
Action From operational mode, enter the show chassis cluster information command.
{primary:node0}
user@host> show chassis cluster information
node0:
--------------------------------------------------------------------------
Redundancy Group Information:
node1:
--------------------------------------------------------------------------
Redundancy Group Information:
Meaning The sample output confirms that node 0 and node 1 are healthy, and the green LED on
the device indicates that there are no failures. Also, the redundancy group's default
threshold (255) is displayed. An interface's weight is deducted from this threshold
whenever a monitored interface mapped to the corresponding redundancy group goes
down.
Refer to subsequent verification sections to see how the redundancy group value varies
when a monitoring interface goes down or comes up.
Action From configuration mode, enter the set interfaces ge-0/0/1 disable command.
{primary:node0}
user@host# set interfaces ge-0/0/1 disable
user@host# commit
node0:
configuration check succeeds
node1:
commit complete
node0:
commit complete
{primary:node0}
user@host# show interfaces ge-0/0/1
disable;
gigether-options {
redundant-parent reth0;
}
Verifying Chassis Cluster Status After Disabling Interface ge-0/0/1 of RG1 in Node
0 with a Weight of 130
Purpose Verify the chassis cluster status, failover status, and redundancy group information.
Action From operational mode, enter the show chassis cluster status command.
{primary:node0}
user@host> show chassis cluster status
Monitor Failure codes:
CS Cold Sync monitoring FL Fabric Connection monitoring
GR GRES monitoring HW Hardware monitoring
IF Interface monitoring IP IP monitoring
LB Loopback monitoring MB Mbuf monitoring
NH Nexthop monitoring NP NPC monitoring
SP SPU monitoring SM Schedule monitoring
CF Config Sync monitoring
Cluster ID: 2
Node Priority Status Preempt Manual Monitor-failures
Meaning Use the show chassis cluster status command to confirm that devices in the chassis
cluster are communicating properly, with one device functioning as the primary node
and the other as the secondary node.
Purpose Verify information about the statistics of the different objects being synchronized, the
fabric and control interface hellos, and the status of the monitoring interfaces in the
cluster.
Action From operational mode, enter the show chassis cluster interfaces command.
{primary:node0}
user@host> show chassis cluster interfaces
Control link status: Up
Control interfaces:
Index Interface Monitored-Status Internal-SA
0 em0 Up Disabled
1 em1 Down Disabled
Fabric interfaces:
Name Child-interface Status
(Physical/Monitored)
fab0 ge-0/0/0 Up / Up
fab0
fab1 ge-8/0/0 Up / Up
fab1
Redundant-ethernet Information:
Name Status Redundancy-group
reth0 Down 1
reth1 Up 1
reth2 Up 2
Redundant-pseudo-interface Information:
Name Status Redundancy-group
lo0 Up 0
Interface Monitoring:
Interface Weight Status Redundancy-group
ge-8/0/2 120 Up 1
ge-8/0/1 150 Up 1
ge-0/0/2 140 Up 1
Meaning The sample output confirms that monitoring interface ge-0/0/1 is down.
Purpose Verify information about the statistics of the different objects being synchronized, the
fabric and control interface hellos, and the status of the monitoring interfaces in the
cluster.
Action From operational mode, enter the show chassis cluster information command.
{primary:node0}
user@host> show chassis cluster information
node0:
--------------------------------------------------------------------------
Redundancy Group Information:
Failure Information:
node1:
--------------------------------------------------------------------------
Redundancy Group Information:
Meaning The sample output confirms that in node 0, the RG1 threshold is reduced to 125 (that is,
255 minus 130) because monitoring interface ge-0/0/1 (weight of 130) went down. The
monitoring status is unhealthy, the device LED is amber, and the interface status of
ge-0/0/1 is down.
NOTE: If interface ge-0/0/1 is brought back up, the RG1 threshold on node 0
is restored to 255. Conversely, if interface ge-0/0/2 is also disabled, the RG1
threshold on node 0 drops to 0 or less (in this example, 125 minus 140 = -15)
and triggers a failover, as indicated in the next verification section.
Action From configuration mode, enter the set interfaces ge-0/0/2 disable command.
{primary:node0}
user@host# set interfaces ge-0/0/2 disable
user@host# commit
node0:
configuration check succeeds
node1:
commit complete
node0:
commit complete
{primary:node0}
user@host# show interfaces ge-0/0/2
disable;
gigether-options {
redundant-parent reth1;
}
Purpose Verify the chassis cluster status, failover status, and redundancy group information.
Action From operational mode, enter the show chassis cluster status command.
{primary:node0}
user@host> show chassis cluster status
Monitor Failure codes:
CS Cold Sync monitoring FL Fabric Connection monitoring
GR GRES monitoring HW Hardware monitoring
IF Interface monitoring IP IP monitoring
LB Loopback monitoring MB Mbuf monitoring
NH Nexthop monitoring NP NPC monitoring
SP SPU monitoring SM Schedule monitoring
CF Config Sync monitoring
Cluster ID: 2
Node Priority Status Preempt Manual Monitor-failures
Meaning Use the show chassis cluster status command to confirm that devices in the chassis
cluster are communicating properly, with one device functioning as the primary node
and the other as the secondary node. On RG1, you see interface failure, because both
interfaces mapped to RG1 on node 0 failed during interface monitoring.
Action From operational mode, enter the show chassis cluster interfaces command.
{primary:node0}
user@host> show chassis cluster interfaces
Control interfaces:
Index Interface Monitored-Status Internal-SA
0 em0 Up Disabled
1 em1 Down Disabled
Fabric interfaces:
Name Child-interface Status
(Physical/Monitored)
fab0 ge-0/0/0 Up / Up
fab0
fab1 ge-8/0/0 Up / Up
fab1
Redundant-ethernet Information:
Name Status Redundancy-group
reth0 Up 1
reth1 Up 1
reth2 Up 2
Redundant-pseudo-interface Information:
Name Status Redundancy-group
lo0 Up 0
Interface Monitoring:
Interface Weight Status Redundancy-group
ge-8/0/2 120 Up 1
ge-8/0/1 150 Up 1
ge-0/0/2 140 Down 1
ge-0/0/1 130 Down 1
ge-8/0/3 255 Up 2
ge-0/0/3 255 Up 2
Meaning The sample output confirms that monitoring interfaces ge-0/0/1 and ge-0/0/2 are down.
Purpose Verify information about the statistics of the different objects being synchronized, the
fabric and control interface hellos, and the status of the monitoring interfaces in the
cluster.
Action From operational mode, enter the show chassis cluster information command.
{primary:node0}
user@host> show chassis cluster information
node0:
--------------------------------------------------------------------------
Redundancy Group Information:
Failure Information:
node1:
--------------------------------------------------------------------------
Redundancy Group Information:
Meaning The sample output confirms that in node 0, monitoring interfaces ge-0/0/1 and ge-0/0/2
are down. The RG1 threshold on node 0 reached zero, which triggered the RG1 failover
shown in the show chassis cluster status output.
NOTE: For RG2, the default weight of 255 is set for redundant Ethernet
interface 2 (reth2). When interface monitoring is required and you do not
have backup links like those in RG1, we recommend that you use the default
weight. In that case, if interface ge-0/0/3 is disabled, it immediately triggers
a failover because the threshold becomes 0 (255 minus 255), as indicated in
the next verification section.
Action From configuration mode, enter the set interfaces ge-0/0/3 disable command.
{primary:node0}
user@host# set interfaces ge-0/0/3 disable
user@host# commit
node0:
configuration check succeeds
node1:
commit complete
node0:
commit complete
{primary:node0}
user@host# show interfaces ge-0/0/3
disable;
gigether-options {
redundant-parent reth2;
}
Purpose Verify the chassis cluster status, failover status, and redundancy group information.
Action From operational mode, enter the show chassis cluster status command.
{primary:node0}
user@host> show chassis cluster status
Monitor Failure codes:
CS Cold Sync monitoring FL Fabric Connection monitoring
GR GRES monitoring HW Hardware monitoring
IF Interface monitoring IP IP monitoring
LB Loopback monitoring MB Mbuf monitoring
NH Nexthop monitoring NP NPC monitoring
SP SPU monitoring SM Schedule monitoring
Cluster ID: 2
Node Priority Status Preempt Manual Monitor-failures
Meaning Use the show chassis cluster status command to confirm that devices in the chassis
cluster are communicating properly, with one device functioning as the primary node
and the other as the secondary node.
Action From operational mode, enter the show chassis cluster interfaces command.
{primary:node0}
user@host> show chassis cluster interfaces
Control link status: Up
Control interfaces:
Index Interface Monitored-Status Internal-SA
0 em0 Up Disabled
1 em1 Down Disabled
Fabric interfaces:
Name Child-interface Status
(Physical/Monitored)
fab0 ge-0/0/0 Up / Up
fab0
fab1 ge-8/0/0 Up / Up
fab1
Redundant-ethernet Information:
Name Status Redundancy-group
reth0 Up 1
reth1 Up 1
reth2 Up 2
Redundant-pseudo-interface Information:
Name Status Redundancy-group
lo0 Up 0
Interface Monitoring:
Interface Weight Status Redundancy-group
ge-8/0/2 120 Up 1
ge-8/0/1 150 Up 1
ge-0/0/2 140 Down 1
ge-0/0/1 130 Down 1
ge-8/0/3 255 Up 2
ge-0/0/3 255 Down 2
Meaning The sample output confirms that monitoring interfaces ge-0/0/1, ge-0/0/2, and ge-0/0/3
are down.
Purpose Verify information about the statistics of the different objects being synchronized, the
fabric and control interface hellos, and the status of the monitoring interfaces in the
cluster.
Action From operational mode, enter the show chassis cluster information command.
{primary:node0}
user@host> show chassis cluster information
node0:
--------------------------------------------------------------------------
Redundancy Group Information:
Failure Information:
node1:
--------------------------------------------------------------------------
Redundancy Group Information:
Meaning The sample output confirms that in node 0, monitoring interfaces ge-0/0/1, ge-0/0/2,
and ge-0/0/3 are down.
Action From configuration mode, enter the delete interfaces ge-0/0/2 disable command.
{primary:node0}
node0:
configuration check succeeds
node1:
commit complete
node0:
commit complete
Meaning The sample output confirms that the disable statement on interface ge-0/0/2 has been deleted.
Purpose Verify the chassis cluster status, failover status, and redundancy group information.
Action From operational mode, enter the show chassis cluster status command.
{primary:node0}
user@host> show chassis cluster status
Monitor Failure codes:
CS Cold Sync monitoring FL Fabric Connection monitoring
GR GRES monitoring HW Hardware monitoring
IF Interface monitoring IP IP monitoring
LB Loopback monitoring MB Mbuf monitoring
NH Nexthop monitoring NP NPC monitoring
SP SPU monitoring SM Schedule monitoring
CF Config Sync monitoring
Cluster ID: 2
Node Priority Status Preempt Manual Monitor-failures
Meaning Use the show chassis cluster status command to confirm that devices in the chassis
cluster are communicating properly, with one device functioning as the primary node
and the other as the secondary node.
Action From operational mode, enter the show chassis cluster interfaces command.
{primary:node0}
user@host> show chassis cluster interfaces
Control link status: Up
Control interfaces:
Index Interface Monitored-Status Internal-SA
0 em0 Up Disabled
1 em1 Down Disabled
Fabric interfaces:
Name Child-interface Status
(Physical/Monitored)
fab0 ge-0/0/0 Up / Up
fab0
fab1 ge-8/0/0 Up / Up
fab1
Redundant-ethernet Information:
Name Status Redundancy-group
reth0 Up 1
reth1 Up 1
reth2 Up 2
Redundant-pseudo-interface Information:
Name Status Redundancy-group
lo0 Up 0
Interface Monitoring:
Interface Weight Status Redundancy-group
ge-8/0/2 120 Up 1
ge-8/0/1 150 Up 1
ge-0/0/2 140 Up 1
ge-0/0/1 130 Down 1
ge-8/0/3 255 Up 2
ge-0/0/3 255 Down 2
Meaning The sample output confirms that monitoring interfaces ge-0/0/1 and ge-0/0/3 are down.
Monitoring interface ge-0/0/2 is up after its disable statement was deleted.
Purpose Verify information about the statistics of the different objects being synchronized, the
fabric and control interface hellos, and the status of the monitoring interfaces in the
cluster.
Action From operational mode, enter the show chassis cluster information command.
{primary:node0}
user@host> show chassis cluster information
node0:
--------------------------------------------------------------------------
Redundancy Group Information:
Failure Information:
node1:
--------------------------------------------------------------------------
Redundancy Group Information:
Meaning The sample output confirms that in node 0, monitoring interfaces ge-0/0/1 and ge-0/0/3
are down. Monitoring interface ge-0/0/2 is active again after its disable statement was deleted.
Action From configuration mode, enter the set chassis cluster redundancy-group 2 preempt
command.
{primary:node0}
user@host# set chassis cluster redundancy-group 2 preempt
user@host# commit
node0:
configuration check succeeds
node1:
commit complete
node0:
commit complete
Meaning The sample output confirms that the preempt option is now enabled for chassis cluster RG2 on node 0.
NOTE: In the next section, you check that RG2 fails back to node 0 (because
preempt is enabled) when the disabled node 0 interface is brought back online.
Purpose Verify the chassis cluster status, failover status, and redundancy group information.
Action From operational mode, enter the show chassis cluster status command.
{primary:node0}
user@host> show chassis cluster status
Monitor Failure codes:
CS Cold Sync monitoring FL Fabric Connection monitoring
GR GRES monitoring HW Hardware monitoring
IF Interface monitoring IP IP monitoring
LB Loopback monitoring MB Mbuf monitoring
NH Nexthop monitoring NP NPC monitoring
Cluster ID: 2
Node Priority Status Preempt Manual Monitor-failures
Meaning Use the show chassis cluster status command to confirm that devices in the chassis
cluster are communicating properly, with one device functioning as the primary node
and the other as the secondary node.
Action From configuration mode, enter the delete interfaces ge-0/0/3 disable command.
{primary:node0}
user@host# delete interfaces ge-0/0/3 disable
user@host# commit
node0:
configuration check succeeds
node1:
commit complete
node0:
commit complete
Meaning The sample output confirms that the disable statement on interface ge-0/0/3 has been deleted.
Purpose Verify the chassis cluster status, failover status, and redundancy group information.
Action From operational mode, enter the show chassis cluster status command.
{primary:node0}
user@host> show chassis cluster status
Cluster ID: 2
Node Priority Status Preempt Manual Monitor-failures
Meaning Use the show chassis cluster status command to confirm that devices in the chassis
cluster are communicating properly, with one device functioning as the primary node
and the other as the secondary node.
Action From operational mode, enter the show chassis cluster interfaces command.
{primary:node0}
user@host> show chassis cluster interfaces
Control link status: Up
Control interfaces:
Index Interface Monitored-Status Internal-SA
0 em0 Up Disabled
1 em1 Down Disabled
Fabric interfaces:
Name Child-interface Status
(Physical/Monitored)
fab0 ge-0/0/0 Up / Up
fab0
fab1 ge-8/0/0 Up / Up
fab1
Redundant-ethernet Information:
Name Status Redundancy-group
reth0 Up 1
reth1 Up 1
reth2 Up 2
Redundant-pseudo-interface Information:
Name Status Redundancy-group
lo0 Up 0
Interface Monitoring:
Interface Weight Status Redundancy-group
ge-8/0/2 120 Up 1
ge-8/0/1 150 Up 1
ge-0/0/2 140 Up 1
ge-0/0/1 130 Down 1
ge-8/0/3 255 Up 2
ge-0/0/3 255 Up 2
Meaning The sample output confirms that monitoring interface ge-0/0/1 is down. Monitoring
interfaces ge-0/0/2 and ge-0/0/3 are up after their disable statements were deleted.
Purpose Verify information about the statistics of the different objects being synchronized, the
fabric and control interface hellos, and the status of the monitoring interfaces in the
cluster.
Action From operational mode, enter the show chassis cluster information command.
{primary:node0}
user@host> show chassis cluster information
node0:
--------------------------------------------------------------------------
Redundancy Group Information:
Failure Information:
node1:
--------------------------------------------------------------------------
Redundancy Group Information:
Meaning The sample output confirms that in node 0, monitoring interface ge-0/0/1 is down. RG2
on node 0 is back in the primary state (because preempt is enabled), with a healthy
threshold of 255, now that interface ge-0/0/3 is back up.
• Understanding SRX Series Chassis Cluster Slot Numbering and Physical Port and
Logical Interface Naming on page 79
IP address monitoring configuration allows you to set not only the address to monitor
and its failover weight but also a global IP address monitoring threshold and weight. Only
after the IP address monitoring global-threshold is reached, because of cumulative
monitored-address reachability failures, is the IP address monitoring global-weight value
deducted from the redundancy group’s failover threshold. Thus, multiple addresses
can be monitored simultaneously, each weighted to reflect its importance to
maintaining traffic flow. Also, when a monitored IP address that was unreachable
becomes reachable again, its weight is restored to the monitoring threshold. This
restoration will not, however, cause a failback unless the preempt option has been enabled.
Starting in Junos OS Release 12.1X46-D35 and Junos OS Release 17.3R1, for all SRX Series
devices, the reth interface supports proxy ARP.
One Services Processing Unit (SPU) or Packet Forwarding Engine (PFE) per node is
designated to send Internet Control Message Protocol (ICMP) ping packets for the
monitored IP addresses on the cluster. The primary PFE sends ping packets using Address
Resolution Protocol (ARP) requests resolved by the Routing Engine (RE). The source for
these pings is the redundant Ethernet interface MAC and IP addresses. The secondary
PFE resolves ARP requests for the monitored IP address itself. The source for these pings
is the physical child MAC address and a secondary IP address configured on the redundant
Ethernet interface. For the ping reply to be received on the secondary interface, the I/O
card (IOC), central PFE processor, or Flex IOC adds both the physical child MAC address
and the redundant Ethernet interface MAC address to its MAC table. The secondary PFE
responds with the physical child MAC address to ARP requests sent to the secondary IP
address configured on the redundant Ethernet interface.
The default interval to check the reachability of a monitored IP address is once per second.
The interval can be adjusted using the retry-interval command. The default number of
permitted consecutive failed ping attempts is 5. The number of allowed consecutive
failed ping attempts can be adjusted using the retry-count command. After failing to
reach a monitored IP address for the configured number of consecutive attempts, the IP
address is determined to be unreachable and its failover value is deducted from the
redundancy group's global-threshold.
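For example, the two timers described above can be tuned together from configuration mode. The values shown here are illustrative, not recommendations:

{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 ip-monitoring retry-interval 3
user@host# set chassis cluster redundancy-group 1 ip-monitoring retry-count 10

With these illustrative values, a monitored address is declared unreachable after 10 consecutive failed pings sent 3 seconds apart, or approximately 30 seconds.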
NOTE: On SRX5600 and SRX5800 devices, only two of the 10 ports on each
PIC of 40-port 1-Gigabit Ethernet I/O cards (IOCs) can simultaneously enable
IP address monitoring. Because there are four PICs per IOC, this permits a
total of eight ports per IOC to be monitored. If more than two ports per PIC
on 40-port 1-Gigabit Ethernet IOCs are configured for IP address monitoring,
the commit will succeed but a log entry will be generated, and the accuracy
and stability of IP address monitoring cannot be ensured. This limitation does
not apply to any other IOCs or devices.
Once the IP address is determined to be unreachable, its weight is deducted from the
global-threshold. If the recalculated global-threshold value is not 0, the IP address is
marked unreachable, but the global-weight is not deducted from the redundancy group’s
threshold. If the redundancy group IP monitoring global-threshold reaches 0 and there
are unreachable IP addresses, the redundancy group will continuously fail over and fail
back between the nodes until either an unreachable IP address becomes reachable or
a configuration change removes unreachable IP addresses from monitoring. Note that
both default and configured hold-down-interval failover dampening is still in effect.
Every redundancy group x has a threshold tolerance value initially set to 255. When an
IP address monitored by redundancy group x becomes unavailable, its weight is subtracted
from the redundancy group x's threshold. When redundancy group x's threshold reaches
0, it fails over to the other node. For example, if redundancy group 1 was primary on node
0, on the threshold-crossing event, redundancy group 1 becomes primary on node 1. In
this case, all the child interfaces of redundancy group 1's redundant Ethernet interfaces
begin handling traffic.
A redundancy group x failover occurs because the cumulative weight of the redundancy
group x's monitored IP addresses and other monitoring has brought its threshold value
to 0. When the monitored IP addresses of redundancy group x on both nodes reach their
thresholds at the same time, redundancy group x is primary on the node with the lower
node ID, which is typically node 0.
NOTE: Upstream device failure detection for the chassis cluster feature is
supported on SRX Series devices.
This example shows how to configure redundancy group IP address monitoring for an
SRX Series device in a chassis cluster.
Requirements
Before you begin:
• Set the chassis cluster node ID and cluster ID. See “Example: Setting the Chassis Cluster
Node ID and Cluster ID for SRX Series Devices” on page 92
• Configure the chassis cluster management interface. See “Example: Configuring the
Chassis Cluster Management Interface” on page 96.
• Configure the chassis cluster fabric. See “Example: Configuring the Chassis Cluster
Fabric Interfaces” on page 109.
Overview
You can configure redundancy groups to monitor upstream resources by pinging specific
IP addresses that are reachable through redundant Ethernet interfaces on either node
in a cluster. You can also configure global threshold, weight, retry interval, and retry count
parameters for a redundancy group. When a monitored IP address becomes unreachable,
the weight of that monitored IP address is deducted from the redundancy group IP
address monitoring global threshold. When the global threshold reaches 0, the global
weight is deducted from the redundancy group threshold. The retry interval determines
the ping interval for each IP address monitored by the redundancy group. The pings are
sent as soon as the configuration is committed. The retry count sets the number of
allowed consecutive ping failures for each IP address monitored by the redundancy group.
In this example, you configure the following settings for redundancy group 1:
• IP address to monitor—10.1.1.10
• IP address retry-count—10
• Weight—150
• Secondary IP address—10.1.1.101
Configuration
CLI Quick Configuration

To quickly configure this example, copy the following commands, paste them into a text
file, remove any line breaks, change any details necessary to match your network
configuration, copy and paste the commands into the CLI at the [edit] hierarchy level,
and then enter commit from configuration mode.
{primary:node0}[edit]
user@host#
set chassis cluster redundancy-group 1 ip-monitoring global-weight 100
set chassis cluster redundancy-group 1 ip-monitoring global-threshold 200
set chassis cluster redundancy-group 1 ip-monitoring retry-interval 3
set chassis cluster redundancy-group 1 ip-monitoring retry-count 10
set chassis cluster redundancy-group 1 ip-monitoring family inet 10.1.1.10 weight 150
interface reth1.0 secondary-ip-address 10.1.1.101
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 ip-monitoring global-weight
100
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 ip-monitoring global-threshold
200
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 ip-monitoring retry-interval 3
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 ip-monitoring retry-count 10
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 ip-monitoring family inet 10.1.1.10
weight 150 interface reth1.0 secondary-ip-address 10.1.1.101
Results

From configuration mode, confirm your configuration by entering the show chassis cluster
redundancy-group 1 command. If the output does not display the intended configuration,
repeat the configuration instructions in this example to correct it.
For brevity, this show command output includes only the configuration that is relevant
to this example. Any other configuration on the system has been replaced with ellipses
(...).
{primary:node0}[edit]
user@host# show chassis cluster redundancy-group 1
ip-monitoring {
global-weight 100;
global-threshold 200;
family {
inet {
10.1.1.10 {
weight 150;
interface reth1.0 secondary-ip-address 10.1.1.101;
}
}
}
}
If you are done configuring the device, enter commit from configuration mode.
Verification
Action

From operational mode, enter the show chassis cluster ip-monitoring status command.
For information about a specific group, enter the show chassis cluster ip-monitoring status
redundancy-group command.
{primary:node0}
user@host> show chassis cluster ip-monitoring status
node0:
--------------------------------------------------------------------------
Redundancy group: 1
Global threshold: 200
Current threshold: -120
node1:
--------------------------------------------------------------------------
Redundancy group: 1
Global threshold: 200
Current threshold: -120
Related Documentation

• Understanding Chassis Cluster Redundancy Group Interface Monitoring on page 177
• Understanding Chassis Cluster Redundancy Group IP Address Monitoring on page 207
There are various types of objects to monitor as you work with devices configured as
chassis clusters, including global-level objects and objects that are specific to redundancy
groups. This section describes the monitoring of global-level objects.
The SRX5000 line has one or more Services Processing Units (SPUs) that run on a
Services Processing Card (SPC). All flow-based services run on the SPU. Other SRX
Series devices have a flow-based forwarding process, flowd, which forwards packets
through the device.
Persistent SPU and central point failure on a node is deemed a catastrophic Packet
Forwarding Engine (PFE) failure. In this case, the node's PFE is disabled in the cluster by
reducing the priorities of redundancy groups x to 0.
• A central point failure triggers failover to the secondary node. The failed node's PFE,
which includes all SPCs and all I/O cards (IOCs), is automatically restarted. If the
secondary central point has failed as well, the cluster is unable to come up because
there is no primary device. Only the data plane (redundancy group x) is failed over.
• A single, failed SPU causes failover of redundancy group x to the secondary node. All
IOCs and SPCs on the failed node are restarted and redundancy group x is failed over
to the secondary node. Failover to the secondary node is automatic without the need
for user intervention. When the failed (former) primary node has its failing component
restored, failback is determined by the preempt configuration for the redundancy group
x. The interval for dead SPU detection is 30 seconds.
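You can observe the behavior described above from operational mode; a node whose Packet Forwarding Engine has been disabled reports priority 0 for its redundancy groups x. The redundancy-group number below matches this discussion and is illustrative:

{primary:node0}
user@host> show chassis cluster status redundancy-group 1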
The following list describes the limitations for inserting an SPC on SRX5400, SRX5600,
and SRX5800 devices in chassis cluster mode:
• The chassis cluster must be in active/passive mode before and during the SPC insert
procedure.
• A new SPC must be inserted in a slot that is higher than the central point slot.
• The existing combo central point cannot be changed to a full central point after the
new SPC is inserted.
• During an SPC insert procedure, the IKE and IPsec configurations cannot be modified.
• Users cannot specify the SPU and the IKE instance to anchor a tunnel.
• After a new SPC is inserted, existing tunnels cannot use the processing power of the
new SPC and are not redistributed to it.
A failed flowd process causes failover of redundancy group x to the secondary node.
Failover to the secondary node is automatic without the need for user intervention. When
the failed (former) primary node has its failing component restored, failback is determined
by the preempt configuration for the redundancy group x.
During SPC and flowd monitoring failures on a local node, the data plane redundancy
group RG1+ fails over to the other node that is in a good state. However, the control plane
RG0 does not fail over and remains primary on the same node as it was before the failure.
When the node is rebooted, or when the SPUs or flowd come back up from failure, the
priority for all the redundancy groups 1+ is 0. When an SPU or flowd comes up, it tries to
start the cold-sync process with its mirror SPU or flowd on the other node.
If this is the only node in the cluster, the priorities for all the redundancy groups 1+ stay
at 0 until a new node joins the cluster. Although the priority is at 0, the device can still
receive and send traffic over its interfaces. A priority of 0 implies that it cannot fail over
in case of a failure. When a new node joins the cluster, all the SPUs or flowd, as they
come up, will start the cold-sync process with the mirror SPUs or flowd of the existing
node.
When the SPU or flowd of a node that is already up detects the cold-sync request from
the SPU or flowd of the peer node, it posts a message to the system indicating that the
cold-sync process is complete. The SPUs or flowd of the newly joined node posts a similar
message. However, they post this message only after all the RTOs are learned and
cold-sync is complete. On receipt of completion messages from all the SPUs or flowd,
the priority for redundancy groups 1+ moves to the configured priority on each node if
there are no other failures of monitored components, such as interfaces. This action
ensures that the existing primary node for redundancy 1+ groups always moves to the
configured priority first. The node joining the cluster later moves to its configured priorities
only after all its SPUs or flowd have completed their cold-sync process. This action in
turn guarantees that the newly added node is ready with all the RTOs before it takes
over mastership.
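Depending on your Junos OS release, the progress of the cold-sync process described above can typically be checked from operational mode; the exact output fields vary by release:

{primary:node0}
user@host> show chassis cluster information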
If your SRX5600 or SRX5800 Services Gateway is part of a chassis cluster, when you
replace a Services Processing Card (SPC) with a SPC2 on the device, you must fail over
all redundancy groups to one node.
• When the SPC2 is to be installed on a node (for example, on node 1, the secondary
node), node 1 is shut down so the SPC2 can be installed.
• Once node 1 is powered up and rejoins the cluster, the number of SPUs on node 1 will
be higher than the number of SPUs on node 0, the primary node. Now, one node (node
0) still has an old SPC while the other node has the new SPC2; SPC2s have four SPUs
per card, and the older SPCs have two SPUs per card.
The cold-sync process is based on the total number of SPUs on node 0. Once the SPUs
on node 1 that correspond to node 0's SPUs have completed cold-sync, node 1 declares
cold-sync complete. Because the additional SPUs on node 1 have no corresponding
node 0 SPUs, there is nothing to synchronize, and failover from node 0 to node 1 does
not cause any issues.
SPU monitoring functionality monitors all SPUs and reports any SPU failures. For
example, assume that both nodes originally have two SPCs and you have replaced both
SPCs with SPC2s on node 1. Now there are 4 SPUs on node 0 and 8 SPUs on node 1.
The SPU monitoring function monitors the 4 SPUs on node 0 and the 8 SPUs on node 1.
If any of the 8 SPUs on node 1 fails, SPU monitoring reports the failure to the Juniper
Services Redundancy Protocol (jsrpd) process. The jsrpd process controls chassis
clustering.
• Once node 1 is ready for failover, you can manually fail over all redundancy groups
to node 1. Node 0 is then shut down so that its SPC can be replaced with the SPC2.
After the replacement, node 0 and node 1 have exactly the same hardware setup.
Once node 0 is powered up and rejoins the cluster, the system will operate as a normal
chassis cluster.
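The manual failover described above is initiated from operational mode, one redundancy group at a time. A sketch, using redundancy groups 0 and 1 as in a typical two-group cluster (the group numbers are illustrative):

{primary:node0}
user@host> request chassis cluster failover redundancy-group 0 node 1
user@host> request chassis cluster failover redundancy-group 1 node 1

A manual failover leaves the group with a manual-failover flag set; clear it with the request chassis cluster failover reset redundancy-group command before attempting another manual failover of the same group.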
Related Documentation

• Understanding Chassis Cluster Redundancy Group Interface Monitoring on page 177
• Example: Configuring Chassis Cluster Interface Monitoring on page 178
IP Monitoring Overview
IP monitoring allows for failover based upon end-to-end reachability of a configured
monitored IP address. On SRX Series devices, the reachability test is done by sending a
ping to the monitored IP address from both the primary node and the secondary node
through the reth interface and checking if a response is returned. The monitored IP address
can be on a directly connected host in the same subnet as the reth interface or on a
remote device reachable through a next-hop router.
The reachability states of the monitored IP address are reachable, unreachable, and
unknown. The status is “unknown” if Packet Forwarding Engines are not yet up and
running. The status changes to either "reachable" or "unreachable," depending on the
corresponding message from the Packet Forwarding Engine.
Table 23 on page 217 provides details of different combinations of monitored results from
both the primary and secondary nodes, and the corresponding actions by the Juniper
Services Redundancy Protocol (jsrpd) process.
NOTE:
• You can configure up to 64 IP addresses for IP monitoring on SRX5000
line devices.
Table 24 on page 218 provides details on multiple interface combinations of IOC2 and
IOC3 with maximum MAC numbers.
Table 24: Maximum MACs Supported for IP Monitoring on IOC2 and IOC3

Cards                      Interfaces          Maximum MACs Supported for IP Monitoring
IOC2                       20GE                20
                           2X40GE              2
                           1X100GE             1
IOC3                       24x10GE             24
(SRX5K-MPC3-40G10G or      6x40GE              6
SRX5K-MPC3-100G10G)        2x100GE + 4x10GE    6
Note the following limitations for IP monitoring support on SRX5000 line IOC2 and IOC3:
• IP monitoring is supported through the reth or the RLAG interface. If your configuration
does not specify either of these interfaces, the route lookup returns a non-reth/RLAG
interface, which results in a failure report.
Example: Configuring IP Monitoring on SRX5000 Line Devices for IOC2 and IOC3
This example shows how to monitor an IP address on an SRX5000 line device with chassis
cluster enabled.
Requirements
This example uses the following hardware and software:
• Two SRX5400 Services Gateways with MIC (SRX-MIC-10XG-SFPP [IOC2]), and one
Ethernet switch
The procedure in this example also applies to IOC3.
• Physically connect the two SRX5400 devices (back-to-back for the fabric and control
ports).
Overview
IP address monitoring checks end-to-end reachability of the configured IP address and
allows a redundancy group to automatically fail over when it is not reachable through
the child link of the redundant Ethernet (reth) interface. Redundancy groups on both devices,
or nodes, in a cluster can be configured to monitor specific IP addresses to determine
whether an upstream device in the network is reachable.
Topology
In this example, two SRX5400 devices in a chassis cluster are connected to an Ethernet
switch. The example shows how the redundancy groups can be configured to monitor
key upstream resources reachable through redundant Ethernet interfaces on either node
in a cluster.
You set the system to send pings at the configured retry interval, with 10 consecutive
losses required to declare the monitored address unreachable. You also set up a
secondary IP address to allow testing from the secondary node.
In this example, you configure the following settings for redundancy group 1:
• IP monitoring global-weight—255
• IP monitoring global-threshold—240
• IP monitoring retry-interval—3
• IP monitoring retry-count—10
Configuration
CLI Quick Configuration

To quickly configure this example, copy the following commands, paste them into a text
file, remove any line breaks, change any details to match your network configuration,
copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter
commit from configuration mode.
{primary:node0}[edit]
user@host# set chassis cluster reth-count 10
{primary:node0}[edit]
user@host# set chassis cluster control-ports fpc 3 port 0
user@host# set chassis cluster control-ports fpc 0 port 0
{primary:node0}[edit]
4. Specify a redundancy group's priority for primacy on each node of the cluster. The
higher number takes precedence.
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 0 node 0 priority 254
user@host# set chassis cluster redundancy-group 0 node 1 priority 1
user@host# set chassis cluster redundancy-group 1 node 0 priority 200
user@host# set chassis cluster redundancy-group 1 node 1 priority 199
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 ip-monitoring global-weight
255
user@host# set chassis cluster redundancy-group 1 ip-monitoring global-threshold
240
user@host# set chassis cluster redundancy-group 1 ip-monitoring retry-interval 3
user@host# set chassis cluster redundancy-group 1 ip-monitoring retry-count 10
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 ip-monitoring family inet
192.0.2.2 weight 80
user@host# set chassis cluster redundancy-group 1 ip-monitoring family inet
192.0.2.2 interface reth0.0 secondary-ip-address 192.0.2.12
user@host# set chassis cluster redundancy-group 1 ip-monitoring family inet
198.51.100.2 weight 80
user@host# set chassis cluster redundancy-group 1 ip-monitoring family inet
198.51.100.2 interface reth1.0 secondary-ip-address 198.51.100.12
user@host# set chassis cluster redundancy-group 1 ip-monitoring family inet
203.0.113.2 weight 80
user@host# set chassis cluster redundancy-group 1 ip-monitoring family inet
203.0.113.2 interface reth2.0 secondary-ip-address 203.0.113.12
7. Assign child interfaces to the redundant Ethernet interfaces on node 0 and node 1.
{primary:node0}[edit]
user@host# set interfaces xe-1/2/1 gigether-options redundant-parent reth0
user@host# set interfaces xe-1/2/2 gigether-options redundant-parent reth2
user@host# set interfaces xe-1/2/3 gigether-options redundant-parent reth1
user@host# set interfaces xe-4/2/1 gigether-options redundant-parent reth0
user@host# set interfaces xe-4/2/2 gigether-options redundant-parent reth2
user@host# set interfaces xe-4/2/3 gigether-options redundant-parent reth1
{primary:node0}[edit]
user@host# set interfaces reth0 redundant-ether-options redundancy-group 1
user@host# set interfaces reth0 unit 0 family inet address 192.0.2.1/24
user@host# set interfaces reth1 redundant-ether-options redundancy-group 1
user@host# set interfaces reth1 unit 0 family inet address 198.51.100.1/24
user@host# set interfaces reth2 redundant-ether-options redundancy-group 1
user@host# set interfaces reth2 unit 0 family inet address 203.0.113.1/24
Results

From configuration mode, confirm your configuration by entering the show chassis
cluster and show interfaces commands. If the output does not display the intended
configuration, repeat the configuration instructions in this example to correct it.
chassis {
cluster {
reth-count 10;
redundancy-group 0 {
node 0 priority 254;
node 1 priority 1;
}
redundancy-group 1 {
node 0 priority 200;
node 1 priority 199;
ip-monitoring {
global-weight 255;
global-threshold 240;
retry-interval 3;
retry-count 10;
family {
inet {
192.0.2.2 {
weight 80;
interface reth0.0 secondary-ip-address 192.0.2.12;
}
198.51.100.2 {
weight 80;
interface reth1.0 secondary-ip-address 198.51.100.12;
}
203.0.113.2 {
weight 80;
interface reth2.0 secondary-ip-address 203.0.113.12;
}
}
}
}
}
}
}
interfaces {
xe-1/2/1 {
gigether-options {
redundant-parent reth0;
}
}
xe-1/2/2 {
gigether-options {
redundant-parent reth2;
}
}
xe-1/2/3 {
gigether-options {
redundant-parent reth1;
}
}
xe-4/2/1 {
gigether-options {
redundant-parent reth0;
}
}
xe-4/2/2 {
gigether-options {
redundant-parent reth2;
}
}
xe-4/2/3 {
gigether-options {
redundant-parent reth1;
}
}
fab0 {
fabric-options {
member-interfaces {
xe-1/2/0;
}
}
}
fab1 {
fabric-options {
member-interfaces {
xe-4/2/0;
}
}
}
reth0 {
redundant-ether-options {
redundancy-group 1;
}
unit 0 {
family inet {
address 192.0.2.1/24;
}
}
}
reth1 {
redundant-ether-options {
redundancy-group 1;
}
unit 0 {
family inet {
address 198.51.100.1/24;
}
}
}
reth2 {
redundant-ether-options {
redundancy-group 1;
}
unit 0 {
family inet {
address 203.0.113.1/24;
}
}
}
}
If you are done configuring the device, enter commit from configuration mode.
Verification
Confirm the configuration is working properly.
Purpose

Verify the status of the monitored IP addresses and the failure count for both nodes.

Action

From operational mode, enter the show chassis cluster ip-monitoring status command.
node0:
--------------------------------------------------------------------------
Redundancy group: 1
Global weight: 255
Global threshold: 240
Current threshold: 240
node1:
--------------------------------------------------------------------------
Redundancy group: 1
Global weight: 255
Global threshold: 240
Current threshold: 240
Junos OS transmits heartbeat signals over the control link at a configured interval. The
system uses heartbeat transmissions to determine the “health” of the control link. If the
number of missed heartbeats has reached the configured threshold, the system assesses
whether a failure condition exists.
For dual control links, which are supported on SRX5600 and SRX5800 devices, the Juniper
Services Redundancy Protocol process (jsrpd) sends and receives the control heartbeat
messages on both control links. As long as heartbeats are received on one of the control
links, Junos OS considers the other node to be alive.
The product of the heartbeat-threshold option and the heartbeat-interval option defines
the wait time before failover is triggered. The default values of these options produce a
wait time of 3 seconds. A heartbeat-threshold of 5 and a heartbeat-interval of 1000
milliseconds would yield a wait time of 5 seconds. Setting the heartbeat-threshold to 4
and the heartbeat-interval to 1250 milliseconds would also yield a wait time of 5 seconds.
In a chassis cluster environment, if more than 1000 logical interfaces are used, we
recommend increasing the cluster heartbeat timers from the default wait time of
3 seconds. At maximum capacity on an SRX5400, SRX5600, or SRX5800 device, we recommend
that you increase the configured time before failover to at least 5 seconds. In a large
chassis cluster configuration on an SRX3400 or SRX3600 device, we recommend
increasing the wait to 8 seconds.
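For example, the 8-second wait recommended above can be produced by combining the two timers; the values shown are one valid combination (value ranges vary by platform and release):

{primary:node0}[edit]
user@host# set chassis cluster heartbeat-interval 2000
user@host# set chassis cluster heartbeat-threshold 4

With these values, 4 missed heartbeats at 2000-millisecond intervals yield an 8-second wait before the system assesses whether a failure condition exists.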
A control link failure is defined as not receiving heartbeats over the control link while
heartbeats are still being received over the fabric link.
In the event of a legitimate control link failure, redundancy group 0 remains primary on
the node on which it is currently primary, inactive redundancy groups x on the primary
node become active, and the secondary node enters a disabled state.
NOTE: When the secondary node is disabled, you can still log in to the
management port and run diagnostics.
To determine if a legitimate control link failure has occurred, the system relies on
redundant liveliness signals sent across both the control link and the fabric link.
The system periodically transmits probes over the fabric link and heartbeat signals over
the control link. Probes and heartbeat signals share a common sequence number that
maps them to a unique time event. Junos OS identifies a legitimate control link failure if
the following condition exists:
• At least one probe with a sequence number corresponding to that of a missing heartbeat
signal was received on the fabric link.
If the control link fails, the 180-second countdown begins and the secondary node state
is ineligible. If the fabric link fails before the 180-second countdown reaches zero, the
secondary node becomes primary because the loss of both links is interpreted by the
system to indicate that the other node is dead. Because concurrent loss of both control
and fabric links means that the nodes are no longer synchronizing states nor comparing
priorities, both nodes might thus temporarily become primary, which is not a stable
operating state. However, once the control link is reestablished, the node with the higher
priority value automatically becomes primary, the other node becomes secondary, and
the cluster returns to normal operation.
When a legitimate control link failure occurs, the following conditions apply:
• Redundancy group 0 remains primary on the node on which it is currently primary (and
thus its Routing Engine remains active), and all redundancy groups x on the node
become primary.
If the system cannot determine which Routing Engine is primary, the node with the
higher priority value for redundancy group 0 is primary and its Routing Engine is active.
(You configure the priority for each node when you configure the redundancy-group
statement for redundancy group 0.)
To recover a device from the disabled mode, you must reboot the device. When you
reboot the disabled node, the node synchronizes its dynamic state with the primary
node.
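The reboot described above is issued from the disabled node's own CLI, for example through its console or management port:

user@host> request system reboot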
NOTE: If you make any changes to the configuration while the secondary
node is disabled, execute the commit command to synchronize the
configuration after you reboot the node. If you did not make configuration
changes, the configuration file remains synchronized with that of the primary
node.
You cannot enable preemption for redundancy group 0. If you want to change the primary
node for redundancy group 0, you must do a manual failover.
When you use dual control links (supported on SRX5600 and SRX5800 devices), note
the following conditions:
• Host inbound or outbound traffic can be impacted for up to 3 seconds during a control
link failure. For example, consider a case where redundancy group 0 is primary on node
0 and there is a Telnet session to the Routing Engine through a network interface port
on node 1. If the currently active control link fails, the Telnet session will lose packets
for 3 seconds, until this failure is detected.
• A control link failure that occurs while the commit process is running across two nodes
might lead to commit failure. In this situation, run the commit command again after 3
seconds.
NOTE: For SRX5600 and SRX5800 devices, dual control links require a
second Routing Engine on each node of the chassis cluster.
You can specify that control link recovery be done automatically by the system by setting
the control-link-recovery statement. In this case, once the system determines that the
control link is healthy, it issues an automatic reboot on the disabled node. When the
disabled node reboots, the node joins the cluster again.
• Example: Configuring Chassis Cluster Control Ports for Dual Control Links on page 171
This example shows how to enable control link recovery, which allows the system to
automatically take over after the control link recovers from a failure.
Requirements
Before you begin:
• Understand chassis cluster control links. See Understanding Chassis Cluster Control
Plane and Control Links.
• Understand chassis cluster dual control links. See “Understanding Chassis Cluster Dual
Control Links” on page 169.
• Connect dual control links in a chassis cluster. See “Connecting Dual Control Links for
SRX Series Devices in a Chassis Cluster” on page 170.
Overview
You can enable the system to perform control link recovery automatically. After the
control link recovers, the system takes the following actions:
• It checks whether it receives at least 30 consecutive heartbeats on the control link or,
in the case of dual control links (SRX5600 and SRX5800 devices only), on either
control link. This is to ensure that the control link is not flapping and is healthy.
• After it determines that the control link is healthy, the system issues an automatic
reboot on the node that was disabled when the control link failed. When the disabled
node reboots, it can rejoin the cluster. There is no need for any manual intervention.
Configuration
{primary:node0}[edit]
user@host# set chassis cluster control-link-recovery
{primary:node0}[edit]
user@host# commit
• Connecting Dual Control Links for SRX Series Devices in a Chassis Cluster on page 170
• Example: Configuring Chassis Cluster Control Ports for Dual Control Links on page 171
Chassis cluster employs a number of highly efficient failover mechanisms that promote
high availability to increase your system's overall reliability and productivity.
A redundancy group is a collection of objects that fail over as a group. Each redundancy
group monitors a set of objects (physical interfaces), and each monitored object is
assigned a weight. Each redundancy group has an initial threshold of 255. When a
monitored object fails, the weight of the object is subtracted from the threshold value
of the redundancy group. When the threshold value reaches zero, the redundancy group
fails over to the other node. As a result, all the objects associated with the redundancy
group fail over as well. Graceful restart of the routing protocols enables the SRX Series
device to minimize traffic disruption during a failover.
Back-to-back failovers of a redundancy group in a short interval can cause the cluster
to exhibit unpredictable behavior. To prevent such unpredictable behavior, configure a
dampening time between failovers. On failover, the previous primary node of a redundancy
group moves to the secondary-hold state and stays in the secondary-hold state until the
hold-down interval expires. After the hold-down interval expires, the previous primary
node moves to the secondary state. If a failure occurs on the new primary node during
the hold-down interval, the system fails over immediately and overrides the hold-down
interval.
The default dampening time for redundancy group 0 is 300 seconds (5 minutes) and
is configurable to up to 1800 seconds with the hold-down-interval statement. For some
configurations, such as those with a large number of routes or logical interfaces, the
default interval or the user-configured interval might not be sufficient. In such cases, the
system automatically extends the dampening time in increments of 60 seconds until
the system is ready for failover.
The hold-down interval affects manual failovers, as well as automatic failovers associated
with monitoring failures.
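For redundancy groups other than 0, the same statement applies; a minimal sketch, assuming redundancy group 1 and an illustrative 300-second interval:
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 hold-down-interval 300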
On SRX Series devices, chassis cluster failover performance is optimized to scale with
more logical interfaces. Previously, during a redundancy group failover, a gratuitous
ARP (GARP) message was sent by the Juniper Services Redundancy Protocol (jsrpd) process
running in the Routing Engine on each logical interface to steer the traffic to the
appropriate node. With logical interface scaling, the Routing Engine becomes the
checkpoint and GARP messages are sent directly from the Services Processing Unit (SPU).
You can enable the preemptive behavior on both nodes in a redundancy group and assign
a priority value for each node in the redundancy group. The node in the redundancy group
with the higher configured priority is initially designated as the primary in the group, and
the other node is initially designated as the secondary in the redundancy group.
When a redundancy group swaps the state of its nodes between primary and secondary,
there is a possibility that a subsequent state swap of its nodes can happen again soon
after the first state swap. This rapid change in states results in flapping of the primary
and secondary systems.
Starting with Junos OS Release 17.4R1, a failover delay timer is introduced on SRX Series
devices in a chassis cluster to limit the flapping of redundancy group state between the
secondary and the primary nodes in a preemptive failover.
• Preemptive delay: The amount of time a redundancy group in the secondary state
waits, when the primary is down in a preemptive failover, before switching to the
primary state. This timer delays the immediate failover for a configured period of
1 through 21,600 seconds.
• Preemptive period: The time period (1 through 1440 seconds) during which the
preemptive limit is applied; that is, the configured number of preemptive failovers
is allowed while preempt is enabled for a redundancy group.
Consider the following scenario where you have configured a preemptive period as 300
seconds and preemptive limit as 50.
When the preemptive limit is configured as 50, the count starts at 0 and increments
with each preemptive failover; this process continues until the count reaches the
configured preemptive limit of 50 before the preemptive period expires. When the
preemptive limit (50) is exceeded, you must manually reset the preempt count to allow
preemptive failovers to occur again.
When you have configured the preemptive period as 300 seconds, and the time
difference between the first preemptive failover and the current failover has already
exceeded 300 seconds while the preemptive limit (50) has not yet been reached, the
preemptive period is reset. After the reset, the last failover is considered the first
preemptive failover of the new preemptive period, and the process starts over.
This enhancement enables the administrator to introduce a failover delay, which can
reduce the number of failovers and result in a more stable network state by reducing
active/standby flapping within the redundancy group.
Consider the following example, where a redundancy group that is primary on node
0 is ready for preemptive transition to the secondary state during a failover. A priority
is assigned to each node, and the preemptive option is enabled for the nodes.
Figure 47 on page 236 illustrates the sequence of steps in transition from the primary state
to the secondary state when a preemptive delay timer is configured.
1. The node in the primary state is ready for preemptive transition to the secondary
state if the preemptive option is configured and the node in the secondary state has
priority over the node in the primary state. If a preemptive delay is configured, the
node in the primary state transitions to the primary-preempt-hold state. If a
preemptive delay is not configured, the node transitions instantly to the secondary state.
2. The node in the primary-preempt-hold state waits for the preemptive delay timer
to expire. The transition is held, and the node stays in the primary-preempt-hold
state, until the timer expires; it then transitions toward the secondary state.
3. The node transitions from primary-preempt-hold state into secondary-hold state and
then to the secondary state.
4. The node stays in the secondary-hold state for the default time (1 second) or the
configured time (a minimum of 300 seconds), and then the node transitions to the
secondary state.
This topic explains how to configure the delay timer on SRX Series devices in a chassis
cluster. Back-to-back redundancy group failovers that occur too quickly can cause a
chassis cluster to exhibit unpredictable behavior. Configuring the delay timer and failover
rate limit delays immediate failover for a configured period of time.
To configure the preemptive delay timer and failover rate limit between redundancy
group failovers:
You can set the delay timer from 1 through 21,600 seconds. The default value is 1 second.
{primary:node1}
[edit chassis cluster redundancy-group number preempt]
user@host# set delay interval
You can set the maximum number of preemptive failovers from 1 through 50 and the
time period during which the limit is applied from 1 through 1440 seconds.
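The corresponding statement is not shown above; a hedged sketch, assuming the limit and period are set together under the same preempt hierarchy as the delay:
{primary:node1}
[edit chassis cluster redundancy-group number preempt]
user@host# set limit number period interval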
In the following example, you set the preemptive delay timer to 300 seconds and the
preemptive limit to 10 for a preemptive period of 600 seconds. That is, this
configuration delays immediate failover for 300 seconds and allows a maximum of 10
preemptive failovers in a duration of 600 seconds.
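As a one-line sketch of that configuration (the redundancy group number is illustrative):
{primary:node1}[edit]
user@host# set chassis cluster redundancy-group 1 preempt delay 300 limit 10 period 600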
You can use the clear chassis cluster preempt-count command to clear the preempt
failover counter for all redundancy groups. When a preempt rate limit is configured,
the counter starts with the first preemptive failover and increments with each
subsequent one; this process continues until the count reaches the configured limit
before the timer expires. You can use this command to clear the preempt failover
counter and reset it so that counting starts again.
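A sketch of the command from operational mode:
{primary:node0}
user@host> clear chassis cluster preempt-count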
17.4R1 Starting with Junos OS Release 17.4R1, a failover delay timer is introduced on
SRX Series devices in a chassis cluster to limit the flapping of redundancy
group state between the secondary and the primary nodes in a preemptive
failover.
• Understanding SNMP Failover Traps for Chassis Cluster Redundancy Group Failover
on page 241
This example shows how to configure the dampening time between back-to-back
redundancy group failovers for a chassis cluster. Back-to-back redundancy group failovers
that occur too quickly can cause a chassis cluster to exhibit unpredictable behavior.
Requirements
Before you begin:
Overview
The dampening time is the minimum interval allowed between back-to-back failovers
for a redundancy group. This interval affects manual failovers and automatic failovers
caused by interface monitoring failures.
In this example, you set the minimum interval allowed between back-to-back failovers
to 420 seconds for redundancy group 0.
Configuration
Step-by-Step Procedure
To configure the dampening time between back-to-back redundancy group failovers:
1. Set the dampening time for the redundancy group.
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 0 hold-down-interval 420
{primary:node0}[edit]
user@host# commit
• Understanding SNMP Failover Traps for Chassis Cluster Redundancy Group Failover
on page 241
You can initiate a redundancy group x (redundancy groups numbered 1 through 128)
failover manually. A manual failover applies until a failback event occurs.
For example, suppose that you manually fail over redundancy group 1 from node 0
to node 1. Then an interface that redundancy group 1 is monitoring fails, dropping the
threshold value of redundancy group 1 on the new primary node to zero. This event is
considered a failback event, and the system returns control to the original node.
You can also initiate a redundancy group 0 failover manually if you want to change the
primary node for redundancy group 0. You cannot enable preemption for redundancy
group 0.
When you do a manual failover for redundancy group 0, the node in the primary state
transitions to the secondary-hold state. The node stays in the secondary-hold state for
the default or configured time (a minimum of 300 seconds) and then transitions to the
secondary state.
State transitions in cases where one node is in the secondary-hold state and the other
node reboots, or the control link connection or fabric link connection is lost to that node,
are described as follows:
• Reboot case—The node in the secondary-hold state transitions to the primary state;
the other node goes dead (inactive).
• Control link failure case—The node in the secondary-hold state transitions to the
ineligible state and then to a disabled state; the other node transitions to the primary
state.
• Fabric link failure case—The node in the secondary-hold state transitions directly to
the ineligible state.
Keep in mind that during an in-service software upgrade (ISSU), the transitions described
here cannot happen. Instead, the other (primary) node transitions directly to the
secondary state, because Juniper Networks releases earlier than 10.0 do not interpret
the secondary-hold state. When you start an ISSU, if one of the nodes has one or more
redundancy groups in the secondary-hold state, you must wait for them to move to the
secondary state before you can perform manual failovers to make all the redundancy
groups primary on one node.
12.1X47-D10 Starting with Junos OS Release 12.1X47-D10 and Junos OS Release 17.3R1,
fabric monitoring is enabled by default. With this enabling, the node
transitions directly to the ineligible state in case of fabric link failures.
12.1X46-D20 Starting with Junos OS Release 12.1X46-D20 and Junos OS Release 17.3R1,
fabric monitoring is enabled by default. With this enabling, the node
transitions directly to the ineligible state in case of fabric link failures.
• Understanding SNMP Failover Traps for Chassis Cluster Redundancy Group Failover
on page 241
Understanding SNMP Failover Traps for Chassis Cluster Redundancy Group Failover
Chassis clustering supports SNMP traps, which are triggered whenever there is a
redundancy group failover.
The trap message can help you troubleshoot failovers. It contains the following
information:
A redundancy group can be in one of the following states at any given instant: hold,
primary, secondary-hold, secondary, ineligible, and disabled. Traps are generated for
the following state transitions (only a transition from the hold state does not trigger
a trap):
A transition can be triggered because of any event, such as interface monitoring, SPU
monitoring, failures, and manual failovers.
The trap is forwarded over the control link if the outgoing interface is on a node
different from the node whose Routing Engine generates the trap.
You can specify that a trace log be generated by setting the traceoptions flag snmp
statement.
Related Documentation
• Understanding Chassis Cluster Redundancy Group Manual Failover on page 239
• Initiating a Chassis Cluster Manual Redundancy Group Failover on page 242
You can initiate a failover manually with the request command. A manual failover bumps
up the priority of the redundancy group for that member to 255.
• Example: Configuring Chassis Cluster Redundant Ethernet Interfaces for IPv4 and IPv6
Addresses on page 133
WARNING: Unplugging the power cord and holding the power button to
initiate a chassis cluster redundancy group failover might result in
unpredictable behavior.
Use the show command to display the status of nodes in the cluster:
{primary:node0}
user@host> show chassis cluster status redundancy-group 0
Cluster ID: 9
Node Priority Status Preempt Manual failover
Use the request command to trigger a failover and make node 1 primary:
{primary:node0}
user@host> request chassis cluster failover redundancy-group 0 node 1
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Initiated manual failover for redundancy group 0
Use the show command to display the new status of nodes in the cluster:
{secondary-hold:node0}
user@host> show chassis cluster status redundancy-group 0
Cluster ID: 9
Node Priority Status Preempt Manual failover
The output of this command shows that node 1 is now primary and node 0 is in the
secondary-hold state. After 5 minutes, node 0 transitions to the secondary state.
You can reset the failover for redundancy groups by using the request command. This
change is propagated across the cluster.
{secondary-hold:node0}
user@host> request chassis cluster failover reset redundancy-group 0
node0:
--------------------------------------------------------------------------
No reset required for redundancy group 0.
node1:
--------------------------------------------------------------------------
Successfully reset manual failover for redundancy group 0
You cannot trigger a back-to-back failover until the 5-minute interval expires.
{secondary-hold:node0}
user@host> request chassis cluster failover redundancy-group 0 node 0
node0:
--------------------------------------------------------------------------
Manual failover is not permitted as redundancy-group 0 on node0 is in
secondary-hold state.
Use the show command to display the new status of nodes in the cluster:
{secondary-hold:node0}
user@host> show chassis cluster status redundancy-group 0
Cluster ID: 9
Node Priority Status Preempt Manual failover
The output of this command shows that a back-to-back failover has not occurred for
either node.
After doing a manual failover, you must issue the reset failover command before requesting
another failover.
When the primary node fails and comes back up, election of the primary node is done
based on regular criteria (priority and preempt).
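As a sketch of those regular criteria, node priority and preemption for a redundancy group are configured as follows (the group number and priority values are illustrative):
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 node 0 priority 200
user@host# set chassis cluster redundancy-group 1 node 1 priority 100
user@host# set chassis cluster redundancy-group 1 preempt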
Related Documentation
• Understanding Chassis Cluster Redundancy Group Manual Failover on page 239
• Example: Configuring a Chassis Cluster with a Dampening Time Between Back-to-Back
Redundancy Group Failovers on page 238
• Understanding SNMP Failover Traps for Chassis Cluster Redundancy Group Failover
on page 241
Action
From the CLI, enter the show chassis cluster status command:
{primary:node1}
user@host> show chassis cluster status
Cluster ID: 3
Node name Priority Status Preempt Manual failover
{primary:node1}
user@host> show chassis cluster status
Cluster ID: 15
Node Priority Status Preempt Manual failover
{primary:node1}
user@host> show chassis cluster status
Cluster ID: 15
Node Priority Status Preempt Manual failover
Related Documentation
• Initiating a Chassis Cluster Manual Redundancy Group Failover on page 242
• Example: Configuring the Number of Redundant Ethernet Interfaces in a Chassis Cluster
on page 138
To clear the failover status of a chassis cluster, enter the clear chassis cluster failover-count
command from the CLI:
{primary:node1}
user@host> clear chassis cluster failover-count
Cleared failover-count for all redundancy-groups
Related Documentation
• Initiating a Chassis Cluster Manual Redundancy Group Failover on page 242
• Example: Configuring the Number of Redundant Ethernet Interfaces in a Chassis Cluster
on page 138
You can connect two fabric links between each device in a cluster, which provides a
redundant fabric link between the members of a cluster. Having two fabric links helps to
avoid a possible single point of failure.
When you use dual fabric links, the RTOs and probes are sent on one link and the
fabric-forwarded and flow-forwarded packets are sent on the other link. If one fabric link
fails, the other fabric link handles the RTOs and probes, as well as the data forwarding.
The system selects the physical interface with the lowest slot, PIC, or port number on
each node for the RTOs and probes.
For all SRX Series devices, you can connect two fabric links between two devices,
effectively reducing the chance of a fabric link failure.
In most SRX Series devices in a chassis cluster, you can configure any pair of Gigabit
Ethernet interfaces or any pair of 10-Gigabit interfaces to serve as the fabric between
nodes.
For dual fabric links, both child interfaces should be of the same type. For
example, both should be Gigabit Ethernet interfaces or 10-Gigabit Ethernet interfaces.
Example: Configuring the Chassis Cluster Dual Fabric Links with Matching Slots and
Ports
This example shows how to configure the chassis cluster fabric with dual fabric links
with matching slots and ports. The fabric is the back-to-back data connection between
the nodes in a cluster. Traffic on one node that needs to be processed on the other node
or to exit through an interface on the other node passes over the fabric. Session state
information also passes over the fabric.
Requirements
Before you begin, set the chassis cluster ID and chassis cluster node ID. See “Example:
Setting the Chassis Cluster Node ID and Cluster ID” on page 92.
Overview
In most SRX Series devices in a chassis cluster, you can configure any pair of Gigabit
Ethernet interfaces or any pair of 10-Gigabit interfaces to serve as the fabric between
nodes.
You cannot configure filters, policies, or services on the fabric interface. Fragmentation
is not supported on the fabric link. The MTU size is 8980 bytes. We recommend that no
interface in the cluster exceed this MTU size. Jumbo frame support on the member links
is enabled by default.
This example illustrates how to configure the fabric link with dual fabric links with
matching slots and ports on each node.
A typical configuration is where the dual fabric links are formed with matching slots/ports
on each node. That is, ge-3/0/0 on node 0 and ge-10/0/0 on node 1 match, as do ge-0/0/0
on node 0 and ge-7/0/0 on node 1 (the FPC slot offset is 7).
Only the same type of interfaces can be configured as fabric children, and you must
configure an equal number of child links for fab0 and fab1.
NOTE: If you are connecting each of the fabric links through a switch, you
must enable the jumbo frame feature on the corresponding switch ports. If
both of the fabric links are connected through the same switch, the
RTO-and-probes pair must be in one virtual LAN (VLAN) and the data pair
must be in another VLAN. Here, too, the jumbo frame feature must be enabled
on the corresponding switch ports.
Configuration
CLI Quick Configuration
To quickly configure this example, copy the following commands, paste them into a text
file, remove any line breaks, change any details necessary to match your network
configuration, copy and paste the commands into the CLI at the [edit] hierarchy level,
and then enter commit from configuration mode.
{primary:node0}[edit]
set interfaces fab0 fabric-options member-interfaces ge-0/0/0
set interfaces fab0 fabric-options member-interfaces ge-3/0/0
set interfaces fab1 fabric-options member-interfaces ge-7/0/0
set interfaces fab1 fabric-options member-interfaces ge-10/0/0
Step-by-Step Procedure
To configure the chassis cluster fabric with dual fabric links with matching slots and
ports on each node:
{primary:node0}[edit]
user@host# set interfaces fab0 fabric-options member-interfaces ge-0/0/0
user@host# set interfaces fab0 fabric-options member-interfaces ge-3/0/0
user@host# set interfaces fab1 fabric-options member-interfaces ge-7/0/0
user@host# set interfaces fab1 fabric-options member-interfaces ge-10/0/0
Results
From configuration mode, confirm your configuration by entering the show interfaces
command. If the output does not display the intended configuration, repeat the
configuration instructions in this example to correct it.
For brevity, this show command output includes only the configuration that is relevant
to this example. Any other configuration on the system has been replaced with ellipses
(...).
{primary:node0}[edit]
user@host# show interfaces
...
fab0 {
fabric-options {
member-interfaces {
ge-0/0/0;
ge-3/0/0;
}
}
}
fab1 {
fabric-options {
member-interfaces {
ge-7/0/0;
ge-10/0/0;
}
}
}
If you are done configuring the device, enter commit from configuration mode.
Verification
Action
From operational mode, enter the show interfaces terse | match fab command.
{primary:node0}
Example: Configuring Chassis Cluster Dual Fabric Links with Different Slots and Ports
This example shows how to configure the chassis cluster fabric with dual fabric links
with different slots and ports. The fabric is the back-to-back data connection between
the nodes in a cluster. Traffic on one node that needs to be processed on the other node
or to exit through an interface on the other node passes over the fabric. Session state
information also passes over the fabric.
Requirements
Before you begin, set the chassis cluster ID and chassis cluster node ID. See “Example:
Setting the Chassis Cluster Node ID and Cluster ID” on page 92.
Overview
In most SRX Series devices in a chassis cluster, you can configure any pair of Gigabit
Ethernet interfaces or any pair of 10-Gigabit interfaces to serve as the fabric between
nodes.
You cannot configure filters, policies, or services on the fabric interface. Fragmentation
is not supported on the fabric link.
The maximum transmission unit (MTU) size supported is 9014. We recommend that no
interface in the cluster exceed this MTU size. Jumbo frame support on the member links
is enabled by default.
This example illustrates how to configure the fabric link with dual fabric links with different
slots and ports on each node.
Make sure you physically connect the RTO-and-probes link to the RTO-and-probes link
on the other node. Likewise, make sure you physically connect the data link to the data
link on the other node.
• The node 0 RTO-and-probes link ge-2/1/9 to the node 1 RTO-and-probes link ge-11/0/0
• The node 0 data link ge-2/2/5 to the node 1 data link ge-11/3/0
Only the same type of interfaces can be configured as fabric children, and you must
configure an equal number of child links for fab0 and fab1.
NOTE: If you are connecting each of the fabric links through a switch, you
must enable the jumbo frame feature on the corresponding switch ports. If
both of the fabric links are connected through the same switch, the
RTO-and-probes pair must be in one virtual LAN (VLAN) and the data pair
must be in another VLAN. Here too, the jumbo frame feature must be enabled
on the corresponding switch ports.
Configuration
CLI Quick Configuration
To quickly configure this example, copy the following commands, paste them into a text
file, remove any line breaks, change any details necessary to match your network
configuration, copy and paste the commands into the CLI at the [edit] hierarchy level,
and then enter commit from configuration mode.
{primary:node0}[edit]
set interfaces fab0 fabric-options member-interfaces ge-2/1/9
set interfaces fab0 fabric-options member-interfaces ge-2/2/5
set interfaces fab1 fabric-options member-interfaces ge-11/0/0
set interfaces fab1 fabric-options member-interfaces ge-11/3/0
Step-by-Step Procedure
To configure the chassis cluster fabric with dual fabric links with different slots and
ports on each node:
{primary:node0}[edit]
user@host# set interfaces fab0 fabric-options member-interfaces ge-2/1/9
user@host# set interfaces fab0 fabric-options member-interfaces ge-2/2/5
user@host# set interfaces fab1 fabric-options member-interfaces ge-11/0/0
user@host# set interfaces fab1 fabric-options member-interfaces ge-11/3/0
Results
From configuration mode, confirm your configuration by entering the show interfaces
command. If the output does not display the intended configuration, repeat the
configuration instructions in this example to correct it.
For brevity, this show command output includes only the configuration that is relevant
to this example. Any other configuration on the system has been replaced with ellipses
(...).
{primary:node0}[edit]
user@host# show interfaces
...
fab0 {
fabric-options {
member-interfaces {
ge-2/1/9;
ge-2/2/5;
}
}
}
fab1 {
fabric-options {
member-interfaces {
ge-11/0/0;
ge-11/3/0;
}
}
}
If you are done configuring the device, enter commit from configuration mode.
Verification
Action
From operational mode, enter the show interfaces terse | match fab command.
{primary:node0}
The goal of conditional route advertisement in a chassis cluster is to ensure that incoming
traffic from the upstream network arrives on the node that is on the currently active
redundant Ethernet interface. To understand how this works, keep in mind that in a
chassis cluster, each node has its own set of interfaces. Figure 48 on page 256 shows a
typical scenario, with a redundant Ethernet interface connecting the corporate LAN,
through a chassis cluster, to an external network segment.
Related Documentation
• Example: Configuring Conditional Route Advertising in a Chassis Cluster on page 256
• Verifying a Chassis Cluster Configuration on page 163
This example shows how to configure conditional route advertising in a chassis cluster
to ensure that incoming traffic from the upstream network arrives on the node that is on
the currently active redundant Ethernet interface.
Requirements
Before you begin, understand conditional route advertising in a chassis cluster. See
“Understanding Conditional Route Advertising in a Chassis Cluster” on page 255.
Overview
As illustrated in Figure 49 on page 258, routing prefixes learned from the redundant Ethernet
interface through the IGP are advertised toward the network core using BGP. Two BGP
sessions are maintained, one from interface t1-1/0/0 and one from t1-1/0/1 for BGP
multihoming. All routing prefixes are advertised on both sessions. Thus, for a route
advertised by BGP, learned over a redundant Ethernet interface, if the active redundant
Ethernet interface is on the same node as the BGP session, you advertise the route with
a “good” BGP attribute.
To achieve this behavior, you apply a policy to BGP before exporting routes. An additional
term in the policy match condition determines the current active redundant Ethernet
interface child interface of the next hop before making the routing decision. When the
active status of a child redundant Ethernet interface changes, BGP reevaluates the export
policy for all routes affected.
The condition statement in this configuration works as follows. The command states
that any routes evaluated against this condition will pass only if:
• The current child interface of the redundant Ethernet interface is active at node 0 (as
specified by the route-active-on node0 keyword).
{primary:node0}[edit]
user@host# set policy-options condition reth-nh-active-on-0 route-active-on node0
Note that a route might have multiple equal-cost next hops, and those next hops might
be redundant Ethernet interfaces, regular interfaces, or a combination of both. The route
still satisfies the requirement that it has a redundant Ethernet interface as its next hop.
If you use the BGP export policy set for node 0 in the previous example command, only
OSPF routes that satisfy the following requirements will be advertised through the session:
• The OSPF routes have a redundant Ethernet interface as their next hop.
• The current child interface of the redundant Ethernet interface is currently active at
node 0.
You must also create and apply a separate policy statement for the other BGP session
by using this same process.
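A hedged sketch of that separate condition and of applying both policies as BGP export policies, assuming a mirror policy statement named reth-nh-active-on-1 and hypothetical BGP group names ext-0 and ext-1:
{primary:node0}[edit]
user@host# set policy-options condition reth-nh-active-on-1 route-active-on node1
user@host# set protocols bgp group ext-0 export reth-nh-active-on-0
user@host# set protocols bgp group ext-1 export reth-nh-active-on-1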
In addition to the BGP MED attribute, you can define additional BGP attributes, such as
origin-code, as-path, and community.
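For instance, a community attribute could be attached in the same then clause; a sketch with a hypothetical community name and value:
{primary:node0}[edit]
user@host# set policy-options community reth-0-community members 65000:100
user@host# set policy-options policy-statement reth-nh-active-on-0 term ospf-on-0 then
community add reth-0-community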
Configuration
CLI Quick Configuration
To quickly configure this example, copy the following commands, paste them into a text
file, remove any line breaks, change any details necessary to match your network
configuration, copy and paste the commands into the CLI at the [edit] hierarchy level,
and then enter commit from configuration mode.
{primary:node0}[edit]
set policy-options policy-statement reth-nh-active-on-0 term ospf-on-0 from protocol
ospf
set policy-options policy-statement reth-nh-active-on-0 term ospf-on-0 from condition
reth-nh-active-on-0
set policy-options policy-statement reth-nh-active-on-0 term ospf-on-0 then metric 10
set policy-options policy-statement reth-nh-active-on-0 term ospf-on-0 then accept
set policy-options condition reth-nh-active-on-0 route-active-on node0
{primary:node0}[edit]
user@host# set policy-options policy-statement reth-nh-active-on-0 term ospf-on-0
from protocol ospf
{primary:node0}[edit]
user@host# set policy-options policy-statement reth-nh-active-on-0 term ospf-on-0
from condition reth-nh-active-on-0
{primary:node0}[edit]
user@host# set policy-options policy-statement reth-nh-active-on-0 term ospf-on-0
then metric 10
{primary:node0}[edit]
user@host# set policy-options policy-statement reth-nh-active-on-0 term ospf-on-0
then accept
{primary:node0}[edit]
user@host# set policy-options condition reth-nh-active-on-0 route-active-on node0
Results
From configuration mode, confirm your configuration by entering the show policy-options
command. If the output does not display the intended configuration, repeat the
configuration instructions in this example to correct it.
{primary:node0}[edit]
user@host# show policy-options
policy-statement reth-nh-active-on-0 {
term ospf-on-0 {
from {
protocol ospf;
condition reth-nh-active-on-0;
}
then {
metric 10;
accept;
}
}
}
condition reth-nh-active-on-0 route-active-on node0;
If you are done configuring the device, enter commit from configuration mode.
Support for Ethernet link aggregation groups (LAGs) based on IEEE 802.3ad makes it
possible to aggregate physical interfaces on a standalone device. LAGs on standalone
devices provide increased interface bandwidth and link availability. Aggregation of links
in a chassis cluster allows a redundant Ethernet interface to add more than two physical
child interfaces thereby creating a redundant Ethernet interface LAG. A redundant Ethernet
interface LAG can have up to eight links per redundant Ethernet interface per node (for
a total of 16 links per redundant Ethernet interface).
The aggregated links in a redundant Ethernet interface LAG provide the same bandwidth
and redundancy benefits of a LAG on a standalone device with the added advantage of
chassis cluster redundancy. A redundant Ethernet interface LAG has two types of
simultaneous redundancy. The aggregated links within the redundant Ethernet interface
on each node are redundant; if one link in the primary aggregate fails, its traffic load is
taken up by the remaining links. If enough child links on the primary node fail, the redundant
Ethernet interface LAG can be configured so that all traffic on the entire redundant
Ethernet interface fails over to the aggregate link on the other node. You can also configure
interface monitoring for LACP-enabled redundancy group reth child links for added
protection.
Aggregated Ethernet interfaces, known as local LAGs, are also supported on either node
of a chassis cluster but cannot be added to redundant Ethernet interfaces. Local LAGs
are indicated in the system interfaces list using an ae- prefix. Likewise any child interface
of an existing local LAG cannot be added to a redundant Ethernet interface and vice
versa. Note that it is necessary for the switch (or switches) used to connect the nodes
in the cluster to have a LAG link configured and 802.3ad enabled for each LAG on both
nodes so that the aggregate links are recognized as such and correctly pass traffic. The
total maximum number of combined individual node LAG interfaces (ae) and redundant
Ethernet (reth) interfaces per cluster is 128.
NOTE: The redundant Ethernet interface LAG child links from each node in
the chassis cluster must be connected to a different LAG at the peer devices.
If a single peer switch is used to terminate the redundant Ethernet interface
LAG, two separate LAGs must be used in the switch.
Links from different PICs or IOCs and using different cable types (for example, copper
and fiber-optic) can be added to the same redundant Ethernet interface LAG but the
speed of the interfaces must be the same and all interfaces must be in full duplex mode.
To reduce traffic processing overhead, however, we recommend using interfaces from the
same PIC or IOC whenever feasible. Regardless, all interfaces
configured in a redundant Ethernet interface LAG share the same virtual MAC address.
• Layer 2 transparent mode and Layer 2 security features are supported in redundant
Ethernet interface LAGs.
• Network processor (NP) bundling can coexist with redundant Ethernet interface LAGs
on the same cluster. However, assigning an interface simultaneously to a redundant
Ethernet interface LAG and a network processor bundle is not supported.
NOTE: IOC2 cards do not have network processors, but IOC1 cards do.
• Single flow throughput is limited to the speed of a single physical link regardless of the
speed of the aggregate interface.
NOTE: For more information about Ethernet interface link aggregation and
LACP, see the “Aggregated Ethernet” information in the Interfaces Feature
Guide for Security Devices.
This example shows how to configure a redundant Ethernet interface link aggregation
group for a chassis cluster. Chassis cluster configuration supports more than one child
interface per node in a redundant Ethernet interface. When at least two physical child
interface links from each node are included in a redundant Ethernet interface configuration,
the interfaces are combined within the redundant Ethernet interface to form a redundant
Ethernet interface link aggregation group.
Requirements
Before you begin:
• Understand chassis cluster redundant Ethernet interface link aggregation groups. See
“Understanding Chassis Cluster Redundant Ethernet Interface Link Aggregation Groups”
on page 261.
Overview
NOTE: For aggregation to take place, the switch used to connect the nodes
in the cluster must enable IEEE 802.3ad link aggregation for the redundant
Ethernet interface physical child links on each node. Because most switches
support IEEE 802.3ad and are also LACP capable, we recommend that you
enable LACP on SRX Series devices. In cases where LACP is not available on
the switch, you must not enable LACP on SRX Series devices.
In this example, you assign six Ethernet interfaces to reth1 to form a redundant Ethernet
interface link aggregation group:
• ge-1/0/1—reth1
• ge-1/0/2—reth1
• ge-1/0/3—reth1
• ge-12/0/1—reth1
• ge-12/0/2—reth1
• ge-12/0/3—reth1
Configuration
CLI Quick To quickly configure this example, copy the following commands, paste them into a text
Configuration file, remove any line breaks, change any details necessary to match your network
configuration, copy and paste the commands into the CLI at the [edit] hierarchy level,
and then enter commit from configuration mode.
{primary:node0}[edit]
set interfaces ge-1/0/1 gigether-options redundant-parent reth1
set interfaces ge-1/0/2 gigether-options redundant-parent reth1
set interfaces ge-1/0/3 gigether-options redundant-parent reth1
set interfaces ge-12/0/1 gigether-options redundant-parent reth1
set interfaces ge-12/0/2 gigether-options redundant-parent reth1
set interfaces ge-12/0/3 gigether-options redundant-parent reth1
{primary:node0}[edit]
user@host# set interfaces ge-1/0/1 gigether-options redundant-parent reth1
user@host# set interfaces ge-1/0/2 gigether-options redundant-parent reth1
user@host# set interfaces ge-1/0/3 gigether-options redundant-parent reth1
user@host# set interfaces ge-12/0/1 gigether-options redundant-parent reth1
user@host# set interfaces ge-12/0/2 gigether-options redundant-parent reth1
user@host# set interfaces ge-12/0/3 gigether-options redundant-parent reth1
Results From configuration mode, confirm your configuration by entering the show interfaces
reth1 command. If the output does not display the intended configuration, repeat the
configuration instructions in this example to correct it.
For brevity, this show command output includes only the configuration that is relevant
to this example. Any other configuration on the system has been replaced with ellipses
(...).
ge-12/0/1 {
gigether-options {
redundant-parent reth1;
}
}
ge-12/0/2 {
gigether-options {
redundant-parent reth1;
}
}
ge-12/0/3 {
gigether-options {
redundant-parent reth1;
}
}
...
If you are done configuring the device, enter commit from configuration mode.
Verification
Action From operational mode, enter the show interfaces terse | match reth command.
{primary:node0}
Related • Understanding Chassis Cluster Redundant Ethernet Interface Link Aggregation Groups
Documentation on page 261
• Understanding Chassis Cluster Redundant Ethernet Interface LAG Failover on page 267
Consider a reth0 interface LAG with four underlying physical links and the minimum-links
value set as 2. In this case, a failover is triggered only when the number of active physical
links is less than 2.
NOTE:
• Interface-monitor and minimum-links values are used to monitor LAG link
status and correctly calculate failover weight.
• The minimum-links value determines the redundant Ethernet interface link status.
However, to trigger a failover, interface-monitor must be configured.
{primary:node0}[edit]
user@host# set interfaces ge-0/0/4 gigether-options redundant-parent reth0
user@host# set interfaces ge-0/0/5 gigether-options redundant-parent reth0
user@host# set interfaces ge-0/0/6 gigether-options redundant-parent reth0
user@host# set interfaces ge-0/0/7 gigether-options redundant-parent reth0
Specify the minimum number of links for the redundant Ethernet interface as 2.
{primary:node0}[edit]
user@host# set interfaces reth0 redundant-ether-options minimum-links 2
Set up interface monitoring to monitor the health of the interfaces and trigger redundancy
group failover.
The following scenarios provide examples of how to monitor redundant Ethernet LAG
failover:
In this case, although three physical links are still active and the redundant Ethernet
LAG could have handled the traffic given the configured minimum-links value, one physical
link is down, which triggers a failover based on the computed weight.
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-0/0/4 weight 255
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-0/0/5 weight 255
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-0/0/6 weight 255
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-0/0/7 weight 255
In this case, when three physical links are down, the redundant Ethernet interface will go
down due to minimum-links configured. However, the failover will not happen, which in
turn will result in traffic outage.
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-0/0/4 weight 75
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-0/0/5 weight 75
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-0/0/6 weight 75
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-0/0/7 weight 75
In this case, when the three physical links are down, the redundant Ethernet interface
will go down because of the minimum-links value. However, at the same time a failover
would be triggered because of interface monitoring computed weights, ensuring that
there is no traffic disruption.
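A sketch of interface-monitor weights that produce this behavior, assuming the default redundancy group failover threshold of 255: three failed links contribute 3 × 100 = 300, which exceeds the threshold and triggers the failover.

```
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-0/0/4 weight 100
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-0/0/5 weight 100
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-0/0/6 weight 100
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-0/0/7 weight 100
```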
Of the three scenarios, scenario 3 illustrates the best way to manage redundant
Ethernet LAG failover.
Related • Understanding Chassis Cluster Redundant Ethernet Interface Link Aggregation Groups
Documentation on page 261
You can combine multiple physical Ethernet ports to form a logical point-to-point link,
known as a link aggregation group (LAG) or bundle, such that a media access control
(MAC) client can treat the LAG as if it were a single link.
LAGs can be established across nodes in a chassis cluster to provide increased interface
bandwidth and link availability.
The Link Aggregation Control Protocol (LACP) provides additional functionality for LAGs.
LACP is supported in standalone deployments, where aggregated Ethernet interfaces
are supported, and in chassis cluster deployments, where aggregated Ethernet interfaces
and redundant Ethernet interfaces are supported simultaneously.
You configure LACP on a redundant Ethernet interface by setting the LACP mode for the
parent link with the lacp statement. The LACP mode can be off (the default), active, or
passive.
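For example, assuming a redundant Ethernet interface named reth0 already exists, a minimal sketch of setting the LACP mode on the parent link is:

```
{primary:node0}[edit]
user@host# set interfaces reth0 redundant-ether-options lacp active
```

Setting the mode to passive instead causes the interface to send LACP PDUs only in response to PDUs received from the peer.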
• Chassis Cluster Redundant Ethernet Interface Link Aggregation Groups on page 269
• Sub-LAGs on page 270
• Supporting Hitless Failover on page 271
• Managing Link Aggregation Control PDUs on page 271
When at least two physical child interface links from each node are included in a redundant
Ethernet interface configuration, the interfaces are combined within the redundant
Ethernet interface to form a redundant Ethernet interface LAG.
Having multiple active redundant Ethernet interface links reduces the possibility of
failover. For example, when an active link is out of service, all traffic on this link is
distributed to other active redundant Ethernet interface links, instead of triggering a
redundant Ethernet active/standby failover.
Aggregated Ethernet interfaces, known as local LAGs, are also supported on either node
of a chassis cluster but cannot be added to redundant Ethernet interfaces. Likewise, any
child interface of an existing local LAG cannot be added to a redundant Ethernet interface,
and vice versa. The total maximum number of combined individual node LAG interfaces
(ae) and redundant Ethernet (reth) interfaces per cluster is 128.
However, aggregated Ethernet interfaces and redundant Ethernet interfaces can coexist,
because the functionality of a redundant Ethernet interface relies on the Junos OS
aggregated Ethernet framework.
For more information, see “Understanding Chassis Cluster Redundant Ethernet Interface
Link Aggregation Groups” on page 261.
Minimum Links
Sub-LAGs
LACP maintains a point-to-point LAG; any port connected to a third point is denied.
By design, however, a redundant Ethernet interface connects to two different systems
or two remote aggregated Ethernet interfaces.
To support LACP on redundant Ethernet interface active and standby links, a redundant
Ethernet interface is created automatically to consist of two distinct sub-LAGs, where
all active links form an active sub-LAG and all standby links form a standby sub-LAG.
In this model, LACP selection logic is applied and limited to one sub-LAG at a time. In
this way, two redundant Ethernet interface sub-LAGs are maintained simultaneously
while all the LACP advantages are preserved for each sub-LAG.
It is necessary for the switches used to connect the nodes in the cluster to have a LAG
link configured and 802.3ad enabled for each LAG on both nodes so that the aggregate
links are recognized as such and correctly pass traffic.
NOTE: The redundant Ethernet interface LAG child links from each node in
the chassis cluster must be connected to a different LAG at the peer devices.
If a single peer switch is used to terminate the redundant Ethernet interface
LAG, two separate LAGs must be used in the switch.
The lacpd process manages both the active and standby links of the redundant Ethernet
interfaces. A redundant Ethernet interface remains up when the number of active links
is equal to or greater than the configured minimum-links value. Therefore, to support
hitless failover, the LACP state on the redundant Ethernet interface standby links must
be collected and distributed before failover occurs.
• Configure Ethernet links to passively transmit PDUs, sending out link aggregation
control PDUs only when they are received from the remote end of the same link.
The local end of a child link is known as the actor, and the remote end of the link is known
as the partner. That is, the actor sends link aggregation control PDUs to its protocol
partner that convey what the actor knows about its own state and the partner’s state.
You configure the interval at which the interfaces on the remote side of the link transmit
link aggregation control PDUs by configuring the periodic statement on the interfaces on
the local side. It is the configuration on the local side that specifies the behavior of the
remote side. That is, the remote side transmits link aggregation control PDUs at the
specified interval. The interval can be fast (every second) or slow (every 30 seconds).
For more information, see “Example: Configuring LACP on Chassis Clusters” on page 271.
By default, the actor and partner transmit link aggregation control PDUs every second.
You can configure different periodic rates on active and passive interfaces. When you
configure the active and passive interfaces at different rates, the transmitter honors the
receiver’s rate.
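As a minimal sketch, assuming a reth1 interface, the following statement on the local side directs the remote side to transmit link aggregation control PDUs every second:

```
{primary:node0}[edit]
user@host# set interfaces reth1 redundant-ether-options lacp periodic fast
```

Replacing fast with slow requests the 30-second interval instead.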
Requirements
Before you begin:
Complete the tasks of enabling the chassis cluster and configuring interfaces and
redundancy groups. See “SRX Series Chassis Cluster Configuration Overview” on page 62
and “Example: Configuring Chassis Cluster Redundant Ethernet Interfaces” on page 133
for more details.
Overview
You can combine multiple physical Ethernet ports to form a logical point-to-point link,
known as a link aggregation group (LAG) or bundle. In this example, you configure LACP
on a redundant Ethernet interface of an SRX Series device in a chassis cluster.
In this example, you set the LACP mode for the reth1 interface to active and set the link
aggregation control PDU transmit interval to slow, which is every 30 seconds.
When you enable LACP, the local and remote sides of the aggregated Ethernet links
exchange protocol data units (PDUs), which contain information about the state of the
link. You can configure Ethernet links to actively transmit PDUs, or you can configure the
links to passively transmit them (sending out LACP PDUs only when they receive them
from another link). One side of the link must be configured as active for the link to be up.
Figure 50: Topology for LAGs Connecting SRX Series Devices in Chassis
Cluster to an EX Series Switch
[The figure shows SRX Series node 0 and node 1 with reth1 (192.168.2.1/24) child links
connected to two separate LAGs, ae1 and ae2, on an EX Series switch.]
In Figure 50 on page 272, the ge-3/0/0 interface on the SRX Series device is connected
to the ge-0/0/0 interface on the EX Series switch, and the ge-15/0/0 interface is connected
to the ge-0/0/1 interface on the EX Series switch. For more information about the EX
Series switch configuration,
see Configuring Aggregated Ethernet LACP (CLI Procedure).
Configuration
Step-by-Step The following example requires you to navigate various levels in the configuration
Procedure hierarchy. For instructions on how to do that, see the CLI User Guide.
[edit]
user@host# set interfaces ge-3/0/0 gigether-options redundant-parent reth1
user@host# set interfaces ge-3/0/1 gigether-options redundant-parent reth1
user@host# set interfaces ge-15/0/0 gigether-options redundant-parent reth1
user@host# set interfaces ge-15/0/1 gigether-options redundant-parent reth1
[edit]
user@host# set interfaces reth1 redundant-ether-options redundancy-group 1
[edit]
user@host# set interfaces reth1 redundant-ether-options lacp active
user@host# set interfaces reth1 redundant-ether-options lacp periodic slow
[edit]
user@host# set interfaces reth1 unit 0 family inet address 192.168.2.1/24
[edit]
user@host# commit
Verification
Action From operational mode, enter the show lacp interfaces reth1 command.
The output shows redundant Ethernet interface information, such as the following:
• The LACP state—Indicates whether the link in the bundle is an actor (local or near-end
of the link) or a partner (remote or far-end of the link).
• The LACP mode—Indicates whether both ends of the aggregated Ethernet interface
are enabled (active or passive)—at least one end of the bundle must be active.
This example shows how to specify a minimum number of physical links assigned to a
redundant Ethernet interface on the primary node that must be working for the interface
to be up.
Requirements
Before you begin:
Overview
When a redundant Ethernet interface has more than two child links, you can set a
minimum number of physical links assigned to the interface on the primary node that
must be working for the interface to be up. When the number of physical links on the
primary node falls below the minimum-links value, the interface will be down even if
some links are still working.
In this example, you specify that at least three child links bound to reth1 on the primary
node (the minimum-links value) must be working to prevent the interface from going
down. For example, in a redundant Ethernet interface LAG configuration in which six
interfaces are assigned to reth1 (three on each node), setting the minimum-links value
to 3 means that all three reth1 child links on the primary node must be working to prevent
the interface’s status from changing to down.
Configuration
{primary:node0}[edit]
user@host# set interfaces reth1 redundant-ether-options minimum-links 3
{primary:node0}[edit]
user@host# commit
Verification
Purpose Verify that the minimum-links configuration is working properly.
Action From operational mode, enter the show interfaces reth1 command.
{primary:node0}
user@host> show interfaces reth1
Physical interface: reth1, Enabled, Physical link is Down
Interface index: 129, SNMP ifIndex: 548
Link-level type: Ethernet, MTU: 1514, Speed: Unspecified, BPDU Error: None,
MAC-REWRITE Error: None, Loopback: Disabled, Source filtering: Disabled,
Flow control: Disabled, Minimum links needed: 3, Minimum bandwidth needed: 0
Device flags : Present Running
Interface flags: Hardware-Down SNMP-Traps Internal: 0x0
Related • Understanding Chassis Cluster Redundant Ethernet Interface Link Aggregation Groups
Documentation on page 261
Support for Ethernet link aggregation groups (LAGs) based on IEEE 802.3ad makes it
possible to aggregate physical interfaces on a standalone device. LAGs on standalone
devices provide increased interface bandwidth and link availability. Aggregation of links
in a chassis cluster allows a redundant Ethernet interface to add more than two physical
child interfaces, thereby creating a redundant Ethernet interface LAG.
Requirements
This example uses the following software and hardware components:
• SRX5800 device with IOC2 or IOC3 cards and Express Path enabled. For details,
see Example: Configuring SRX5K-MPC3-100G10G (IOC3) and SRX5K-MPC3-40G10G
(IOC3) on an SRX5000 Line Device to Support Express Path.
Overview
This example shows how to configure a redundant Ethernet interface link aggregation
group and configure LACP on chassis clusters on an SRX Series device using the ports
from either IOC2 or IOC3 in Express Path mode. Note that configuring child interfaces by
mixing links from both IOC2 and IOC3 is not supported. In this example, you assign the
following four interfaces to reth0:
• xe-1/0/0
• xe-3/0/0
• xe-14/0/0
• xe-16/0/0
Configuration
CLI Quick To quickly configure this example, copy the following commands, paste them into a text
Configuration file, remove any line breaks, change any details necessary to match your
network configuration, copy and paste the commands into the CLI at the [edit] hierarchy
level, and then enter commit from configuration mode.
Step-by-Step The following example requires you to navigate various levels in the configuration
Procedure hierarchy. For instructions on how to do that, see Using the CLI Editor in Configuration
Mode in CLI User Guide.
[edit]
user@host# set chassis cluster reth-count 5
[edit interfaces]
user@host# set xe-1/0/0 gigether-options redundant-parent reth0
user@host# set xe-3/0/0 gigether-options redundant-parent reth0
user@host# set xe-14/0/0 gigether-options redundant-parent reth0
user@host# set xe-16/0/0 gigether-options redundant-parent reth0
[edit interfaces]
user@host# set reth0 unit 0 family inet address 192.0.2.1/24
[edit interfaces]
user@host# set reth0 redundant-ether-options lacp active
user@host# set reth0 redundant-ether-options lacp periodic fast
user@host# set reth0 redundant-ether-options minimum-links 1
Results From configuration mode, confirm your configuration by entering the show interfaces
command. If the output does not display the intended configuration, repeat the
configuration instructions in this example to correct it.
[edit]
user@host# show interfaces
xe-1/0/0 {
gigether-options {
redundant-parent reth0;
}
}
xe-3/0/0 {
gigether-options {
redundant-parent reth0;
}
}
xe-14/0/0 {
gigether-options {
redundant-parent reth0;
}
}
xe-16/0/0 {
gigether-options {
redundant-parent reth0;
}
}
reth0 {
redundant-ether-options {
lacp {
active;
periodic fast;
}
minimum-links 1;
}
unit 0 {
family inet {
address 192.0.2.1/24;
}
}
}
ae1 {
aggregated-ether-options {
lacp {
active;
}
}
unit 0 {
family inet {
address 192.0.2.2/24;
}
}
}
[edit]
user@host# show chassis
chassis cluster {
reth-count 5;
}
If you are done configuring the device, enter commit from configuration mode.
Verification
Action From operational mode, enter the show lacp interfaces command to check that LACP
has been enabled as active on one end.
The output indicates that LACP has been set up correctly and is active at one end.
When you set up an SRX Series chassis cluster, the SRX Series devices must be identical,
including their configuration. The chassis cluster synchronization feature automatically
synchronizes the configuration from the primary node to the secondary node when the
secondary node joins the primary node as a cluster. By eliminating the manual work
needed to ensure the same configuration on each node in the cluster, this feature reduces
operational expenses.
If you want to disable automatic chassis cluster synchronization between the primary
and secondary nodes, you can do so by entering the set chassis cluster
configuration-synchronize no-secondary-bootup-auto command in configuration mode.
At any time, to reenable automatic chassis cluster synchronization, use the delete chassis
cluster configuration-synchronize no-secondary-bootup-auto command in configuration
mode.
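For example, a minimal sketch of disabling and then reenabling automatic synchronization from configuration mode:

```
{primary:node0}[edit]
user@host# set chassis cluster configuration-synchronize no-secondary-bootup-auto
user@host# commit

{primary:node0}[edit]
user@host# delete chassis cluster configuration-synchronize no-secondary-bootup-auto
user@host# commit
```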
To see whether the automatic chassis cluster synchronization is enabled or not, and to
see the status of the synchronization, enter the show chassis cluster information
configuration-synchronization operational command.
Either the entire configuration from the primary node is applied successfully to the
secondary node, or the secondary node retains its original configuration. There is no
partial synchronization.
NOTE: If you create a cluster with cluster IDs greater than 16, and then decide
to roll back to a previous release image that does not support extended
cluster IDs, the system comes up as standalone.
NOTE: If you have a cluster set up and running with an earlier release of Junos
OS, you can upgrade to Junos OS Release 12.1X45-D10 and re-create a cluster
with cluster IDs greater than 16. However, if for any reason you decide to
revert to the previous version of Junos OS that did not support extended
cluster IDs, the system comes up with standalone devices after you reboot.
However, if the cluster ID set is less than 16 and you roll back to a previous
release, the system will come back with the previous setup.
Action From the CLI, enter the show chassis cluster information configuration-synchronization
command:
{primary:node0}
user@host> show chassis cluster information configuration-synchronization
node0:
--------------------------------------------------------------------------
Configuration Synchronization:
Status:
Activation status: Enabled
Last sync operation: Auto-Sync
Last sync result: Not needed
Last sync mgd messages:
Events:
Mar 5 01:48:53.662 : Auto-Sync: Not needed.
node1:
--------------------------------------------------------------------------
Configuration Synchronization:
Status:
Activation status: Enabled
Events:
Mar 5 01:48:55.339 : Auto-Sync: In progress. Attempt: 1
Mar 5 01:49:40.664 : Auto-Sync: Succeeded. Attempt: 1
Network Time Protocol (NTP) is used to synchronize the time between the Packet
Forwarding Engine and the Routing Engine in a standalone device and between two
devices in a chassis cluster.
In both standalone and chassis cluster modes, the primary Routing Engine runs the NTP
process to get the time from the external NTP server. Although the secondary Routing
Engine also runs the NTP process in an attempt to get the time from the external NTP
server, this attempt fails because the secondary node cannot reach the server through
its revenue ports. For this reason, the secondary Routing
Engine uses NTP to get the time from the primary Routing Engine.
• Send the time from the primary Routing Engine to the secondary Routing Engine through
the chassis cluster control link.
• Get the time from an external NTP server to the primary or a standalone Routing Engine.
• Get the time from the Routing Engine NTP process to the Packet Forwarding Engine.
NOTE: On SRX Series devices, use the command set system processes
ntpd-service to configure NTP.
Starting with Junos OS Release 15.1X49-D70 and Junos OS Release 17.3R1, configuring
the NTP time adjustment threshold is supported on SRX300, SRX320, SRX340, SRX345,
SRX1500, SRX4100, SRX4200, SRX5400, SRX5600, and SRX5800 devices and vSRX
instances. This feature allows you to configure and enforce the NTP adjustment threshold
for the NTP service and helps improve the security and flexibility of the NTP service
protocol.
15.1X49-D70 Starting with Junos OS Release 15.1X49-D70 and Junos OS Release 17.3R1,
configuring the NTP time adjustment threshold is supported on SRX300, SRX320,
SRX340, SRX345, SRX1500, SRX4100, SRX4200, SRX5400, SRX5600, and
SRX5800 devices and vSRX instances. This feature allows you to configure and
enforce the NTP adjustment threshold for the NTP service and helps improve
the security and flexibility of the NTP service protocol.
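As a sketch of this feature, assuming the threshold statement is available at the [edit system ntp] hierarchy on these releases, the following rejects NTP adjustments larger than the configured value in milliseconds (the value shown is illustrative, not a recommendation):

```
{primary:node0}[edit]
user@host# set system ntp threshold 300 action reject
user@host# commit
```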
Related • Example: Simplifying Network Management by Synchronizing the Primary and Backup
Documentation Nodes with NTP on page 284
This example shows how to simplify management by synchronizing the time between
two SRX Series devices operating in a chassis cluster. Using a Network Time Protocol
(NTP) server, the primary node can synchronize time with the secondary node. NTP is
used to synchronize the time between the Packet Forwarding Engine and the Routing
Engine in a standalone device and between two devices in a chassis cluster. You need
to synchronize the system clocks on both nodes of the SRX Series devices in a chassis
cluster in order to manage the following items:
• RTO
• Licenses
• Software updates
• Node failovers
Requirements
Before you begin:
• Understand the basics of the Network Time Protocol. See NTP Overview.
Overview
When SRX Series devices are operating in chassis cluster mode, the secondary node
cannot access the external NTP server through the revenue port. Junos OS Release 12.1X47
or later supports synchronization of secondary node time with the primary node through
the control link by configuring the NTP server on the primary node.
Topology
Figure 51 on page 285 shows the time synchronization from the peer node using the control
link.
Figure 51: Synchronizing Time From Peer Node Through Control Link
In the primary node, the NTP server is reachable. The NTP process on the primary node
can synchronize the time from the NTP server, and the secondary node can synchronize
the time with the primary node through the control link.
Configuration
• Synchronizing Time from the NTP server on page 286
• Results on page 286
CLI Quick To quickly configure this example, and synchronize the time from the NTP server, copy
Configuration the following commands, paste them into a text file, remove any line breaks, change any
details necessary to match your network configuration, copy and paste the commands
into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
Step-by-Step In this example, you configure the primary node to get its time from an NTP server at IP
Procedure address 1.1.1.121. To synchronize the time from the NTP server:
{primary:node0}[edit]
[edit system]
user@host# set ntp server 1.1.1.121
user@host# commit
Results
From configuration mode, confirm your configuration by entering the show system ntp
command. If the output does not display the intended configuration, repeat the
configuration instructions in this example to correct it.
{primary:node0}[edit]
user@host# show system ntp
server 1.1.1.121
If you are done configuring the device, enter commit from configuration mode.
Verification
Confirm that the configuration is working properly.
Action From operational mode, enter the show ntp associations and show ntp status commands:
processor="i386", system="JUNOS12.1I20140320_srx_12q1_x47.1-637245",
leap=00, stratum=5, precision=-20, rootdelay=209.819,
rootdispersion=513.087, peer=14596, refid=1.1.1.121,
reftime=d6dbb2f9.b3f41ff7 Tue, Mar 25 2014 15:47:05.702, poll=6,
clock=d6dbb47a.72918b20 Tue, Mar 25 2014 15:53:30.447, state=4,
offset=-6.066, frequency=-55.135, jitter=4.343, stability=0.042
Meaning The output on the primary node shows the NTP association as follows:
• refid—Reference identifier of the remote peer. If the reference identifier is not known,
this field shows a value of 0.0.0.0.
The output on the primary node shows the NTP status as follows:
• x events—Number of events that have occurred since the last code change. An event
is often the receipt of an NTP polling message.
• system—Detailed description of the name and version of the operating system in use.
• precision—Precision of the peer clock, how precisely the frequency and time can be
maintained with this particular timekeeping system.
• refid—Reference identifier of the remote peer. If the reference identifier is not known,
this field shows a value of 0.0.0.0.
• reftime—Local time, in timestamp format, when the local clock was last updated. If
the local clock has never been synchronized, the value is zero.
Action From operational mode, enter the show ntp associations command:
Meaning The output on the secondary node shows the NTP association as follows:
• refid—Reference identifier of the remote peer. If the reference identifier is not known,
this field shows a value of 0.0.0.0.
The output on the secondary node shows the NTP status as follows:
• x events—Number of events that have occurred since the last code change. An event
is often the receipt of an NTP polling message.
• system—Detailed description of the name and version of the operating system in use.
• precision—Precision of the peer clock, how precisely the frequency and time can be
maintained with this particular timekeeping system.
• refid—Reference identifier of the remote peer. If the reference identifier is not known,
this field shows a value of 0.0.0.0.
• reftime—Local time, in timestamp format, when the local clock was last updated. If
the local clock has never been synchronized, the value is zero.
In this case, a single device in the cluster is used to route all traffic while the other device
is used only in the event of a failure (see Figure 52 on page 294). When a failure occurs,
the backup device becomes master and controls all forwarding.
[Figure 52 shows the cluster connected through reth1.0 and reth0.0 to EX Series switches,
with the trust zone below the cluster.]
This configuration minimizes the traffic over the fabric link because only one node in the
cluster forwards traffic at any given time.
Related • Example: Configuring an Active/Passive Chassis Cluster Pair (CLI) on page 294
Documentation
• Example: Configuring an Active/Passive Chassis Cluster Pair (J-Web) on page 306
This example shows how to configure an active/passive chassis cluster for an SRX1500
device.
Requirements
Before you begin:
1. Physically connect the two devices together, ensuring that they are the same model.
2. Create a fabric link by connecting a Gigabit Ethernet interface on one device to another
Gigabit Ethernet interface on the other device.
3. Create a control link by connecting the control port of the two SRX1500 devices.
4. Connect to one of the devices using the console port (this is the node that forms the
cluster), and set the cluster ID and node number.
5. Connect to the other device using the console port and set the cluster ID and node
number.
Overview
In this example, a single device in the cluster is used to route all traffic, and the other
device is used only in the event of a failure. (See Figure 53 on page 295.) When a failure
occurs, the backup device becomes master and controls all forwarding.
[Figure 53: Active/passive chassis cluster topology. reth1.0 (203.0.113.233/24), built from ge-0/0/5 and ge-7/0/5, connects through EX Series switches to the untrust zone; reth0.0 (198.51.100.1/24), built from ge-0/0/4 and ge-7/0/4, connects through EX Series switches to the trust zone.]
In this example, you configure group (applying the configuration with the apply-groups
command) and chassis cluster information. Then you configure security zones and security
policies. See Table 25 on page 296 through Table 28 on page 297.
Chassis cluster configuration parameters:
• Heartbeat threshold: 3
• Redundancy group 1 priority:
• Node 0: 254
• Node 1: 1
• Interface monitoring:
• ge-0/0/4
• ge-7/0/4
• ge-0/0/5
• ge-7/0/5
• reth0, unit 0: 198.51.100.1/24
• reth1, unit 0: 203.0.113.233/24
The security policy ANY permits traffic from the trust zone to the untrust zone:
• Match criteria: source-address any, destination-address any, application any
• Action: permit
Configuration
CLI Quick To quickly configure this example, copy the following commands, paste them into a text
Configuration file, remove any line breaks, change any details necessary to match your network
configuration, copy and paste the commands into the CLI at the [edit] hierarchy level,
and then enter commit from configuration mode.
[edit]
set groups node0 system host-name srx1500-A
set groups node0 interfaces fxp0 unit 0 family inet address 192.0.2.110/24
set groups node1 system host-name srx1500-B
set groups node1 interfaces fxp0 unit 0 family inet address 192.0.2.111/24
set apply-groups "${node}"
set interfaces fab0 fabric-options member-interfaces ge-0/0/1
set interfaces fab1 fabric-options member-interfaces ge-7/0/1
set chassis cluster heartbeat-interval 1000
set chassis cluster heartbeat-threshold 3
set chassis cluster redundancy-group 0 node 0 priority 100
set chassis cluster redundancy-group 0 node 1 priority 1
set chassis cluster redundancy-group 1 node 0 priority 100
set chassis cluster redundancy-group 1 node 1 priority 1
set chassis cluster redundancy-group 1 interface-monitor ge-0/0/4 weight 255
set chassis cluster redundancy-group 1 interface-monitor ge-7/0/4 weight 255
set chassis cluster redundancy-group 1 interface-monitor ge-0/0/5 weight 255
set chassis cluster redundancy-group 1 interface-monitor ge-7/0/5 weight 255
set chassis cluster reth-count 2
set interfaces ge-0/0/5 gigether-options redundant-parent reth1
set interfaces ge-7/0/5 gigether-options redundant-parent reth1
set interfaces ge-0/0/4 gigether-options redundant-parent reth0
set interfaces ge-7/0/4 gigether-options redundant-parent reth0
set interfaces reth0 redundant-ether-options redundancy-group 1
set interfaces reth0 unit 0 family inet address 198.51.100.1/24
set interfaces reth1 redundant-ether-options redundancy-group 1
set interfaces reth1 unit 0 family inet address 203.0.113.233/24
set security zones security-zone untrust interfaces reth1.0
set security zones security-zone trust interfaces reth0.0
set security policies from-zone trust to-zone untrust policy ANY match source-address
any
set security policies from-zone trust to-zone untrust policy ANY match destination-address
any
set security policies from-zone trust to-zone untrust policy ANY match application any
set security policies from-zone trust to-zone untrust policy ANY then permit
{primary:node0}[edit]
user@host# set groups node0 system host-name srx1500-A
user@host# set groups node0 interfaces fxp0 unit 0 family inet address
192.0.2.110/24
user@host# set groups node1 system host-name srx1500-B
user@host# set groups node1 interfaces fxp0 unit 0 family inet address 192.0.2.111/24
user@host# set apply-groups "${node}"
{primary:node0}[edit]
user@host# set interfaces fab0 fabric-options member-interfaces ge-0/0/1
user@host# set interfaces fab1 fabric-options member-interfaces ge-7/0/1
{primary:node0}[edit]
user@host# set chassis cluster heartbeat-interval 1000
user@host# set chassis cluster heartbeat-threshold 3
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 0 node 0 priority 100
user@host# set chassis cluster redundancy-group 0 node 1 priority 1
user@host# set chassis cluster redundancy-group 1 node 0 priority 100
user@host# set chassis cluster redundancy-group 1 node 1 priority 1
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-0/0/4
weight 255
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-7/0/4
weight 255
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-0/0/5
weight 255
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-7/0/5
weight 255
{primary:node0}[edit]
user@host# set chassis cluster reth-count 2
user@host# set interfaces ge-0/0/5 gigether-options redundant-parent reth1
user@host# set interfaces ge-7/0/5 gigether-options redundant-parent reth1
user@host# set interfaces ge-0/0/4 gigether-options redundant-parent reth0
user@host# set interfaces ge-7/0/4 gigether-options redundant-parent reth0
user@host# set interfaces reth0 redundant-ether-options redundancy-group 1
user@host# set interfaces reth0 unit 0 family inet address 198.51.100.1/24
user@host# set interfaces reth1 redundant-ether-options redundancy-group 1
user@host# set interfaces reth1 unit 0 family inet address 203.0.113.233/24
{primary:node0}[edit]
user@host# set security zones security-zone untrust interfaces reth1.0
user@host# set security zones security-zone trust interfaces reth0.0
{primary:node0}[edit]
user@host# set security policies from-zone trust to-zone untrust policy ANY match
source-address any
user@host# set security policies from-zone trust to-zone untrust policy ANY match
destination-address any
user@host# set security policies from-zone trust to-zone untrust policy ANY match
application any
user@host# set security policies from-zone trust to-zone untrust policy ANY then
permit
Results From configuration mode, confirm your configuration by entering the show configuration
command. If the output does not display the intended configuration, repeat the
configuration instructions in this example to correct it.
For brevity, this show command output includes only the configuration that is relevant
to this example. Any other configuration on the system has been replaced with ellipses
(...).
}
}
interfaces {
ge-0/0/4 {
gigether-options {
redundant-parent reth0;
}
}
ge-7/0/4 {
gigether-options {
redundant-parent reth0;
}
}
ge-0/0/5 {
gigether-options {
redundant-parent reth1;
}
}
ge-7/0/5 {
gigether-options {
redundant-parent reth1;
}
}
fab0 {
fabric-options {
member-interfaces {
ge-0/0/1;
}
}
}
fab1 {
fabric-options {
member-interfaces {
ge-7/0/1;
}
}
}
reth0 {
redundant-ether-options {
redundancy-group 1;
}
unit 0 {
family inet {
address 198.51.100.1/24;
}
}
}
reth1 {
redundant-ether-options {
redundancy-group 1;
}
unit 0 {
family inet {
address 203.0.113.233/24;
}
}
}
}
...
security {
zones {
security-zone untrust {
interfaces {
reth1.0;
}
}
security-zone trust {
interfaces {
reth0.0;
}
}
}
policies {
from-zone trust to-zone untrust {
policy ANY {
match {
source-address any;
destination-address any;
application any;
}
then {
permit;
}
}
}
}
}
If you are done configuring the device, enter commit from configuration mode.
Verification
Confirm that the configuration is working properly.
Purpose Verify the chassis cluster status, failover status, and redundancy group information.
Action From operational mode, enter the show chassis cluster status command.
{primary:node0}
user@host> show chassis cluster status
Cluster ID: 1
Node Priority Status Preempt Manual failover
Action From operational mode, enter the show chassis cluster interfaces command.
{primary:node0}
user@host> show chassis cluster interfaces
Control interfaces:
Index Interface Monitored-Status Security
0 em0 Up Disabled
1 em1 Down Disabled
Fabric interfaces:
Name Child-interface Status Security
fab0 ge-0/0/1 Up Disabled
fab0
fab1 ge-7/0/1 Up Disabled
fab1
Redundant-ethernet Information:
Name Status Redundancy-group
reth0 Up 1
reth1 Up 1
Redundant-pseudo-interface Information:
Name Status Redundancy-group
lo0 Up 1
Interface Monitoring:
Interface Weight Status Redundancy-group
ge-0/0/4 255 Up 1
ge-7/0/4 255 Up 1
ge-0/0/5 255 Up 1
ge-7/0/5 255 Up 1
Purpose Verify information about the statistics of the different objects being synchronized, the
fabric and control interface hellos, and the status of the monitored interfaces in the
cluster.
Action From operational mode, enter the show chassis cluster statistics command.
{primary:node0}
user@host> show chassis cluster statistics
Purpose Verify information about chassis cluster control plane statistics (heartbeats sent and
received) and the fabric link statistics (probes sent and received).
Action From operational mode, enter the show chassis cluster control-plane statistics command.
{primary:node0}
user@host> show chassis cluster control-plane statistics
Purpose Verify information about the number of RTOs sent and received for services.
Action From operational mode, enter the show chassis cluster data-plane statistics command.
{primary:node0}
user@host> show chassis cluster data-plane statistics
Services Synchronized:
Service name RTOs sent RTOs received
Translation context 0 0
Incoming NAT 0 0
Resource manager 6 0
Session create 161 0
Session close 148 0
Session change 0 0
Gate create 0 0
Session ageout refresh requests 0 0
Session ageout refresh replies 0 0
IPSec VPN 0 0
Firewall user authentication 0 0
MGCP ALG 0 0
H323 ALG 0 0
SIP ALG 0 0
SCCP ALG 0 0
PPTP ALG 0 0
RPC ALG 0 0
RTSP ALG 0 0
RAS ALG 0 0
MAC address learning 0 0
GPRS GTP 0 0
Purpose Verify the state and priority of both nodes in a cluster and information about whether
the primary node has been preempted or whether there has been a manual failover.
Action From operational mode, enter the show chassis cluster status redundancy-group command.
{primary:node0}
user@host> show chassis cluster status redundancy-group 1
Cluster ID: 1
Node Priority Status Preempt Manual failover
Purpose Use these logs to identify any chassis cluster issues. You must review the logs on both
nodes.
Heartbeat Threshold: 3
Nodes: 0
Group Number: 0
Priorities: 100
Nodes: 0
Group Number: 1
Priorities: 1
Nodes: 1
Group Number: 0
Priorities: 100
• Select ge-0/0/4.
• Click Apply.
• Select ge-7/0/4.
• Click Apply.
• Select ge-0/0/5.
• Click Apply.
• Select ge-7/0/5.
• Click Apply.
This example shows how to set up basic active/passive chassis clustering on an SRX
Series device (in this example, an SRX5800 device).
Requirements
Before you begin:
• You need two SRX5800 Services Gateways with identical hardware configurations,
one MX240 edge router, and one EX8208 Ethernet Switch.
• Physically connect the two devices (back-to-back for the fabric and control ports)
and ensure that they are the same models.
• Before the cluster is formed, you must configure control ports for each device, assign
a cluster ID and node ID to each device, and then reboot. When the system boots,
both nodes come up as a cluster.
• To ensure secure login, configure the internal IPsec security association (SA). When
the internal IPsec SA is configured, IPsec-based rlogin and remote command (rcmd)
are enforced, so an attacker cannot gain privileged access or observe traffic containing
administrator commands and outputs. You do not need to configure the internal
IPsec SA on both nodes; when you commit the configuration, both nodes are
synchronized. Only the 3des-cbc encryption algorithm is supported. You must ensure
that the manual encryption key is ASCII text and 24 characters long; otherwise, the
configuration results in a commit failure.
You can optionally enable iked-encryption. The device must be rebooted after this
option is configured.
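A sketch of such an internal SA configuration, using the internal security-association hierarchy described above; the key shown is a 24-character ASCII placeholder, so replace it with your own:

```
set security ipsec internal security-association manual encryption algorithm 3des-cbc
set security ipsec internal security-association manual encryption key ascii-text "juniper123juniper1230000"
```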
• Use the show chassis cluster interfaces CLI command to verify that internal SA is
enabled:
Control interfaces:
Index Interface Status Internal SA <- new column
0 em0 Up enabled
1 em1 Down enabled
• Configure the control port for each device, and commit the configuration.
Select FPC 1/13, because the central point is always on the lowest SPC/SPU in the
cluster (for this example, it is slot 0). For maximum reliability, place the control ports
on a separate SPC from the central point (for this example, use the SPC in slot 1).
You must enter the operational mode commands on both devices. For example:
• On node 0:
• On node 1:
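For example, using the SPC in slot 1 of each chassis (on an SRX5800 cluster, node 1's slots are renumbered starting at 12, so its slot 1 appears as FPC 13), enter the following configuration on both devices and commit:

```
user@host# set chassis cluster control-ports fpc 1 port 0
user@host# set chassis cluster control-ports fpc 13 port 0
user@host# commit
```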
• Set the two devices to cluster mode. A reboot is required to enter into cluster mode
after the cluster ID and node ID are set. You can cause the system to boot
automatically by including the reboot parameter in the CLI command line. You must
enter the operational mode commands on both devices. For example:
• On node 0:
• On node 1:
The cluster ID is the same on both devices, but the node ID must be different because
one device is node 0 and the other device is node 1. The range for the cluster ID is 1
through 255. Setting the cluster ID to 0 is equivalent to disabling the cluster. A cluster
ID greater than 15 can be set only when the fabric and control link interfaces are
connected back-to-back.
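As a sketch, with an illustrative cluster ID of 1, the commands entered on node 0 and node 1, respectively, would be:

```
user@host> set chassis cluster cluster-id 1 node 0 reboot
user@host> set chassis cluster cluster-id 1 node 1 reboot
```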
Now the devices are a pair. From this point forward, configuration of the cluster is
synchronized between the node members, and the two separate devices function as one
device.
Overview
This example shows how to set up basic active/passive chassis clustering on an SRX
Series device. The basic active/passive example is the most common type of chassis
cluster.
• One device actively provides routing, firewall, NAT, VPN, and security services, along
with maintaining control of the chassis cluster.
• The other device passively maintains its state for cluster failover capabilities in case
the active device becomes inactive.
NOTE: This active/passive mode example for the SRX5800 Services Gateway
does not describe in detail miscellaneous configurations such as how to
configure NAT, security policies, or VPNs. They are essentially the same as
they would be for standalone configurations. See Introduction to NAT, Security
Policies Overview, and IPsec VPN Overview. However, if you are performing
proxy ARP in chassis cluster configurations, you must apply the proxy ARP
configurations to the reth interfaces rather than the member interfaces
because the reth interfaces hold the logical configurations. See Configuring
Proxy ARP (CLI Procedure). You can also configure separate logical interface
configurations using VLANs and trunked interfaces in the SRX5800 Services
Gateway. These configurations are similar to the standalone implementations
using VLANs and trunked interfaces.
Configuration
CLI Quick To quickly configure this example, copy the following commands, paste them into a text
Configuration file, remove any line breaks, change any details necessary to match your network
configuration, copy and paste the commands into the CLI at the [edit] hierarchy level,
and then enter commit from configuration mode.
On {primary:node0}
[edit]
set interfaces fab0 fabric-options member-interfaces ge-11/3/0
set interfaces fab1 fabric-options member-interfaces ge-23/3/0
set groups node0 system host-name SRX5800-1
set groups node0 interfaces fxp0 unit 0 family inet address 10.3.5.1/24
set groups node0 system backup-router 10.3.5.254 destination 10.0.0.0/16
set groups node1 system host-name SRX5800-2
set groups node1 interfaces fxp0 unit 0 family inet address 10.3.5.2/24
set groups node1 system backup-router 10.3.5.254 destination 10.0.0.0/16
To quickly configure an EX8208 Core Switch, copy the following commands, paste them
into a text file, remove any line breaks, change any details necessary to match your
network configuration, copy and paste the commands into the CLI at the [edit] hierarchy
level, and then enter commit from configuration mode.
On {primary:node0}
[edit]
set interfaces xe-1/0/0 unit 0 family ethernet-switching port-mode access vlan members
SRX5800
set interfaces xe-2/0/0 unit 0 family ethernet-switching port-mode access vlan members
SRX5800
set interfaces vlan unit 50 family inet address 2.2.2.254/24
set vlans SRX5800 vlan-id 50
set vlans SRX5800 l3-interface vlan.50
set routing-options static route 0.0.0.0/0 next-hop 2.2.2.1/24
To quickly configure an MX240 edge router, copy the following commands, paste them
into a text file, remove any line breaks, change any details necessary to match your
network configuration, copy and paste the commands into the CLI at the [edit] hierarchy
level, and then enter commit from configuration mode.
On {primary:node0}
[edit]
set interfaces xe-1/0/0 encapsulation ethernet-bridge unit 0 family ethernet-switching
set interfaces xe-2/0/0 encapsulation ethernet-bridge unit 0 family ethernet-switching
set interfaces irb unit 0 family inet address 1.1.1.254/24
set routing-options static route 2.0.0.0/8 next-hop 1.1.1.1
set routing-options static route 0.0.0.0/0 next-hop (upstream router)
set vlans SRX5800 vlan-id X (could be set to "none")
set vlans SRX5800 domain-type bridge routing-interface irb.0
Step-by-Step The following example requires you to navigate various levels in the configuration
Procedure hierarchy. For instructions on how to do that, see Using the CLI Editor in Configuration
Mode in the CLI User Guide.
NOTE: In cluster mode, the configuration is synchronized between the nodes when
you execute a commit command. All commands are applied to both nodes
regardless of which device they are entered on.
1. Configure the fabric (data) ports of the cluster that are used to pass RTOs in
active/passive mode. For this example, use one of the 1-Gigabit Ethernet ports
because running out of bandwidth using active/passive mode is not an issue. Define
two fabric interfaces, one on each chassis, to connect together.
{primary:node0}[edit]
user@host# set interfaces fab0 fabric-options member-interfaces ge-11/3/0
user@host# set interfaces fab1 fabric-options member-interfaces ge-23/3/0
2. Set up node-specific information, such as host names and management interface
addresses, using configuration groups.
{primary:node0}[edit]
user@host# set groups node0 system host-name SRX5800-1
user@host# set groups node0 interfaces fxp0 unit 0 family inet address 10.3.5.1/24
user@host# set groups node0 system backup-router 10.3.5.254 destination 10.0.0.0/16
user@host# set groups node1 system host-name SRX5800-2
user@host# set groups node1 interfaces fxp0 unit 0 family inet address 10.3.5.2/24
user@host# set groups node1 system backup-router 10.3.5.254 destination 10.0.0.0/16
user@host# set apply-groups "${node}"
3. Configure redundancy groups for chassis clustering. Each node's interfaces belong
to redundancy groups; the interfaces in a redundancy group are active when that
group is active on the node, and one redundancy group can contain multiple
interfaces. Redundancy group 0 controls the control plane, and redundancy groups
1 and higher control the data plane and include the data plane ports. Because only
one chassis cluster member is active at a time in this active/passive mode example,
you need to define only redundancy groups 0 and 1.
Besides redundancy groups, you must also define:
• Priority for control plane and data plane—Define which device has priority (for
chassis cluster, high priority is preferred) for the control plane, and which device
is preferred to be active for the data plane.
NOTE:
• In active/passive or active/active mode, the control plane
{primary:node0}[edit]
user@host# set chassis cluster reth-count 2
user@host# set chassis cluster redundancy-group 0 node 0 priority 129
user@host# set chassis cluster redundancy-group 0 node 1 priority 128
user@host# set chassis cluster redundancy-group 1 node 0 priority 129
user@host# set chassis cluster redundancy-group 1 node 1 priority 128
4. Configure the data interfaces on the platform so that in the event of a data plane
failover, the other chassis cluster member can take over the connection seamlessly.
A data plane failover results in a seamless transition to the new active node. In the
case of a control plane failover, all daemons are restarted on the new node, enabling
a graceful restart so that neighbor relationships with peers (OSPF, BGP) are not
lost. This promotes a seamless transition to the new node without packet loss.
• Define the membership information of the member interfaces to the reth interface.
• Define which redundancy group the reth interface is a member of. For this
active/passive example, it is always 1.
{primary:node0}[edit]
user@host# set interfaces xe-6/0/0 gigether-options redundant-parent reth0
user@host# set interfaces xe-6/1/0 gigether-options redundant-parent reth1
user@host# set interfaces xe-18/0/0 gigether-options redundant-parent reth0
user@host# set interfaces xe-18/1/0 gigether-options redundant-parent reth1
user@host# set interfaces reth0 redundant-ether-options redundancy-group 1
user@host# set interfaces reth0 unit 0 family inet address 1.1.1.1/24
user@host# set interfaces reth1 redundant-ether-options redundancy-group 1
user@host# set interfaces reth1 unit 0 family inet address 2.2.2.1/24
5. Configure the chassis cluster behavior in case of a failure. For the SRX5800 Services
Gateway, the failover threshold is set at 255. You can alter the weights to determine
their impact on chassis failover. You must also configure control link recovery, which
causes the secondary node to reboot automatically if the control link fails and then
comes back online. Enter these commands on node 0.
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 interface-monitor xe-6/0/0
weight 255
user@host# set chassis cluster redundancy-group 1 interface-monitor xe-6/1/0
weight 255
user@host# set chassis cluster redundancy-group 1 interface-monitor xe-18/0/0
weight 255
user@host# set chassis cluster redundancy-group 1 interface-monitor xe-18/1/0
weight 255
user@host# set chassis cluster control-link-recovery
This step completes the chassis cluster configuration part of the active/passive
mode example for the SRX5800 Services Gateway. The rest of this procedure
describes how to configure the zone, virtual router, routing, EX8208 Core Switch,
and MX240 Edge Router to complete the deployment scenario.
6. Configure and connect the reth interfaces to the appropriate zones and virtual
routers. For this example, leave the reth0 and reth1 interfaces in the default virtual
router inet.0, which does not require any additional configuration.
{primary:node0}[edit]
user@host# set security zones security-zone untrust interfaces reth0.0
user@host# set security zones security-zone trust interfaces reth1.0
7. For this active/passive mode example, because of the simple network architecture,
use static routes to define how to route to the other network devices.
{primary:node0}[edit]
user@host# set routing-options static route 0.0.0.0/0 next-hop 1.1.1.254
user@host# set routing-options static route 2.0.0.0/8 next-hop 2.2.2.254
8. For the EX8208 Ethernet Switch, the following commands provide only an outline
of the applicable configuration as it pertains to this active/passive mode example
for the SRX5800 Services Gateway; most notably the VLANs, routing, and interface
configuration.
{primary:node0}[edit]
9. For the MX240 edge router, the following commands provide only an outline of the
applicable configuration as it pertains to this active/passive mode example for the
SRX5800 Services Gateway; most notably you must use an IRB interface within a
virtual switch instance on the switch.
{primary:node0}[edit]
user@host# set interfaces xe-1/0/0 encapsulation ethernet-bridge unit 0 family
ethernet-switching
user@host# set interfaces xe-2/0/0 encapsulation ethernet-bridge unit 0 family
ethernet-switching
user@host# set interfaces irb unit 0 family inet address 1.1.1.254/24
user@host# set routing-options static route 2.0.0.0/8 next-hop 1.1.1.1
user@host# set routing-options static route 0.0.0.0/0 next-hop (upstream router)
user@host# set vlans SRX5800 vlan-id X (could be set to "none")
user@host# set vlans SRX5800 domain-type bridge routing-interface irb.0
user@host# set vlans SRX5800 domain-type bridge interface xe-1/0/0
user@host# set vlans SRX5800 domain-type bridge interface xe-2/0/0
Results From operational mode, confirm your configuration by entering the show configuration
command. If the output does not display the intended configuration, repeat the
configuration instructions in this example to correct it.
}
interfaces {
fxp0 {
unit 0 {
family inet {
address 10.3.5.1/24;
}
}
}
}
}
node1 {
system {
host-name SRX5800-2;
backup-router 10.3.5.254 destination 10.0.0.0/16;
}
interfaces {
fxp0 {
unit 0 {
family inet {
address 10.3.5.2/24;
}
}
}
}
}
}
apply-groups "${node}";
system {
root-authentication {
encrypted-password "$ABC1234EFGH5678IJKL9101";
}
name-server {
4.2.2.2;
}
services {
ssh {
root-login allow;
}
netconf {
ssh;
}
web-management {
http {
interface fxp0.0;
}
}
}
}
chassis {
cluster {
control-link-recovery;
reth-count 2;
control-ports {
fpc 1 port 0;
fpc 13 port 0;
}
redundancy-group 0 {
node 0 priority 129;
node 1 priority 128;
}
redundancy-group 1 {
node 0 priority 129;
node 1 priority 128;
interface-monitor {
xe-6/0/0 weight 255;
xe-6/1/0 weight 255;
xe-18/0/0 weight 255;
xe-18/1/0 weight 255;
}
}
}
}
interfaces {
xe-6/0/0 {
gigether-options {
redundant-parent reth0;
}
}
xe-6/1/0 {
gigether-options {
redundant-parent reth1;
}
}
xe-18/0/0 {
gigether-options {
redundant-parent reth0;
}
}
xe-18/1/0 {
gigether-options {
redundant-parent reth1;
}
}
fab0 {
fabric-options {
member-interfaces {
ge-11/3/0;
}
}
}
fab1 {
fabric-options {
member-interfaces {
ge-23/3/0;
}
}
}
reth0 {
redundant-ether-options {
redundancy-group 1;
}
unit 0 {
family inet {
address 1.1.1.1/24;
}
}
}
reth1 {
redundant-ether-options {
redundancy-group 1;
}
unit 0 {
family inet {
address 2.2.2.1/24;
}
}
}
}
routing-options {
static {
route 0.0.0.0/0 {
next-hop 1.1.1.254;
}
route 2.0.0.0/8 {
next-hop 2.2.2.254;
}
}
}
security {
zones {
security-zone trust {
host-inbound-traffic {
system-services {
all;
}
}
interfaces {
reth0.0;
}
}
security-zone untrust {
interfaces {
reth1.0;
}
}
}
policies {
from-zone trust to-zone untrust {
policy 1 {
match {
source-address any;
destination-address any;
application any;
}
then {
permit;
}
}
}
default-policy {
deny-all;
}
}
}
}
If you are done configuring the device, enter commit from configuration mode.
Verification
Confirm that the configuration is working properly.
Purpose Verify the chassis cluster status, failover status, and redundancy group information.
Action From operational mode, enter the show chassis cluster status command.
{primary:node0}
user@host> show chassis cluster status
Cluster ID: 1
Node Priority Status Preempt Manual failover
Action From operational mode, enter the show chassis cluster interfaces command.
{primary:node0}
user@host> show chassis cluster interfaces
Control link name: fxp1
Redundant-ethernet Information:
Name Status Redundancy-group
reth0 Up 1
reth1 Up 1
Interface Monitoring:
Interface Weight Status Redundancy-group
xe-6/0/0 255 Up 1
xe-6/1/0 255 Up 1
xe-18/0/0 255 Up 1
xe-18/1/0 255 Up 1
Purpose Verify information about chassis cluster services and control link statistics (heartbeats
sent and received), fabric link statistics (probes sent and received), and the number of
RTOs sent and received for services.
Action From operational mode, enter the show chassis cluster statistics command.
{primary:node0}
Purpose Verify information about chassis cluster control plane statistics (heartbeats sent and
received) and the fabric link statistics (probes sent and received).
Action From operational mode, enter the show chassis cluster control-plane statistics command.
{primary:node0}
user@host> show chassis cluster control-plane statistics
Purpose Verify information about the number of RTOs sent and received for services.
Action From operational mode, enter the show chassis cluster data-plane statistics command.
{primary:node0}
user@host> show chassis cluster data-plane statistics
Services Synchronized:
Service name RTOs sent RTOs received
Translation context 0 0
Incoming NAT 0 0
Resource manager 6 0
Session create 161 0
Session close 148 0
Session change 0 0
Gate create 0 0
Session ageout refresh requests 0 0
Session ageout refresh replies 0 0
IPSec VPN 0 0
Firewall user authentication 0 0
MGCP ALG 0 0
H323 ALG 0 0
SIP ALG 0 0
SCCP ALG 0 0
PPTP ALG 0 0
RPC ALG 0 0
RTSP ALG 0 0
RAS ALG 0 0
MAC address learning 0 0
GPRS GTP 0 0
Purpose Verify the state and priority of both nodes in a cluster and information about whether
the primary node has been preempted or whether there has been a manual failover.
Action From operational mode, enter the show chassis cluster status redundancy-group command.
{primary:node0}
user@host> show chassis cluster status redundancy-group 1
Cluster ID: 1
Node Priority Status Preempt Manual failover
Purpose Use these logs to identify any chassis cluster issues. You must review the logs on both
nodes.
In this case, a single device in the cluster terminates an IPsec tunnel and is used to
process all traffic while the other device is used only in the event of a failure (see
Figure 55 on page 324). When a failure occurs, the backup device becomes master and
controls all forwarding.
This example shows how to configure active/passive chassis clustering with an IPsec
tunnel for SRX Series devices.
Requirements
Before you begin:
• Get two SRX5000 models with identical hardware configurations, one SRX1500 device,
and four EX Series Ethernet switches.
• Physically connect the two devices (back-to-back for the fabric and control ports)
and ensure that they are the same models. You can configure both the fabric and
control ports on the SRX5000 line.
• Set the two devices to cluster mode and reboot the devices. You must enter the
following operational mode commands on both devices, for example:
• On node 0:
• On node 1:
The cluster ID is the same on both devices, but the node ID must be different because
one device is node 0 and the other device is node 1. The range for the cluster ID is 1
through 255. Setting the cluster ID to 0 is equivalent to disabling the cluster.
A cluster ID greater than 15 can be set only when the fabric and control link interfaces
are connected back-to-back.
From this point forward, configuration of the cluster is synchronized between the node
members and the two separate devices function as one device. Member-specific
configurations (such as the IP address of the management port of each member) are
entered using configuration groups.
Overview
In this example, a single device in the cluster terminates an IPsec tunnel and is used
to process all traffic, and the other device is used only in the event of a failure. (See
Figure 56 on page 326.) When a failure occurs, the backup device becomes master and
controls all forwarding.
In this example, you configure group (applying the configuration with the apply-groups
command) and chassis cluster information. Then you configure IKE, IPsec, static route,
security zone, and security policy parameters. See Table 29 on page 326 through
Table 35 on page 329.
Chassis cluster configuration parameters:
• Heartbeat threshold: 3
• Redundancy group 1 priority:
• Node 0: 254
• Node 1: 1
• Interface monitoring:
• xe-5/0/0
• xe-5/1/0
• xe-17/0/0
• xe-17/1/0
• Unit 0: 10.1.1.60/16
• st0 (multipoint), unit 0: 10.10.1.1/30
IKE configuration:
• Proposal: proposal-set standard
NOTE: On SRX5000 line devices, only reth interfaces are supported for IKE external
interface configuration in IPsec VPN. Other interface types can be configured, but IPsec
VPN might not work.
On some SRX Series devices, reth interfaces and the lo0 interface are supported for
IKE external interface configuration in IPsec VPN. Other interface types can be
configured, but IPsec VPN might not work.
On all SRX5000 line devices, the lo0 logical interface cannot be configured with RG0
if used as an IKE gateway external interface.
IPsec policy: std
NOTE: The manual VPN name and the site-to-site gateway name cannot
be the same.
Policy ANY permits traffic from the trust zone to the untrust zone.
• Match criteria: source-address any, destination-address any, application any
• Action: permit
Policy vpn-any permits traffic from the trust zone to the vpn zone.
• Match criteria: source-address any, destination-address any, application any
• Action: permit
Configuration
CLI Quick Configuration
To quickly configure this example, copy the following commands, paste them into a text
file, remove any line breaks, change any details necessary to match your network
configuration, copy and paste the commands into the CLI at the [edit] hierarchy level,
and then enter commit from configuration mode.
{primary:node0}[edit]
set chassis cluster control-ports fpc 2 port 0
set chassis cluster control-ports fpc 14 port 0
set groups node0 system host-name SRX5800-1
set groups node0 interfaces fxp0 unit 0 family inet address 172.19.100.50/24
set groups node1 system host-name SRX5800-2
set groups node1 interfaces fxp0 unit 0 family inet address 172.19.100.51/24
set apply-groups "${node}"
set interfaces fab0 fabric-options member-interfaces xe-5/3/0
set interfaces fab1 fabric-options member-interfaces xe-17/3/0
set chassis cluster reth-count 2
set chassis cluster heartbeat-interval 1000
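The heartbeat-interval and heartbeat-threshold settings determine how quickly a node
failure is detected: the peer is declared down after the threshold number of consecutive
heartbeats is missed. A minimal sketch of that arithmetic (illustrative only, not Junos
source code):

```python
def failover_detection_time_ms(heartbeat_interval_ms, heartbeat_threshold):
    # A peer is declared down after `heartbeat_threshold` consecutive
    # heartbeats are missed, each expected every `heartbeat_interval_ms`.
    return heartbeat_interval_ms * heartbeat_threshold

# With interval 1000 ms and threshold 3, as in this example,
# detection takes roughly 3 seconds.
print(failover_detection_time_ms(1000, 3))  # 3000
```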
{primary:node0}[edit]
user@host# set chassis cluster control-ports fpc 2 port 0
user@host# set chassis cluster control-ports fpc 14 port 0
{primary:node0}[edit]
user@host# set groups node0 system host-name SRX5800-1
user@host# set groups node0 interfaces fxp0 unit 0 family inet address
172.19.100.50/24
user@host# set groups node1 system host-name SRX5800-2
user@host# set groups node1 interfaces fxp0 unit 0 family inet address
172.19.100.51/24
user@host# set apply-groups "${node}"
{primary:node0}[edit]
user@host# set interfaces fab0 fabric-options member-interfaces xe-5/3/0
user@host# set interfaces fab1 fabric-options member-interfaces xe-17/3/0
{primary:node0}[edit]
user@host# set chassis cluster reth-count 2
user@host# set chassis cluster heartbeat-interval 1000
user@host# set chassis cluster heartbeat-threshold 3
user@host# set chassis cluster node 0
user@host# set chassis cluster node 1
user@host# set chassis cluster redundancy-group 0 node 0 priority 254
user@host# set chassis cluster redundancy-group 0 node 1 priority 1
user@host# set chassis cluster redundancy-group 1 node 0 priority 254
user@host# set chassis cluster redundancy-group 1 node 1 priority 1
user@host# set chassis cluster redundancy-group 1 preempt
user@host# set chassis cluster redundancy-group 1 interface-monitor xe-5/0/0
weight 255
user@host# set chassis cluster redundancy-group 1 interface-monitor xe-5/1/0
weight 255
user@host# set chassis cluster redundancy-group 1 interface-monitor xe-17/0/0
weight 255
user@host# set chassis cluster redundancy-group 1 interface-monitor xe-17/1/0
weight 255
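Each monitored interface here is assigned a weight of 255, which equals the redundancy
group's failover threshold, so the failure of any one link triggers a failover. The weight
accounting can be sketched as follows (a hypothetical illustration, not Junos source
code; interface names are placeholders):

```python
def redundancy_group_fails_over(monitored, threshold=255):
    # Sum the weights of monitored interfaces that are currently down;
    # the redundancy group fails over once the total reaches the threshold.
    down_weight = sum(weight for weight, up in monitored.values() if not up)
    return down_weight >= threshold

links = {"xe-5/0/0": (255, True), "xe-5/1/0": (255, True),
         "xe-17/0/0": (255, True), "xe-17/1/0": (255, True)}
print(redundancy_group_fails_over(links))   # False: all links up

links["xe-5/0/0"] = (255, False)
print(redundancy_group_fails_over(links))   # True: one weight-255 link down
```

Lower weights (for example, 128 per link) would require two monitored links to fail
before the group's threshold is reached.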
{primary:node0}[edit]
user@host# set interfaces xe-5/1/0 gigether-options redundant-parent reth1
user@host# set interfaces xe-17/1/0 gigether-options redundant-parent reth1
user@host# set interfaces xe-5/0/0 gigether-options redundant-parent reth0
user@host# set interfaces xe-17/0/0 gigether-options redundant-parent reth0
user@host# set interfaces reth0 redundant-ether-options redundancy-group 1
user@host# set interfaces reth0 unit 0 family inet address 10.1.1.60/16
user@host# set interfaces reth1 redundant-ether-options redundancy-group 1
user@host# set interfaces reth1 unit 0 family inet address 10.2.1.60/16
{primary:node0}[edit]
user@host# set interfaces st0 unit 0 multipoint family inet address 10.10.1.1/30
user@host# set security ike policy preShared mode main
user@host# set security ike policy preShared proposal-set standard
user@host# set security ike policy preShared pre-shared-key ascii-text "$ABC123" ## encrypted password
user@host# set security ike gateway SRX1500-1 ike-policy preShared
user@host# set security ike gateway SRX1500-1 address 10.1.1.90
user@host# set security ike gateway SRX1500-1 external-interface reth0.0
user@host# set security ipsec policy std proposal-set standard
user@host# set security ipsec vpn SRX1500-1 bind-interface st0.0
user@host# set security ipsec vpn SRX1500-1 vpn-monitor optimized
user@host# set security ipsec vpn SRX1500-1 ike gateway SRX1500-1
user@host# set security ipsec vpn SRX1500-1 ike ipsec-policy std
user@host# set security ipsec vpn SRX1500-1 establish-tunnels immediately
{primary:node0}[edit]
user@host# set routing-options static route 0.0.0.0/0 next-hop 10.2.1.1
user@host# set routing-options static route 10.3.0.0/16 next-hop 10.10.1.2
{primary:node0}[edit]
user@host# set security zones security-zone untrust host-inbound-traffic
system-services all
user@host# set security zones security-zone untrust host-inbound-traffic protocols
all
user@host# set security zones security-zone untrust interfaces reth1.0
user@host# set security zones security-zone trust host-inbound-traffic
system-services all
user@host# set security zones security-zone trust host-inbound-traffic protocols
all
user@host# set security zones security-zone trust interfaces reth0.0
user@host# set security zones security-zone vpn host-inbound-traffic
system-services all
user@host# set security zones security-zone vpn host-inbound-traffic protocols all
user@host# set security zones security-zone vpn interfaces st0.0
{primary:node0}[edit]
user@host# set security policies from-zone trust to-zone untrust policy ANY match
source-address any
user@host# set security policies from-zone trust to-zone untrust policy ANY match
destination-address any
user@host# set security policies from-zone trust to-zone untrust policy ANY match
application any
user@host# set security policies from-zone trust to-zone untrust policy ANY then permit
user@host# set security policies from-zone trust to-zone vpn policy vpn-any match
source-address any
user@host# set security policies from-zone trust to-zone vpn policy vpn-any match
destination-address any
user@host# set security policies from-zone trust to-zone vpn policy vpn-any match
application any
user@host# set security policies from-zone trust to-zone vpn policy vpn-any then
permit
Results From operational mode, confirm your configuration by entering the show configuration
command. If the output does not display the intended configuration, repeat the
configuration instructions in this example to correct it.
For brevity, this show command output includes only the configuration that is relevant
to this example. Any other configuration on the system has been replaced with ellipses
(...).
unit 0 {
family inet {
address 10.2.1.60/16;
}
}
}
st0 {
unit 0 {
multipoint;
family inet {
address 10.10.1.1/30;
}
}
}
}
routing-options {
    static {
        route 0.0.0.0/0 {
            next-hop 10.2.1.1;
        }
        route 10.3.0.0/16 {
            next-hop 10.10.1.2;
        }
    }
}
security {
    zones {
        security-zone trust {
            host-inbound-traffic {
                system-services {
                    all;
                }
                protocols {
                    all;
                }
            }
            interfaces {
                reth0.0;
            }
        }
        security-zone untrust {
            host-inbound-traffic {
                system-services {
                    all;
                }
                protocols {
                    all;
                }
            }
            interfaces {
                reth1.0;
            }
        }
        security-zone vpn {
            host-inbound-traffic {
                system-services {
                    all;
                }
                protocols {
                    all;
                }
            }
            interfaces {
                st0.0;
            }
        }
    }
    policies {
        from-zone trust to-zone untrust {
            policy ANY {
                match {
                    source-address any;
                    destination-address any;
                    application any;
                }
                then {
                    permit;
                }
            }
        }
        from-zone trust to-zone vpn {
            policy vpn-any {
                match {
                    source-address any;
                    destination-address any;
                    application any;
                }
                then {
                    permit;
                }
            }
        }
    }
}
If you are done configuring the device, enter commit from configuration mode.
Verification
Confirm that the configuration is working properly.
Purpose Verify the chassis cluster status, failover status, and redundancy group information.
Action From operational mode, enter the show chassis cluster status command.
{primary:node0}
user@host> show chassis cluster status
Cluster ID: 1
Node Priority Status Preempt Manual failover
Action From operational mode, enter the show chassis cluster interfaces command.
{primary:node0}
user@host> show chassis cluster interfaces
Control link name: fxp1
Redundant-ethernet Information:
Name Status Redundancy-group
reth0 Up 1
reth1 Up 1
Interface Monitoring:
Interface Weight Status Redundancy-group
xe-5/0/0 255 Up 1
xe-5/1/0 255 Up 1
xe-17/0/0 255 Up 1
xe-17/1/0 255 Up 1
Purpose Verify information about chassis cluster services and control link statistics (heartbeats
sent and received), fabric link statistics (probes sent and received), and the number of
RTOs sent and received for services.
Action From operational mode, enter the show chassis cluster statistics command.
{primary:node0}
user@host> show chassis cluster statistics
Purpose Verify information about chassis cluster control plane statistics (heartbeats sent and
received) and the fabric link statistics (probes sent and received).
Action From operational mode, enter the show chassis cluster control-plane statistics command.
{primary:node0}
user@host> show chassis cluster control-plane statistics
Purpose Verify information about the number of RTOs sent and received for services.
Action From operational mode, enter the show chassis cluster data-plane statistics command.
{primary:node0}
user@host> show chassis cluster data-plane statistics
Services Synchronized:
Service name RTOs sent RTOs received
Translation context 0 0
Incoming NAT 0 0
Resource manager 6 0
Session create 161 0
Session close 148 0
Session change 0 0
Gate create 0 0
Session ageout refresh requests 0 0
Session ageout refresh replies 0 0
IPSec VPN 0 0
Firewall user authentication 0 0
MGCP ALG 0 0
H323 ALG 0 0
SIP ALG 0 0
SCCP ALG 0 0
PPTP ALG 0 0
RPC ALG 0 0
RTSP ALG 0 0
RAS ALG 0 0
MAC address learning 0 0
GPRS GTP 0 0
Purpose Verify the state and priority of both nodes in a cluster and information about whether
the primary node has been preempted or whether there has been a manual failover.
Action From operational mode, enter the show chassis cluster status redundancy-group command.
{primary:node0}
user@host> show chassis cluster status redundancy-group 1
Cluster ID: 1
Node Priority Status Preempt Manual failover
Purpose Use these logs to identify any chassis cluster issues. You must review these logs on
both nodes.
Heartbeat Threshold: 3
Node 0: redundancy group 0 priority 254, redundancy group 1 priority 254
Node 1: redundancy group 0 priority 1, redundancy group 1 priority 1
• Select xe-5/1/0.
• Click Apply.
• Select xe-17/1/0.
• Click Apply.
• Select xe-5/0/0.
• Click Apply.
• Select xe-17/0/0.
• Click Apply.
• Click Add.
10. Click OK to check your configuration and save it as a candidate configuration, then
click Commit Options>Commit.
Multicast routing support across nodes in a chassis cluster allows multicast protocols,
such as Protocol Independent Multicast (PIM) versions 1 and 2, Internet Group
Management Protocol (IGMP), Session Announcement Protocol (SAP), and Distance
Vector Multicast Routing Protocol (DVMRP), to send traffic across interfaces in the
cluster. Note, however, that the multicast protocols should not be enabled on the chassis
management interface (fxp0) or on the fabric interfaces (fab0 and fab1). Multicast
sessions are synched across the cluster and maintained during redundant group failovers.
During failover, as with other types of traffic, there might be some multicast packet loss.
Multicast data forwarding in a chassis cluster uses the incoming interface to determine
whether or not the session remains active. Packets are forwarded to the peer node if a
leaf session’s outgoing interface is on the peer instead of on the incoming interface’s
node. Multicast routing on a chassis cluster supports tunnels for both incoming and
outgoing interfaces.
Multicast traffic has an upstream (toward source) and downstream (toward subscribers)
direction in traffic flows. The devices replicate (fanout) a single multicast packet to
multiple networks that contain subscribers. In the chassis cluster environment, multicast
packet fanouts can be active on either node.
If the incoming interface is active on the current node and backup on the peer node, then
the session is active on the current node and backup on the peer node.
A PIM session encapsulates multicast data into a PIM unicast packet. A PIM session
creates the following sessions:
• Control session
• Data session
The data session saves the control session ID. The control session and the data session
are closed independently. The incoming interface is used to determine whether the PIM
session is active or not. If the outgoing interface is active on the peer node, packets are
transferred to the peer node for transmission.
In PIM sessions, the control session is synchronized to the backup node, and then the
data session is synchronized.
In multicast sessions, the template session is synchronized to the peer node, then all the
leaf sessions are synchronized, and finally the template session is synchronized again.
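The synchronization order described above (template first, then each leaf session, then
the template again) can be sketched as follows. This is illustrative pseudologic only,
not Junos source code:

```python
def sync_multicast_session(template, leaves, sync):
    # Replicate a multicast session to the peer node in the documented
    # order: template first, then each leaf session, then the template
    # again so the peer ends with a consistent view of the fanout.
    order = []
    order.append(sync(template))
    for leaf in leaves:
        order.append(sync(leaf))
    order.append(sync(template))
    return order

# Using an identity function as a stand-in for the real sync operation.
print(sync_multicast_session("template", ["leaf1", "leaf2"], lambda s: s))
# ['template', 'leaf1', 'leaf2', 'template']
```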
In this case, chassis cluster makes use of its asymmetric routing capability (see
Figure 57 on page 345). Traffic received by a node is matched against that node’s session
table. The result of this lookup determines whether the node processes the
packet or forwards it to the other node over the fabric link. Sessions are anchored on the
egress node for the first packet that created the session. If traffic is received on the node
in which the session is not anchored, those packets are forwarded over the fabric link to
the node where the session is anchored.
NOTE: The anchor node for the session can change if there are changes in
routing during the session.
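The lookup-and-forward decision described above can be sketched as follows. The session
table, flow tuple, and return strings are hypothetical; they only illustrate the
anchoring logic:

```python
# Hypothetical session table mapping a flow 5-tuple to its anchor node.
session_table = {
    ("10.16.8.10", "203.0.113.5", 33512, 443, "tcp"): 0,
}

def handle_packet(receiving_node, flow):
    # Match the packet against the session table. Process it locally if the
    # session is anchored on this node; otherwise hand it to the peer node
    # over the fabric link, where the session is anchored.
    anchor = session_table.get(flow)
    if anchor is None:
        return "first packet: create session, anchor on egress node"
    if anchor == receiving_node:
        return "process locally"
    return "forward over fabric link to node %d" % anchor

flow = ("10.16.8.10", "203.0.113.5", 33512, 443, "tcp")
print(handle_packet(0, flow))  # process locally
print(handle_packet(1, flow))  # forward over fabric link to node 0
```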
In this scenario, two Internet connections are used, with one being preferred. The
connection to the trust zone is done by using a redundant Ethernet interface to provide
LAN redundancy for the devices in the trust zone. This scenario describes two failover
cases in which sessions originate in the trust zone with a destination of the Internet
(untrust zone).
• Understanding Failures in the Trust Zone Redundant Ethernet Interface on page 345
• Understanding Failures in the Untrust Zone Interfaces on page 345
A failure in interface ge-0/0/1 triggers a failover of the redundancy group, causing interface
ge-7/0/1 in node 1 to become active. After the failover, traffic arrives at node 1. After
session lookup, the traffic is sent to node 0 because the session is active on this node.
Node 0 then processes the traffic and forwards it to the Internet. The return traffic follows
a similar process. The traffic arrives at node 0 and gets processed for security
purposes—for example, antispam scanning, antivirus scanning, and application of security
policies—on node 0 because the session is anchored to node 0. The packet is then sent
to node 1 through the fabric interface for egress processing and eventual transmission
out of node 1 through interface ge-7/0/1.
After the failure, sessions in node 0 become inactive, and the passive sessions in node 1 become
active. Traffic arriving from the trust zone is still received on interface ge-0/0/1, but is
forwarded to node 1 for processing. After traffic is processed in node 1, it is forwarded to
the Internet through interface ge-7/0/0.
In this chassis cluster configuration, redundancy group 1 is used to control the redundant
Ethernet interface connected to the trust zone. As configured in this scenario, redundancy
group 1 fails over only if interface ge-0/0/1 or ge-7/0/1 fails, but not if the interfaces
connected to the Internet fail. Optionally, the configuration could be modified to permit
redundancy group 1 to monitor all interfaces connected to the Internet and fail over if an
Internet link were to fail. So, for example, the configuration can allow redundancy group
1 to monitor ge-0/0/0 and make ge-7/0/1 active for reth0 if the ge-0/0/0 Internet link
fails. (This option is not described in the following configuration examples.)
This example shows how to configure a chassis cluster pair of devices to allow asymmetric
routing. Configuring asymmetric routing for a chassis cluster allows traffic received on
either device to be processed seamlessly.
Requirements
Before you begin:
1. Physically connect a pair of devices together, ensuring that they are the same models.
This example uses a pair of SRX1500 devices.
a. To create the fabric link, connect a Gigabit Ethernet interface on one device to
another Gigabit Ethernet interface on the other device.
b. To create the control link, connect the control port of the two SRX1500 devices.
2. Connect to one of the devices using the console port. (This is the node that forms the
cluster.)
Overview
In this example, a chassis cluster provides asymmetric routing. As illustrated in
Figure 58 on page 347, two Internet connections are used, with one being preferred. The
connection to the trust zone is provided by a redundant Ethernet interface to provide
LAN redundancy for the devices in the trust zone.
In this example, you configure group (applying the configuration with the apply-groups
command) and chassis cluster information. Then you configure security zones and security
policies. See Table 36 on page 347 through Table 39 on page 349.
Heartbeat threshold: 3
Interface monitoring:
• ge-0/0/3
• ge-7/0/3
ge-7/0/1: unit 0, address 10.2.1.233/24
ge-0/0/3 and ge-7/0/3: member interfaces of reth0
reth0: unit 0, address 10.16.8.1/24
untrust zone: the ge-0/0/1 and ge-7/0/1 interfaces are bound to this zone.
Policy ANY permits traffic from the trust zone to the untrust zone.
• Match criteria: source-address any, destination-address any, application any
• Action: permit
Configuration
CLI Quick Configuration
To quickly configure this example, copy the following commands, paste them into a text
file, remove any line breaks, change any details necessary to match your network
configuration, copy and paste the commands into the CLI at the [edit] hierarchy level,
and then enter commit from configuration mode.
{primary:node0}[edit]
set groups node0 system host-name srxseries-1
set groups node0 interfaces fxp0 unit 0 family inet address 192.168.100.50/24
set groups node1 system host-name srxseries-2
set groups node1 interfaces fxp0 unit 0 family inet address 192.168.100.51/24
set apply-groups "${node}"
set interfaces fab0 fabric-options member-interfaces ge-0/0/7
set interfaces fab1 fabric-options member-interfaces ge-7/0/7
set chassis cluster reth-count 1
set chassis cluster heartbeat-interval 1000
set chassis cluster heartbeat-threshold 3
set chassis cluster redundancy-group 1 node 0 priority 100
set chassis cluster redundancy-group 1 node 1 priority 1
set chassis cluster redundancy-group 1 interface-monitor ge-0/0/3 weight 255
set chassis cluster redundancy-group 1 interface-monitor ge-7/0/3 weight 255
set interfaces ge-0/0/1 unit 0 family inet address 10.4.0.202/24
set interfaces ge-0/0/3 gigether-options redundant-parent reth0
set interfaces ge-7/0/1 unit 0 family inet address 10.2.1.233/24
set interfaces ge-7/0/3 gigether-options redundant-parent reth0
set interfaces reth0 unit 0 family inet address 10.16.8.1/24
set routing-options static route 0.0.0.0/0 qualified-next-hop 10.4.0.1 metric 10
set routing-options static route 0.0.0.0/0 qualified-next-hop 10.2.1.1 metric 100
set security zones security-zone untrust interfaces ge-0/0/1.0
set security zones security-zone untrust interfaces ge-7/0/1.0
set security zones security-zone trust interfaces reth0.0
set security policies from-zone trust to-zone untrust policy ANY match source-address
any
set security policies from-zone trust to-zone untrust policy ANY match destination-address
any
set security policies from-zone trust to-zone untrust policy ANY match application any
set security policies from-zone trust to-zone untrust policy ANY then permit
{primary:node0}[edit]
user@host# set groups node0 system host-name srxseries-1
user@host# set groups node0 interfaces fxp0 unit 0 family inet address
192.168.100.50/24
user@host# set groups node1 system host-name srxseries-2
user@host# set groups node1 interfaces fxp0 unit 0 family inet address
192.168.100.51/24
user@host# set apply-groups "${node}"
{primary:node0}[edit]
user@host# set interfaces fab0 fabric-options member-interfaces ge-0/0/7
user@host# set interfaces fab1 fabric-options member-interfaces ge-7/0/7
{primary:node0}[edit]
user@host# set chassis cluster reth-count 1
{primary:node0}[edit]
user@host# set chassis cluster heartbeat-interval 1000
user@host# set chassis cluster heartbeat-threshold 3
user@host# set chassis cluster node 0
user@host# set chassis cluster node 1
user@host# set chassis cluster redundancy-group 1 node 0 priority 100
user@host# set chassis cluster redundancy-group 1 node 1 priority 1
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-0/0/3
weight 255
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-7/0/3
weight 255
{primary:node0}[edit]
user@host# set interfaces ge-0/0/1 unit 0 family inet address 10.4.0.202/24
user@host# set interfaces ge-0/0/3 gigether-options redundant-parent reth0
user@host# set interfaces ge-7/0/1 unit 0 family inet address 10.2.1.233/24
user@host# set interfaces ge-7/0/3 gigether-options redundant-parent reth0
user@host# set interfaces reth0 unit 0 family inet address 10.16.8.1/24
6. Configure the static routes (one to each ISP, with preferred route through ge-0/0/1).
{primary:node0}[edit]
user@host# set routing-options static route 0.0.0.0/0 qualified-next-hop 10.4.0.1
metric 10
user@host# set routing-options static route 0.0.0.0/0 qualified-next-hop 10.2.1.1
metric 100
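These two static routes use qualified next hops with different metrics, so traffic
prefers the ISP reachable through 10.4.0.1 (metric 10) and falls back to 10.2.1.1
(metric 100) only if the preferred next hop is withdrawn. A minimal sketch of that
selection (illustrative only, not the Junos routing implementation):

```python
def preferred_next_hop(routes):
    # Pick the qualified next hop with the lowest metric; the higher-metric
    # hop is used only when the preferred one has been withdrawn.
    return min(routes, key=lambda r: r[1])[0]

isp_routes = [("10.4.0.1", 10), ("10.2.1.1", 100)]
print(preferred_next_hop(isp_routes))      # 10.4.0.1
print(preferred_next_hop(isp_routes[1:]))  # 10.2.1.1, after the primary is withdrawn
```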
7. Configure the security zones.
{primary:node0}[edit]
user@host# set security zones security-zone untrust interfaces ge-0/0/1.0
user@host# set security zones security-zone untrust interfaces ge-7/0/1.0
user@host# set security zones security-zone trust interfaces reth0.0
user@host# set security policies from-zone trust to-zone untrust policy ANY match
source-address any
user@host# set security policies from-zone trust to-zone untrust policy ANY match
destination-address any
user@host# set security policies from-zone trust to-zone untrust policy ANY match
application any
user@host# set security policies from-zone trust to-zone untrust policy ANY then
permit
Results From operational mode, confirm your configuration by entering the show configuration
command. If the output does not display the intended configuration, repeat the
configuration instructions in this example to correct it.
For brevity, this show command output includes only the configuration that is relevant
to this example. Any other configuration on the system has been replaced with ellipses
(...).
}
}
apply-groups "${node}";
chassis {
cluster {
reth-count 1;
heartbeat-interval 1000;
heartbeat-threshold 3;
redundancy-group 1 {
node 0 priority 100;
node 1 priority 1;
interface-monitor {
ge-0/0/3 weight 255;
ge-7/0/3 weight 255;
}
}
}
}
interfaces {
    ge-0/0/3 {
        gigether-options {
            redundant-parent reth0;
        }
    }
    ge-7/0/3 {
        gigether-options {
            redundant-parent reth0;
        }
    }
    ge-0/0/1 {
        unit 0 {
            family inet {
                address 10.4.0.202/24;
            }
        }
    }
    ge-7/0/1 {
        unit 0 {
            family inet {
                address 10.2.1.233/24;
            }
        }
    }
    fab0 {
        fabric-options {
            member-interfaces {
                ge-0/0/7;
            }
        }
    }
    fab1 {
        fabric-options {
            member-interfaces {
                ge-7/0/7;
            }
        }
    }
    reth0 {
        redundant-ether-options {
            redundancy-group 1;
        }
        unit 0 {
            family inet {
                address 10.16.8.1/24;
            }
        }
    }
}
...
routing-options {
    static {
        route 0.0.0.0/0 {
            qualified-next-hop 10.4.0.1 {
                metric 10;
            }
            qualified-next-hop 10.2.1.1 {
                metric 100;
            }
        }
    }
}
security {
zones {
security-zone untrust {
interfaces {
ge-0/0/1.0;
ge-7/0/1.0;
}
}
security-zone trust {
interfaces {
reth0.0;
}
}
}
policies {
from-zone trust to-zone untrust {
policy ANY {
match {
source-address any;
destination-address any;
application any;
}
then {
permit;
}
}
}
}
}
If you are done configuring the device, enter commit from configuration mode.
Verification
Confirm that the configuration is working properly.
Purpose Verify the chassis cluster status, failover status, and redundancy group information.
Action From operational mode, enter the show chassis cluster status command.
{primary:node0}
user@host> show chassis cluster status
Cluster ID: 1
Node Priority Status Preempt Manual failover
Action From operational mode, enter the show chassis cluster interfaces command.
{primary:node0}
user@host> show chassis cluster interfaces
Control link name: fxp1
Redundant-ethernet Information:
Name Status Redundancy-group
reth0 Up 1
Interface Monitoring:
Interface Weight Status Redundancy-group
ge-0/0/3 255 Up 1
ge-7/0/3 255 Up 1
Purpose Verify information about the statistics of the different objects being synchronized, the
fabric and control interface hellos, and the status of the monitored interfaces in the
cluster.
Action From operational mode, enter the show chassis cluster statistics command.
{primary:node0}
user@host> show chassis cluster statistics
Purpose Verify information about chassis cluster control plane statistics (heartbeats sent and
received) and the fabric link statistics (probes sent and received).
Action From operational mode, enter the show chassis cluster control-plane statistics command.
{primary:node0}
user@host> show chassis cluster control-plane statistics
Purpose Verify information about the number of RTOs sent and received for services.
Action From operational mode, enter the show chassis cluster data-plane statistics command.
{primary:node0}
user@host> show chassis cluster data-plane statistics
Services Synchronized:
Service name RTOs sent RTOs received
Translation context 0 0
Incoming NAT 0 0
Resource manager 6 0
Session create 160 0
Session close 147 0
Session change 0 0
Gate create 0 0
Session ageout refresh requests 0 0
Session ageout refresh replies 0 0
IPSec VPN 0 0
Firewall user authentication 0 0
MGCP ALG 0 0
H323 ALG 0 0
SIP ALG 0 0
SCCP ALG 0 0
PPTP ALG 0 0
RPC ALG 0 0
RTSP ALG 0 0
RAS ALG 0 0
MAC address learning 0 0
GPRS GTP 0 0
Purpose Verify the state and priority of both nodes in a cluster and information about whether
the primary node has been preempted or whether there has been a manual failover.
Action From operational mode, enter the show chassis cluster status redundancy-group command.
{primary:node0}
user@host> show chassis cluster status redundancy-group 1
Cluster ID: 1
Node Priority Status Preempt Manual failover
Purpose Use these logs to identify any chassis cluster issues. You must review these logs on
both nodes.
Understanding Layer 2 Ethernet Switching Capability in a Chassis Cluster on SRX Series Devices
Ethernet ports support various Layer 2 features such as spanning-tree protocols (STPs),
IEEE 802.1x, Link Layer Discovery Protocol (LLDP), and Multiple VLAN Registration
Protocol (MVRP). With the extension of Layer 2 switching capability to devices in a chassis
cluster, you can use Ethernet switching features on both nodes of a chassis cluster. You
can configure the Ethernet ports on either node for family Ethernet switching. You can
also configure a Layer 2 VLAN domain with member ports from both nodes and the Layer
2 switching protocols on both devices.
Figure 59 on page 360 shows the Layer 2 switching across chassis cluster nodes.
[Figure 59: Layer 2 Ethernet switching across chassis cluster nodes. Two SRX Series
devices connect to the Internet and to each other over the control (fxp), fabric (fab),
and Layer 2 switching fabric (swfab) links; VLANs A, B, and C span ports on both nodes.
(Image g030695.)]
To ensure that Layer 2 switching works seamlessly across chassis cluster nodes, a
dedicated physical link connecting the nodes is required. This type of link is called a
switching fabric interface. Its purpose is to carry Layer 2 traffic between nodes.
NOTE: The Q-in-Q feature is not supported in chassis cluster mode because
of a chip limitation that affects swfab interface configuration on Broadcom chipsets.
Related • Example: Configuring Switch Fabric Interfaces to Enable Switching in Chassis Cluster
Documentation Mode on a Security Device (CLI) on page 361
• Example: Configuring IRB and VLAN with Members Across Two Nodes on a Security
Device (CLI Procedure) on page 363
• Example: Configuring Aggregated Ethernet Device with LAG and LACP on a Security
Device (CLI Procedure)
This example shows how to configure switching fabric interfaces to enable switching in
chassis cluster mode.
Requirements
The physical link used as the switch fabric member must be directly connected to the
device. Switching supported ports must be used for switching fabric interfaces. See
Ethernet Ports Switching Overview for Security Devices for switching supported ports.
Before you begin, see “Example: Configuring the Chassis Cluster Fabric Interfaces” on
page 109.
Overview
In this example, pseudointerfaces swfab0 and swfab1 are created for Layer 2 fabric
functionality. You also configure dedicated Ethernet ports on each side of the node to
be associated with the swfab interfaces.
Configuration
CLI Quick Configuration
To quickly configure this section of the example, copy the following commands, paste
them into a text file, remove any line breaks, change any details necessary to match your
network configuration, copy and paste the commands into the CLI at the [edit] hierarchy
level, and then enter commit from configuration mode.
{primary:node0} [edit]
user@host# set interfaces swfab0 fabric-options member-interfaces ge-0/0/9
user@host# set interfaces swfab0 fabric-options member-interfaces ge-0/0/10
user@host# set interfaces swfab1 fabric-options member-interfaces ge-7/0/9
user@host# set interfaces swfab1 fabric-options member-interfaces ge-7/0/10
{primary:node0} [edit]
user@host# commit
Results From configuration mode, confirm your configuration by entering the show interfaces
swfab0 command. If the output does not display the intended configuration, repeat the
configuration instructions in this example to correct the configuration.
[edit]
user@host# show interfaces swfab0
fabric-options {
member-interfaces {
ge-0/0/9;
ge-0/0/10;
}
}
Verification
Purpose Verify that you are able to configure multiple ports as members of switching fabric ports.
Action From configuration mode, enter the show interfaces swfab0 command to view the
configured interfaces for each port.
From configuration mode, enter the show chassis cluster ethernet-switching interfaces
command to view the appropriate member interfaces.
Example: Configuring IRB and VLAN with Members Across Two Nodes on a Security
Device (CLI Procedure)
Requirements
No special configuration beyond device initialization is required before configuring this
feature.
Overview
This example shows the configuration of integrated routing and bridging (IRB) and
configuration of a VLAN with members across node 0 and node 1.
Configuration
CLI Quick Configuration
To quickly configure this section of the example, copy the following commands, paste
them into a text file, remove any line breaks, change any details necessary to match your
network configuration, copy and paste the commands into the CLI at the [edit] hierarchy
level, and then enter commit from configuration mode.
{primary:node0} [edit]
user@host# set interfaces ge-0/0/3 unit 0 family ethernet-switching interface-mode
access
user@host# set interfaces ge-0/0/4 unit 0 family ethernet-switching interface-mode
access
{primary:node0} [edit]
user@host# set interfaces ge-7/0/5 unit 0 family ethernet-switching interface-mode
trunk
{primary:node0} [edit]
user@host# set vlans vlan100 vlan-id 100
{primary:node0} [edit]
user@host# set interfaces ge-0/0/3 unit 0 family ethernet-switching vlan members
vlan100
user@host# set interfaces ge-0/0/4 unit 0 family ethernet-switching vlan members
vlan100
user@host# set interfaces ge-7/0/5 unit 0 family ethernet-switching vlan members
vlan100
user@host# set interfaces irb unit 100 family inet address 192.0.2.100/24
[edit]
user@host# commit
Results From configuration mode, confirm your configuration by entering the show vlans and
show interfaces commands. If the output does not display the intended configuration,
repeat the configuration instructions in this example to correct the configuration.
[edit]
user@host# show vlans
vlan100 {
vlan-id 100;
l3-interface irb.100;
}
[edit]
user@host# show interfaces
ge-0/0/3 {
unit 0 {
family ethernet-switching {
interface-mode access;
vlan {
members vlan100;
}
}
}
}
ge-0/0/4 {
unit 0 {
family ethernet-switching {
interface-mode access;
vlan {
members vlan100;
}
}
}
}
ge-7/0/5 {
unit 0 {
family ethernet-switching {
interface-mode trunk;
vlan {
members vlan100;
}
}
}
}
irb {
unit 100 {
family inet {
address 192.0.2.100/24;
}
}
}
Verification
Purpose Verify that the configurations of VLAN and IRB are working properly.
Action From operational mode, enter the show interfaces terse ge-0/0/3 command to view the
node 0 interface.
From operational mode, enter the show interfaces terse ge-0/0/4 command to view the
node 0 interface.
From operational mode, enter the show interfaces terse ge-7/0/5 command to view the
node 1 interface.
From operational mode, enter the show vlans command to view the VLAN interface.
From operational mode, enter the show ethernet-switching interface command to view
the information about Ethernet switching interfaces.
Meaning The output shows that the VLAN and IRB are configured and working as expected.
MACsec allows you to secure an Ethernet link for almost all traffic, including frames from
the Link Layer Discovery Protocol (LLDP), Link Aggregation Control Protocol (LACP),
Dynamic Host Configuration Protocol (DHCP), Address Resolution Protocol (ARP), and
other protocols that are not typically secured on an Ethernet link because of limitations
with other security solutions. MACsec can be used in combination with other security
protocols such as IP Security (IPsec) and Secure Sockets Layer (SSL) to provide
end-to-end network security.
Starting in Junos OS Release 17.4R1, MACsec is supported on HA control and fabric ports
of SRX4600 devices in chassis cluster mode.
Once MACsec is enabled on a point-to-point Ethernet link, all traffic traversing the link
is MACsec-secured through the use of data integrity checks and, if configured, encryption.
MACsec appends an 8-byte header and a 16-byte tail to all Ethernet frames traversing
the MACsec-secured point-to-point Ethernet link, and the header and tail are checked
by the receiving interface to ensure that the data was not compromised while traversing
the link. If the data integrity check detects anything irregular about the traffic, the traffic
is dropped.
MACsec can also be used to encrypt all traffic on the Ethernet link. The encryption used
by MACsec ensures that the data in the Ethernet frame cannot be viewed by anybody
monitoring traffic on the link.
By default, encryption is enabled for all traffic entering or leaving the interface when
MACsec is enabled using static CAK security mode.
When you enable MACsec using static CAK or dynamic security mode, you have to create
and configure a connectivity association. Two secure channels—one secure channel for
inbound traffic and another secure channel for outbound traffic—are automatically
created. The automatically-created secure channels do not have any user-configurable
parameters; all configuration is done in the connectivity association outside of the secure
channels.
When you enable MACsec using static CAK security mode, a pre-shared key is exchanged
between the devices on each end of the point-to-point Ethernet link to ensure link security.
You initially establish a MACsec-secured link using a pre-shared key when you are using
static CAK security mode to enable MACsec. A pre-shared key includes a connectivity
association key name (CKN) and its own connectivity association key (CAK). The CKN
and CAK are configured by the user in the connectivity association and must match on
both ends of the link to initially enable MACsec.
Once matching pre-shared keys are successfully exchanged, the MACsec Key Agreement
(MKA) protocol is enabled. The MKA protocol is responsible for maintaining MACsec on
the link, and decides which switch on the point-to-point link becomes the key server. The
key server then creates an SAK that is shared with the switch at the other end of the
point-to-point link only, and that SAK is used to secure all data traffic traversing the link.
The key server will continue to periodically create and share a randomly-created SAK
over the point-to-point link for as long as MACsec is enabled.
You enable MACsec using static CAK security mode by configuring a connectivity
association on both ends of the link. All configuration is done within the connectivity
association but outside of the secure channel. Two secure channels—one for inbound
traffic and one for outbound traffic—are automatically created when using static CAK
security mode. The automatically created secure channels do not have any
user-configurable parameters; all configuration is done in the connectivity association,
outside of the secure channels.
We recommend enabling MACsec using static CAK security mode. Static CAK security
mode ensures security by frequently refreshing to a new random security key and by only
sharing the security key between the two devices on the MACsec-secured point-to-point
link. Additionally, some optional MACsec features—replay protection, SCI tagging, and
the ability to exclude traffic from MACsec—are only available when you enable MACsec
using static CAK security mode.
MACsec Considerations
Spanning Tree Protocol frames of any type cannot currently be encrypted using MACsec.
The connectivity association can be defined globally, in a node-specific group, or in any
other configuration group, as long as it is visible to the MACsec interface configuration.
Starting in Junos OS Release 17.4R1, MACsec is supported on control and fabric ports of
SRX4600 devices in chassis cluster mode.
This topic shows how to configure MACsec on control and fabric ports of supported SRX
Series device in chassis cluster to secure point-to-point Ethernet links between the peer
devices in a cluster. Each point-to-point Ethernet link that you want to secure using
MACsec must be configured independently. You can enable MACsec on device-to-device
links using static connectivity association key (CAK) security mode.
The configuration steps for both processes are provided in this document.
1. If the chassis cluster is already up, disable it by using the set chassis cluster disable
command and reboot both nodes.
2. Configure MACsec on the control port with its attributes as described in the following
section, “Configuring Static CAK on the Chassis Cluster Control Port” on page 377.
Both nodes must be configured independently with identical configurations.
3. Enable the chassis cluster by using set chassis cluster cluster-id id on both of the nodes.
Reboot both nodes.
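The disable and enable steps above can be sketched with operational-mode commands as follows. This is a sketch: the cluster ID value 1 is illustrative, and the cluster-id command must be run on each node with its own node number.

```
{primary:node0}
user@host> set chassis cluster disable reboot

(configure MACsec on both nodes, then reenable clustering)

user@host> set chassis cluster cluster-id 1 node 0 reboot
```

On node 1, the reenable command would use node 1 instead of node 0.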
Control port states affect the integrity of a chassis cluster. Consider the following when
configuring MACsec on control ports:
• Any new MACsec chassis cluster port configurations or modifications to existing MACsec
chassis cluster port configurations will require the chassis cluster to be disabled. Once
disabled, you can apply the preceding configurations and reenable the chassis cluster.
NOTE: The ineligible timer is 300 seconds when MACsec on the chassis
cluster control port is enabled on SRX340 and SRX345 devices.
NOTE: If both control links fail, Junos OS changes the operating state of the
secondary node to ineligible for 180 seconds. When MACsec is enabled on
the control port, the ineligibility duration is 200 seconds for SRX4600 devices.
NOTE: For any change in the MACsec configurations of control ports, the
steps mentioned above must be repeated.
Configuring MACsec leads to link state changes that can affect traffic capability of the
link. When you configure fabric ports, keep the effective link state in mind. Incorrect
MACsec configuration on both ends of the fabric links can move the link to an ineligible
state. Note the following key points about configuring fabric links:
• Both ends of the links must be configured simultaneously when the chassis cluster is
formed.
• Incorrect configuration can lead to fabric failures and errors in fabric recovery logic.
NOTE: For SRX340 and SRX345 devices, ge-0/0/0 is a fabric port and
ge-0/0/1 is a control port for the chassis cluster and assigned as
cluster-control-port 0.
NOTE: For SRX4600 devices, dedicated control and fabric ports are available.
You can configure MACsec on the control ports (control port 0 [em0] and control
port 1 [em1]) and the fabric ports (fab0 and fab1) on SRX4600 devices.
1. Create a connectivity association. You can skip this step if you are configuring an
existing connectivity association.
2. Configure the MACsec security mode as static-cak for the connectivity association.
3. Create the preshared key by configuring the connectivity association key name (CKN)
and connectivity association key (CAK).
The CKN is a 64-digit hexadecimal number and the CAK is a 32-digit hexadecimal number.
The CKN and the CAK must match on both ends of a link to create a MACsec-secured link.
After the preshared keys are successfully exchanged and verified by both ends of the
link, the MACsec Key Agreement (MKA) protocol is enabled and manages the secure
link. The MKA protocol then elects one of the two directly-connected devices as the
key server. The key server then shares a random security key with the other device over
the MACsec-secured point-to-point link. The key server will continue to periodically
create and share a random security key with the other device over the MACsec-secured
point-to-point link as long as MACsec is enabled.
To configure a CKN of
11c1c1c11xxx012xx5xx8ef284aa23ff6729xx2e4xxx66e91fe34ba2cd9fe311 and CAK of
228xx255aa23xx6729xx664xxx66e91f on connectivity association ca1:
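A sketch of the corresponding configuration statements, following the pre-shared-key syntax shown later in this topic (the prompt and hierarchy are illustrative):

```
{primary:node0}[edit]
user@host# set security macsec connectivity-association ca1 pre-shared-key ckn 11c1c1c11xxx012xx5xx8ef284aa23ff6729xx2e4xxx66e91fe34ba2cd9fe311
user@host# set security macsec connectivity-association ca1 pre-shared-key cak 228xx255aa23xx6729xx664xxx66e91f
```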
Specifies the key server priority used by the MKA protocol to select the key server. The
device with the lower priority-number is selected as the key server.
If the key-server-priority is identical on both sides of the point-to-point link, the MKA
protocol selects the interface with the lower MAC address as the key server. Therefore,
if this statement is not configured in the connectivity associations at each end of a
MACsec-secured point-to-point link, the interface with the lower MAC address
becomes the key server.
To change the key server priority to 0 to increase the likelihood that the current device
is selected as the key server when MACsec is enabled on the interface using
connectivity association ca1:
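A sketch of the corresponding statement, assuming the standard Junos MACsec MKA hierarchy:

```
{primary:node0}[edit]
user@host# set security macsec connectivity-association ca1 mka key-server-priority 0
```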
To change the key server priority to 255 to decrease the likelihood that the current
device is selected as the key server in connectivity association ca1:
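A sketch of the corresponding statement, again assuming the standard Junos MACsec MKA hierarchy:

```
{primary:node0}[edit]
user@host# set security macsec connectivity-association ca1 mka key-server-priority 255
```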
The MKA transmit interval setting sets the frequency for how often the MKA protocol
data unit (PDU) is sent to the directly connected device to maintain MACsec
connectivity on the link. A lower interval increases bandwidth overhead on the link; a
higher interval optimizes MKA protocol communication.
The default interval is 2000 ms. We recommend increasing the interval to 6000 ms
in high-traffic load environments. The transmit interval settings must be identical on
both ends of the link when MACsec using static CAK security mode is enabled.
NOTE: Starting from Junos OS Release 17.4, for SRX340, SRX345, and
SRX4600, the default MKA transmit interval is 10000 ms on HA links.
For instance, if you wanted to increase the MKA transmit interval to 6000 milliseconds
when connectivity association ca1 is attached to an interface:
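A sketch of the corresponding statement (the transmit interval is specified in milliseconds; the hierarchy shown is an assumption based on the standard Junos MACsec MKA configuration):

```
{primary:node0}[edit]
user@host# set security macsec connectivity-association ca1 mka transmit-interval 6000
```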
By default, encryption is enabled for all traffic entering or leaving the interface when
MACsec is enabled using static CAK security mode.
When encryption is disabled, traffic is forwarded across the Ethernet link in clear text,
so anybody monitoring the link can view the unencrypted data in the Ethernet frames
traversing it. The MACsec header is still applied to the frame, however, and
all MACsec data integrity checks are run on both ends of the link to ensure the traffic
sent or received on the link has not been tampered with and does not represent a
security threat.
For instance, if you wanted to set the offset to 30 in the connectivity association
named ca1:
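A sketch of the corresponding statement (the offset statement path is an assumption based on the standard Junos MACsec hierarchy):

```
{primary:node0}[edit]
user@host# set security macsec connectivity-association ca1 offset 30
```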
The default offset is 0. All traffic in the connectivity association is encrypted when
encryption is enabled and an offset is not set.
When the offset is set to 30, the IPv4 header and the TCP/UDP header are unencrypted
while encrypting the rest of the traffic. When the offset is set to 50, the IPv6 header
and the TCP/UDP header are unencrypted while encrypting the rest of the traffic.
You would typically forward traffic with the first 30 or 50 octets unencrypted if a
feature needed to see the data in the octets to perform a function, but you otherwise
prefer to encrypt the remaining data in the frames traversing the link. Load balancing
features, in particular, typically need to see the IP and TCP/UDP headers in the first
30 or 50 octets to properly load balance traffic.
When replay protection is enabled, the receiving interface checks the ID number of
all packets that have traversed the MACsec-secured link. If a packet arrives out of
sequence and the difference between the packet numbers exceeds the replay
protection window size, the packet is dropped by the receiving interface. For instance,
if the replay protection window size is set to five and a packet assigned the ID of 1006
arrives on the receiving link immediately after the packet assigned the ID of 1000, the
packet that is assigned the ID of 1006 is dropped because it falls outside the
parameters of the replay protection window.
Replay protection should not be enabled in cases where packets are expected to
arrive out of order.
You can require that all packets arrive in order by setting the replay window size to 0.
To enable replay protection with a window size of five on connectivity association ca1:
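A sketch of the corresponding statement, assuming the standard Junos MACsec replay-protect syntax; setting replay-window-size to 0 instead would require all packets to arrive in order:

```
{primary:node0}[edit]
user@host# set security macsec connectivity-association ca1 replay-protect replay-window-size 5
```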
For instance, if you did not want Link Level Discovery Protocol (LLDP) to be secured
using MACsec:
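A sketch of the corresponding statement, assuming the standard Junos MACsec exclude-protocol syntax:

```
{primary:node0}[edit]
user@host# set security macsec connectivity-association ca1 exclude-protocol lldp
```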
When this option is enabled, MACsec is disabled for all packets of the specified
protocol—in this case, LLDP—that are sent or received on the link.
10. Create a connectivity association for enabling MACsec on a chassis cluster control
interface.
11. Create a connectivity association for enabling MACsec on a chassis cluster fabric
interface.
MACsec using static CAK security mode is not enabled until a connectivity association
on the opposite end of the link is also configured, and contains preshared keys that match
on both ends of the link.
1. Configure the MACsec security mode as static-cak for the connectivity association:
2. Create the preshared key by configuring the connectivity association key name (CKN).
3. Create the pre-shared key by configuring the connectivity association key (CAK).
1. Configure the MACsec security mode as static-cak for the connectivity association.
2. Create the preshared key by configuring the connectivity association key name (CKN).
3. Create the preshared key by configuring the connectivity association key (CAK).
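The fabric-port steps can be sketched as follows, assuming a second connectivity association named ca2 and the same pre-shared-key syntax used for the control port. The interface to which the association is attached is an assumption (the fabric child interface shown in the sample output later in this topic); the placeholder key values must be replaced with real hexadecimal keys:

```
{primary:node0}[edit]
user@host# set security macsec connectivity-association ca2 security-mode static-cak
user@host# set security macsec connectivity-association ca2 pre-shared-key ckn <64-digit-hex-ckn>
user@host# set security macsec connectivity-association ca2 pre-shared-key cak <32-digit-hex-cak>
user@host# set security macsec interfaces xe-1/1/6 connectivity-association ca2
```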
Configuring Static CAK on the Chassis Cluster Control Port of SRX4600 Device in Chassis
Cluster
Use this procedure to establish a CA over a chassis cluster control link on two SRX4600
devices.
1. Configure the MACsec security mode as static-cak for the connectivity association:
[edit]
user@host# set security macsec connectivity-association ca1 security-mode static-cak
2. Create the preshared key by configuring the connectivity association key name (CKN).
[edit]
user@host# set security macsec connectivity-association ca1 pre-shared-key ckn
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
3. Create the preshared key by configuring the connectivity association key (CAK).
[edit]
user@host# set security macsec connectivity-association ca1 pre-shared-key cak
"$9$XX.XXXrXX8XX69X0X1yrevXXX-Xb24oXhXrvX8dXwXgoaXji.Xfz7-XYg4XjHqmf5Xn6Xpu1XXjqmX3n/Xtu0IXhreX8XX"
[edit]
user@host# set security macsec cluster-control-port 0 connectivity-association ca1
user@host# set security macsec cluster-control-port 1 connectivity-association ca1
• Display the Status of Active MACsec Connections on the Device on page 379
• Display MACsec Key Agreement (MKA) Session Information on page 380
• Verifying That MACsec-Secured Traffic Is Traversing Through the Interface on page 380
• Verifying Chassis Cluster Ports Are Secured with MACsec Configuration on page 381
Action From operational mode, enter the show security macsec connections interface
interface-name command on one or both nodes of the chassis cluster setup.
{primary:node0}
user@host> show security macsec connections
Meaning The Interface name and CA name fields in the output show that the MACsec connectivity
association is operational on the interface em0. This output does not appear when the
connectivity association is not operational on the interface.
Purpose Display MACsec Key Agreement (MKA) session information for all interfaces.
Action From operational mode, enter the show security mka sessions command.
Action From operational mode, enter the show security macsec statistics command.
Validated bytes: 0
Decrypted bytes: 0
Meaning The Encrypted packets counter under the Secure Channel transmitted field is
incremented each time a packet that is secured and encrypted by MACsec is sent from
the interface.
The Accepted packets counter under the Secure Association received field is incremented
each time a packet that has passed the MACsec integrity check is received on the
interface. The Decrypted bytes counter under the Secure Association received output
is incremented each time an encrypted packet is received and decrypted.
Action From operational mode, enter the show chassis cluster interfaces command.
Control interfaces:
Index Interface Monitored-Status Internal-SA Security
0 em0 Up Disabled Secured
Fabric interfaces:
Name Child-interface Status Security
(Physical/Monitored)
fab0 xe-1/1/6 Up / Down Disabled
fab0
fab1 xe-8/1/6 Up / Down Disabled
fab1
Redundant-ethernet Information:
Name Status Redundancy-group
reth0 Up 1
reth1 Up 2
reth2 Down Not configured
reth3 Down Not configured
reth4 Down Not configured
reth5 Down Not configured
reth6 Down Not configured
reth7 Down Not configured
Redundant-pseudo-interface Information:
Name Status Redundancy-group
lo0 Up 0
Meaning The Security column under the Control interfaces output showing Secured for the em0
interface means that traffic sent from the em0 interface is secured and encrypted by MACsec.
You can also use the show chassis cluster status command to display the current status
of the chassis cluster.
Related • Understanding Media Access Control Security (MACsec) for SRX Series on page 367
Documentation
• macsec on page 450
Devices in a chassis cluster can be upgraded one at a time; on some models, one device
after the other can be upgraded using failover and an in-service software upgrade (ISSU)
to reduce the operational impact of the upgrade.
4. Repeat Step 2.
Related • Upgrading Both Devices in a Chassis Cluster Using an ISSU on page 398
Documentation
• Upgrading Devices in a Chassis Cluster Using ICU on page 387
For SRX300, SRX320, SRX340, SRX345, and SRX550M devices, the devices in a chassis
cluster can be upgraded with a minimal service disruption of approximately 30 seconds
using ICU with the no-sync option. The chassis cluster ICU feature allows both devices
in a cluster to be upgraded from supported Junos OS versions.
You must use the in-band cluster upgrade (ICU) commands on SRX1500 devices to
upgrade the following Junos OS releases:
For SRX300, SRX320, SRX340, SRX345, and SRX550M devices, the impact on traffic is
as follows:
• ICU is available with the no-sync option only for SRX300, SRX320, SRX340, SRX345,
and SRX550M devices.
• Before starting ICU, you should ensure that sufficient disk space is available. See
“Upgrading ICU Using a Build Available Locally on a Primary Node in a Chassis Cluster”
on page 389 and “Upgrading ICU Using a Build Available on an FTP Server” on page 389.
• For SRX300, SRX320, SRX340, SRX345, and SRX550M devices, this feature cannot
be used to downgrade to a build earlier than Junos OS Release 11.2R2.
For SRX1500 devices, this feature cannot be used to downgrade to a build earlier than
Junos OS 15.1X49-D50.
The upgrade is initiated with the Junos OS build locally available on the primary node of
the device or on an FTP server.
NOTE:
• The primary node, RG0, changes to the secondary node after an ICU
upgrade.
• During ICU, the chassis cluster redundancy groups are failed over to the
primary node to change the cluster to active/passive mode.
• ICU states can be checked from the syslog or with the console/terminal
logs.
See Also • Upgrading ICU Using a Build Available Locally on a Primary Node in a Chassis Cluster
on page 389
Upgrading ICU Using a Build Available Locally on a Primary Node in a Chassis Cluster
NOTE: Ensure that sufficient disk space is available for the Junos OS package
in the /var/tmp location in the secondary node of the cluster.
To upgrade ICU using a build locally available on the primary node of a cluster:
1. Copy the Junos OS package build to the primary node at any location, or mount a
network file server folder containing the Junos OS build.
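The next step would then start ICU using the local build. This is a sketch: the image path and filename are illustrative placeholders:

```
{primary:node0}
user@root> request system software in-service-upgrade /var/tmp/<junos-image>.tgz no-sync
```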
See Also • Upgrading ICU Using a Build Available on an FTP Server on page 389
NOTE: Ensure that sufficient disk space is available for the Junos OS package
in the /var/tmp location in both the primary and the secondary nodes of the
cluster.
2. (SRX300, SRX320, SRX340, SRX345, and SRX550M only) Start ICU by entering the
following command:
user@root> request system software in-service-upgrade <ftp url for junos image>
no-sync
This command upgrades the Junos OS and reboots both nodes in turn.
3. (SRX1500 only prior to Junos OS Release 15.1X49-D70) Start ICU by entering the
following command:
user@root> request system software in-service-upgrade <ftp url for junos image>
This command upgrades the Junos OS and reboots both nodes in turn.
WARNING: A reboot is required to load this software correctly. Use the request
system reboot command when software installation is complete.
This warning message can be ignored because the ICU process automatically
reboots both nodes.
See Also • Aborting an Upgrade in a Chassis Cluster During an ICU on page 390
• Upgrading ICU Using a Build Available Locally on a Primary Node in a Chassis Cluster
on page 389
NOTE: Issuing an abort command during or after the secondary node reboots
puts the cluster in an inconsistent state. The secondary node boots up running
the new Junos OS build, while the primary continues to run the older Junos
OS build.
To recover from the chassis cluster inconsistent state, perform the following actions
sequentially on the secondary node:
NOTE: You must execute the above steps sequentially to complete the
recovery process and avoid cluster instability.
Table 40 on page 391 lists the options and their descriptions for the request system software
in-service-upgrade command.
• no-sync: Disables the flow state from syncing up when the old secondary node has booted
with a new Junos OS image.
• no-tcp-syn-check: Creates a window wherein the TCP SYN check for incoming packets is
disabled. The default value for the window is 7200 seconds (2 hours).
• no-validate: Disables validation of the configuration at the time of installation. The system
behavior is similar to that of the software add command.
• unlink: Removes the package from the local media after installation.
NOTE:
• During ICU, if an abort command is executed, ICU will abort only after the
current operation finishes. This is required to avoid any inconsistency with
the devices.
• After an abort, ICU will try to roll back the build on the nodes if the upgrading
nodes step was completed.
See Also • Upgrading ICU Using a Build Available on an FTP Server on page 389
• Upgrading ICU Using a Build Available Locally on a Primary Node in a Chassis Cluster
on page 389
In-service software upgrade (ISSU) enables a software upgrade from one Junos OS
version to a later Junos OS version with little or no downtime.
The chassis cluster ISSU feature enables both devices in a cluster to be upgraded from
supported Junos OS versions with a minimal disruption in traffic and without a disruption
in service.
NOTE:
You must use the in-band cluster upgrade (ICU) commands on SRX1500
devices to upgrade the following Junos OS releases:
Starting with Junos OS Release 15.1X49-D80, SRX4100 and SRX4200 devices support
ISSU.
NOTE:
You can use the in-band cluster upgrade (ICU) commands on SRX4100 and
SRX4200 devices to upgrade the following Junos OS releases:
NOTE:
ISSU has the following limitations:
• If you upgrade from a Junos OS version that supports only IPv4 to a version
that supports both IPv4 and IPv6, the IPv4 traffic continues to work during
the upgrade process. If you upgrade from a Junos OS version that supports
both IPv4 and IPv6 to a version that supports both IPv4 and IPv6, both the
IPv4 and IPv6 traffic continue to work during the upgrade process. Junos
OS Release 10.2 and later releases support flow-based processing for IPv6
traffic.
• During an ISSU, you cannot bring any PICs online. You cannot perform
operations such as commit, restart, or halt.
• During an ISSU, operations like fabric monitoring, control link recovery, and
RGX preempt are suspended.
NOTE: For details about ISSU support status, see knowledge base article
KB17946.
The following process occurs during an ISSU for devices in a chassis cluster. The
sequences given below are applicable when RG-0 is node 0 (primary node). Note that
you must initiate an ISSU from RG-0 primary. If you initiate the upgrade on node 1 (RG-0
secondary), an error message is displayed.
1. At the beginning of a chassis cluster ISSU, the system automatically fails over all
RG-1+ redundancy groups that are not primary on the node from which the ISSU is
initiated. This action ensures that all the redundancy groups are active on only the
RG-0 primary node.
After the system fails over all RG-1+ redundancy groups, it sets the manual failover
bit and changes all RG-1+ primary node priorities to 255, regardless of whether the
redundancy group failed over to the RG-0 primary node.
2. The primary node (node 0) validates the device configuration to ensure that it can be
committed using the new software version. Checks are made for disk space availability
for the /var file system on both nodes, unsupported configurations, and unsupported
Physical Interface Cards (PICs).
If the disk space available on either of the Routing Engines is insufficient, the ISSU
process fails and returns an error message. However, unsupported PICs do not prevent
the ISSU. The software issues a warning to indicate that these PICs will restart during
the upgrade. Similarly, an unsupported protocol configuration does not prevent the
ISSU. However, the software issues a warning that packet loss might occur for the
protocol during the upgrade.
3. When the validation succeeds, the kernel state synchronization daemon (ksyncd)
synchronizes the kernel on the secondary node (node 1) with node 0.
4. Node 1 is upgraded with the new software image. Before being upgraded, node 1
gets the configuration file from node 0 and validates the configuration to ensure that
it can be committed using the new software version. After being upgraded, node 1 is
resynchronized with node 0.
5. The chassis cluster process (chassisd) on node 0 prepares other software
processes for the ISSU. When all the processes are ready, chassisd sends a message
to the PICs installed in the device.
6. The Packet Forwarding Engine on each Flexible PIC Concentrator (FPC) saves its
state and downloads the new software image from node 1. Next, each Packet
Forwarding Engine sends a message (unified-ISSU ready) to the chassisd.
7. After receiving the message (unified-ISSU ready) from a Packet Forwarding Engine,
the chassisd sends a reboot message to the FPC on which the Packet Forwarding
Engine resides. The FPC reboots with the new software image. After the FPC is
rebooted, the Packet Forwarding Engine restores the FPC state and a high-speed
internal link is established with node 1 running the new software. The chassisd is also
reestablished with node 0.
8. After all Packet Forwarding Engines have sent a ready message using the chassisd on
node 0, other software processes are prepared for a node switchover. The system is
ready for a switchover at this point.
9. Node switchover occurs, and node 1 (hitherto the secondary node) becomes the new
primary node.
10. The new secondary node (hitherto primary node 0) is now upgraded to the new
software image.
NOTE: When upgrading a version cluster that does not support encryption
to a version that supports encryption, upgrade the first node to the new
version. Without the encryption configured and enabled, two nodes with
different versions can still communicate with each other and service is not
broken. After upgrading the first node, upgrade the second node to the new
version. Users can decide whether to turn on the encryption feature after
completing the upgrade. Encryption must be deactivated before downgrading
to a version that does not support encryption. This ensures that
communication between an encryption-capable node and a downgraded
node does not break, because encryption is no longer used on either node.
You can use ISSU to upgrade from an ISSU-capable software release to a later release.
To perform an ISSU, your device must be running a Junos OS release that supports ISSU
for the specific platform. See Table 41 on page 397 for platform support.
NOTE: For additional details on ISSU support and limitations, see ISSU/ICU
Upgrade Limitations on SRX Series Devices.
• The ISSU process is aborted if the Junos OS version specified for installation is a version
earlier than the one currently running on the device.
• The ISSU process is aborted if the specified upgrade conflicts with the current
configuration, the components supported, and so forth.
• ISSU does not support the extension application packages developed using the Junos
OS SDK.
• ISSU does not support version downgrades on any supported SRX Series devices.
We strongly recommend that you perform ISSU under the following conditions:
In cases where ISSU is not supported or recommended but downtime during the system
upgrade must still be minimized, you can use the minimal downtime procedure; see
knowledge base article KB17947.
Related • Upgrading Both Devices in a Chassis Cluster Using an ISSU on page 398
Documentation
• Troubleshooting Chassis Cluster ISSU-Related Problems on page 407
The chassis cluster ISSU feature enables both devices in a cluster to be upgraded from
supported Junos OS versions with a traffic impact similar to that of redundancy group
failovers.
Before you begin the ISSU for upgrading both the devices, note the following guidelines:
• Back up the software using the request system snapshot command on each Routing
Engine to back up the system software to the device’s hard disk.
• If you are using Junos OS Release 11.4 or earlier, before starting the ISSU, set the failover
for all redundancy groups so that they are all active on only one node (primary). See
“Initiating a Chassis Cluster Manual Redundancy Group Failover” on page 242.
If you are using Junos OS Release 12.1 or later, Junos OS automatically fails over all RGs
to the RG0 primary.
• We recommend that you enable graceful restart for routing protocols before you start
an ISSU.
NOTE: On all supported SRX Series devices, the first recommended ISSU
from release is Junos OS Release 10.4R4.
Starting with Junos OS Release 15.1X49-D80, SRX4100 and SRX4200 devices support
ISSU.
1. Download the software package from the Juniper Networks Support website:
http://www.juniper.net/support/downloads/
2. Copy the package to the primary node of the cluster. We recommend that you copy the
package to the /var/tmp directory, which is a large file system on the hard disk. Note
that the node from which you initiate the ISSU must have the software image.
3. Verify the current software version running on both nodes by issuing the show version
command on the primary node.
4. Start the ISSU from the node that is primary for all the redundancy groups by entering
the following command:
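For example (the image filename is an illustrative placeholder; on some platforms the reboot option is required, so check the command reference for your release):

```
{primary:node0}
user@host> request system software in-service-upgrade /var/tmp/<junos-image>.tgz reboot
```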
NOTE: For SRX1500, SRX4100, and SRX4200 devices, you can optionally
remove the original image file by including unlink in the command.
Wait for both nodes to complete the upgrade (after which you are logged out of the
device).
5. Wait a few minutes, and then log in to the device again. Verify by using the show version
command that both devices in the cluster are running the new Junos OS release.
6. Verify that all policies, zones, redundancy groups, and other real-time objects (RTOs)
return to their correct states.
7. Make node 0 the primary node again by issuing the request chassis cluster failover
node node-number redundancy-group group-number command.
To set the redundancy group priority and enable the preempt option, see
“Example: Configuring Chassis Cluster Redundancy Groups” on page 125.
To manually set the failover for a redundancy group, see “Initiating a Chassis
Cluster Manual Redundancy Group Failover” on page 242.
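The steps above map to CLI commands along the following lines. This is an illustrative sketch: the package filename, node number, and redundancy-group number are placeholders, and available options (such as unlink on SRX1500, SRX4100, and SRX4200) vary by platform.

```
{primary:node0}
user@host> show version
user@host> request system software in-service-upgrade /var/tmp/package-name.tgz

(Both nodes upgrade and reboot; log in again after a few minutes.)

user@host> show version
user@host> request chassis cluster failover node 0 redundancy-group 1
```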
NOTE: During the upgrade, both devices might experience redundancy group
failovers, but traffic is not disrupted. Each device validates the package and
checks version compatibility before beginning the upgrade. If the system
finds that the new package version is not compatible with the currently
installed version, the device refuses the upgrade or prompts you to take
corrective action. Sometimes a single feature is not compatible, in which
case, the upgrade software prompts you to either abort the upgrade or turn
off the feature before beginning the upgrade.
This feature is available through the CLI. See request system software in-service-upgrade
(Maintenance).
If an ISSU fails to complete and only one device in the cluster is upgraded, you can roll
back to the previous configuration on the upgraded device alone by issuing one of the
following commands on the upgraded device:
Related • Upgrading Both Devices in a Chassis Cluster Using an ISSU on page 398
Documentation
• Troubleshooting Chassis Cluster ISSU-Related Problems on page 407
If you want redundancy groups to automatically return to node 0 as the primary after
an in-service software upgrade (ISSU), you must set the redundancy group priority such
that node 0 is primary and enable the preempt option. Note that this method works
for all redundancy groups except redundancy group 0; you must manually set the failover
for redundancy group 0. To set the redundancy group priority and enable the preempt
option, see “Example: Configuring Chassis Cluster Redundancy Groups” on page 125. To
manually set the failover for a redundancy group, see “Initiating a Chassis Cluster Manual
Redundancy Group Failover” on page 242.
Related • Upgrading Both Devices in a Chassis Cluster Using an ISSU on page 398
Documentation
• Troubleshooting Chassis Cluster ISSU-Related Problems on page 407
Supported Platforms SRX1500, SRX4100, SRX4200, SRX4600, SRX5400, SRX5600, SRX5800, vSRX
The following problems might occur during an ISSU upgrade. You can identify the errors
by using the details in the logs. You can also see the details of the error messages in the
System Log Explorer.
Solution Use the error messages to understand the issues related to chassisd.
When ISSU starts, a request is sent to chassisd to check whether there are any problems
related to the ISSU from a chassis perspective. If there is a problem, a log message is
created.
Solution Use the following error messages to understand the issues related to ksyncd:
ISSU checks whether there are any ksyncd errors on the secondary node (node 1); if there
are, it displays an error message and aborts the upgrade.
Installation-Related Errors
Problem Description: The install image file does not exist or the remote site is inaccessible.
Solution Use the following error messages to understand the installation-related problems:
ISSU downloads the install image as specified in the ISSU command as an argument.
The image file can be a local file or located at a remote site. If the file does not exist or
the remote site is inaccessible, an error is reported.
Problem Description: Installation failure occurs because of unsupported software or an unsupported
feature configuration.
Solution Use the following error messages to understand the compatibility-related problems:
Solution The validation checks fail if the image is not present or if the image file is corrupt. The
following error messages are displayed, and the ISSU is aborted, when the initial validation
checks fail because the image is not present:
/var/tmp/junos-srx1k3k-12.1I20120914_srx_12q1_major2.2-539764-domestic.tgz
Exiting in-service-upgrade window
Exiting in-service-upgrade window
Chassis ISSU Aborted
Chassis ISSU Aborted
Chassis ISSU Aborted
ISSU: IDLE
ISSU aborted; exiting ISSU window.
node1:
--------------------------------------------------------------------------
Initiating in-service-upgrade
ERROR: Cannot use /var/tmp/junos-srx1k3k-11.4X9-domestic.tgz_1:
gzip: stdin: invalid compressed data--format violated
tar: Child returned status 1
tar: Error exit delayed from previous errors
ERROR: It may have been corrupted during download.
ERROR: Please try again, making sure to use a binary transfer.
Exiting in-service-upgrade window
node1:
--------------------------------------------------------------------------
Exiting in-service-upgrade window
Chassis ISSU Aborted
Chassis ISSU Aborted
node1:
--------------------------------------------------------------------------
Chassis ISSU Aborted
ISSU: IDLE
ISSU aborted; exiting ISSU window.
{primary:node0}
The primary node validates the device configuration to ensure that it can be committed
using the new software version. If anything goes wrong, the ISSU aborts and error
messages are displayed.
Problem Description: You might encounter some problems in the course of an ISSU. This section
provides details on how to handle them.
Solution Any errors encountered during an ISSU result in the creation of log messages, and the
ISSU continues to function without impact to traffic. If reverting to previous versions is
required, the event is either logged or the ISSU is halted, so as not to create mismatched
versions on the two nodes of the chassis cluster. Table 42 on page 405 provides some of
the common error conditions and the workarounds for them. The sample messages used
in Table 42 on page 405 are from the SRX1500 device but are applicable to all supported
SRX Series devices.
Reboot failure on the secondary node: No service downtime occurs, because the primary
node continues to provide required services. Detailed console messages are displayed
requesting that you manually clear existing ISSU states and restore the chassis cluster:
2. Make sure that both nodes have the same image.
NOTE: Starting with Junos OS Release 17.4R1, the hold timer for the initial reboot of
the secondary node during the ISSU process is extended from 15 minutes
(900 seconds) to 45 minutes (2700 seconds) in chassis clusters on SRX1500,
SRX4100, SRX4200, and SRX4600 devices.
Secondary node failed to complete the cold synchronization: The primary node times
out if the secondary node fails to complete the cold synchronization. Detailed console
messages are displayed requesting that you manually clear existing ISSU states and
restore the chassis cluster. No service downtime occurs in this scenario:
2. Make sure that both nodes have the same image.
Failover of newly upgraded secondary failed: No service downtime occurs, because the
primary node continues to provide required services. Detailed console messages are
displayed requesting that you manually clear existing ISSU states and restore the
chassis cluster.
Upgrade failure on primary: No service downtime occurs, because the secondary node
fails over as primary and continues to provide required services.
Reboot failure on primary node: Because the devices have already exited the ISSU setup
before the reboot of the primary node, no ISSU-related error messages are displayed.
The following reboot error message is displayed if any other failure is detected:
Junos OS Release 17.4R1: Starting with Junos OS Release 17.4R1, the hold timer for the
initial reboot of the secondary node during the ISSU process is extended from 15
minutes (900 seconds) to 45 minutes (2700 seconds) in chassis clusters
on SRX1500, SRX4100, SRX4200, and SRX4600 devices.
Problem Description: Rather than wait for an ISSU failure, you can display the progress of the
ISSU as it occurs, noting any message indicating that the ISSU was unsuccessful. Providing
such messages to JTAC can help with resolving the issue.
Solution After starting an ISSU, issue the show chassis cluster information issu command. Output
similar to the following is displayed indicating the progress of the ISSU for all Services
Processing Units (SPUs).
Solution If the ISSU fails to complete and only one device in the cluster is upgraded, you can roll
back to the previous configuration on the upgraded device alone by issuing one of the
following commands on the upgraded device:
• request chassis cluster in-service-upgrade abort to abort the ISSU on both nodes.
• request system software rollback node node-id reboot to roll back the image.
Solution Open a new session on the primary device and issue the request chassis cluster
in-service-upgrade abort command.
This step aborts an in-progress ISSU. This command must be issued from a session
other than the one on which you issued the request system software in-service-upgrade
command that launched the ISSU. If the node is being upgraded, this command cancels
the upgrade. The command is also helpful in recovering the node in case of a failed ISSU.
When an ISSU encounters an unexpected situation that necessitates an abort, the system
message provides you with detailed information about when and why the upgrade
stopped along with recommendations for the next steps to take.
For example, the following message is issued when a node fails to become RG-0
secondary when it boots up:
If you want to operate the SRX Series device as a standalone device again or to remove
a node from a chassis cluster, you must disable the chassis cluster.
{primary:node1}
user@host> set chassis cluster disable reboot
Successfully disabled chassis cluster. Going to reboot now.
NOTE: After the chassis cluster is disabled using this CLI command, there is
no equivalent CLI option to reenable it.
You can also use the following CLI commands to disable a chassis cluster:
Configuration Statements
arp-throttle
Syntax next-hop {
arp-throttle seconds;
}
Description Define the length of time (in seconds) for Address Resolution Protocol (ARP) request
throttling. Setting a greater time interval causes the Routing Engine to process ARP
requests more slowly and thereby work more efficiently. For example, if a large number
of hosts generates numerous ARP requests, throttling reduces Routing Engine utilization.
Options seconds—Number of seconds the Routing Engine waits before processing the next
ARP request.
Range: 10 through 100 seconds
Default: 10 seconds
cak
Release Information Statement introduced in Junos OS Release 15.1X49-D60 for SRX340 and SRX345 devices.
Description Specifies the connectivity association key (CAK) for a pre-shared key.
A pre-shared key includes a connectivity association key name (CKN) and a CAK. A
pre-shared key is exchanged between two devices at each end of a point-to-point link
to enable MACsec using dynamic security keys. The MACsec Key Agreement (MKA)
protocol is enabled once the pre-shared keys are successfully exchanged. The pre-shared
key—the CKN and CAK—must match on both ends of a link.
The key name is 32 hexadecimal characters in length. If you enter a key name that
is less than 32 characters long, the remaining characters are set to 0.
Related • Understanding Media Access Control Security (MACsec) for SRX Series on page 367
Documentation
• Configuring Media Access Control Security (MACsec) on page 370
ckn
Release Information Statement introduced in Junos OS Release 15.1X49-D60 for SRX340 and SRX345 devices.
Description Specifies the connectivity association key name (CKN) for a pre-shared key.
A pre-shared key includes a CKN and a connectivity association key (CAK). A pre-shared
key is exchanged between two devices at each end of a point-to-point link to enable
MACsec using dynamic security keys. The MACsec Key Agreement (MKA) protocol is
enabled once the pre-shared keys are successfully exchanged. The pre-shared key—the
CKN and CAK—must match on both ends of a link.
The key name is 32 hexadecimal characters in length. If you enter a key name that
is less than 32 characters long, the remaining characters are set to 0.
Related • Understanding Media Access Control Security (MACsec) for SRX Series on page 367
Documentation
• Configuring Media Access Control Security (MACsec) on page 370
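Taken together, the cak and ckn statements configure the two halves of a pre-shared key. The following is a minimal sketch; the [edit security macsec] hierarchy, connectivity association name, and hexadecimal key values are illustrative placeholders, and both ends of the link must be configured identically.

```
[edit security macsec]
user@host# set connectivity-association ca1 security-mode static-cak
user@host# set connectivity-association ca1 pre-shared-key ckn 0123456789abcdef0123456789abcdef
user@host# set connectivity-association ca1 pre-shared-key cak 0123456789abcdef0123456789abcdef
```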
cluster (Chassis)
Syntax cluster {
configuration-synchronize {
no-secondary-bootup-auto;
}
control-link-recovery;
heartbeat-interval milliseconds;
heartbeat-threshold number;
network-management {
cluster-master;
}
redundancy-group group-number {
gratuitous-arp-count number;
hold-down-interval number;
interface-monitor interface-name {
weight number;
}
ip-monitoring {
family {
inet {
ipv4-address {
interface {
logical-interface-name;
secondary-ip-address ip-address;
}
weight number;
}
}
}
global-threshold number;
global-weight number;
retry-count number;
retry-interval seconds;
}
node (0 | 1) {
priority number;
}
preempt;
}
reth-count number;
traceoptions {
file {
filename;
files number;
match regular-expression;
(world-readable | no-world-readable);
size maximum-file-size;
}
flag flag;
level {
(alert | all | critical | debug | emergency | error | info | notice | warning);
}
no-remote-trace;
}
}
Options The remaining statements are explained separately. See CLI Explorer.
Syntax configuration-synchronize {
no-secondary-bootup-auto;
}
Description Disables the automatic chassis cluster synchronization between the primary and
secondary nodes. To reenable automatic chassis cluster synchronization, use the delete
chassis cluster configuration-synchronize no-secondary-bootup-auto command in
configuration mode.
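For example, automatic synchronization can be disabled and later reenabled as follows (the delete form is the command given above):

```
{primary:node0}[edit]
user@host# set chassis cluster configuration-synchronize no-secondary-bootup-auto
user@host# delete chassis cluster configuration-synchronize no-secondary-bootup-auto
```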
connectivity-association
Release Information Statement introduced in Junos OS Release 15.1X49-D60 for SRX340 and SRX345 devices.
Related • Understanding Media Access Control Security (MACsec) for SRX Series on page 367
Documentation
• Configuring Media Access Control Security (MACsec) on page 370
Release Information Statement introduced in Junos OS Release 15.1X49-D60 for SRX340 and SRX345 devices.
Description Applies a connectivity association to an interface, which enables Media Access Control
Security (MACsec) on that interface.
Related • Understanding Media Access Control Security (MACsec) for SRX Series on page 367
Documentation
• Configuring Media Access Control Security (MACsec) on page 370
control-link-recovery
Syntax control-link-recovery;
Description Enable control link recovery to be done automatically by the system. After the control
link recovers, the system checks whether it receives at least 30 consecutive heartbeats
on the control link. This is to ensure that the control link is not flapping and is perfectly
healthy. Once this criterion is met, the system issues an automatic reboot on the node
that was disabled when the control link failed. When the disabled node reboots, the node
rejoins the cluster. There is no need for any manual intervention.
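Enabling automatic control link recovery is a single configuration statement, sketched below:

```
{primary:node0}[edit]
user@host# set chassis cluster control-link-recovery
user@host# commit
```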
control-ports
Release Information Statement introduced in Junos OS Release 9.2. Support for dual control ports added in
Junos OS Release 10.0.
Description Enable the specific control port of the Services Processing Card (SPC) for use as a control
link for the chassis cluster. By default, all control ports are disabled. You must configure
at least one control port per chassis in the cluster. If you configure only port 0, the Juniper
Services Redundancy Protocol process (jsrpd) does not send control heartbeats on
control link 1, and the counters for that link show zeros.
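A sketch of configuring one control port on each chassis of a cluster follows; the FPC slot numbers are illustrative and depend on the platform and SPC placement:

```
{primary:node0}[edit]
user@host# set chassis cluster control-ports fpc 1 port 0
user@host# set chassis cluster control-ports fpc 13 port 0
```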
exclude-protocol
Release Information Statement introduced in Junos OS Release 15.1X49-D60 for SRX340 and SRX345 devices.
Description Specifies protocols whose packets are not secured using Media Access Control Security
(MACsec) when MACsec is enabled on a link using static connectivity association key
(CAK) security mode.
Default Disabled.
All packets are secured on a link when MACsec is enabled, with the exception of all types
of Spanning Tree Protocol (STP) packets.
Options protocol-name —Specifies the name of the protocol that should not be MACsec-secured.
Options include:
Related • Understanding Media Access Control Security (MACsec) for SRX Series on page 367
Documentation
• Configuring Media Access Control Security (MACsec) on page 370
Syntax ethernet {
device-count number;
lacp {
link-protection {
non-revertive;
}
system-priority number;
}
}
Options The remaining statements are explained separately. See CLI Explorer.
fabric-options
Syntax fabric-options {
member-interfaces member-interface-name;
}
NOTE: When you run the system autoinstallation command, the command
configures a unit 0 logical interface for all physical interfaces in the active
state. However, a few commands, such as fabric-options, do not allow the
physical interface to be configured with a logical interface. If system
autoinstallation and fabric-options are configured together, the following
message is displayed:
Related • Example: Configuring the Chassis Cluster Fabric Interfaces on page 109
Documentation
• member-interfaces on page 451
Syntax gigether-options {
802.3ad {
backup | primary | bundle;
lacp {
port-priority priority;
}
}
auto-negotiation {
remote-fault {
local-interface-offline | local-interface-online;
}
}
no-auto-negotiation;
ethernet-switch-profile {
mac-learn-enable;
tag-protocol-id [tpids];
ethernet-policer-profile {
input-priority-map {
ieee802.1p {
premium [values];
}
}
output-priority-map {
classifier {
premium {
forwarding-class class-name {
loss-priority (high | low);
}
}
}
}
policer cos-policer-name {
aggregate {
bandwidth-limit bps;
burst-size-limit bytes;
}
premium {
bandwidth-limit bps;
burst-size-limit bytes;
}
}
}
}
flow-control | no-flow-control;
ieee-802-3az-eee;
ignore-l3-incompletes;
loopback | no-loopback;
mpls {
pop-all-labels {
required-depth (1 | 2);
}
}
redundant-parent (Interfaces Gigabit Ethernet) interface-name;
source-address-filter {
mac-address;
}
}
Options The remaining statements are explained separately. See CLI Explorer.
Related • Example: Configuring Chassis Cluster Redundant Ethernet Interfaces for IPv4 and IPv6
Documentation Addresses on page 133
global-threshold
Description Specify the failover value for all IP addresses monitored by the redundancy group. When
IP addresses with a configured total weight in excess of the threshold have become
unreachable, the weight of IP monitoring is deducted from the redundancy group
threshold.
Options number —Value at which the IP monitoring weight is applied against the redundancy
group failover threshold.
Range: 0 through 255
Default: 0
global-weight
Description Specify the relative importance of all IP address monitored objects to the operation of
the redundancy group. Every monitored IP address is assigned a weight. If the monitored
address becomes unreachable, the weight of the object is deducted from the
global-threshold of IP monitoring objects in its redundancy group. When the
global-threshold reaches 0, the global-weight is deducted from the redundancy group.
Every redundancy group has a default threshold of 255. If the threshold reaches 0, a
failover is triggered. Failover is triggered even if the redundancy group is in manual failover
mode and preemption is not enabled.
Options number —Combined weight assigned to all monitored IP addresses. A higher weight value
indicates a greater importance.
Range: 0 through 255
Default: 255
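The interaction between the monitored-address weights, global-threshold, and global-weight can be modeled as below. This is an illustrative sketch of the arithmetic described above, not Junos internals; the function and variable names are invented.

```python
# Sketch of the IP-monitoring weight arithmetic described above.
# Names and structure are illustrative, not Junos internals.

def rg_threshold_after_failures(unreachable_weights, global_threshold=100,
                                global_weight=255, rg_threshold=255):
    """Return the redundancy-group threshold after deducting the weights
    of unreachable monitored IPs; failover triggers when it reaches 0."""
    remaining = global_threshold - sum(unreachable_weights)
    if remaining <= 0:
        # global-threshold exhausted: deduct global-weight from the RG threshold
        rg_threshold -= global_weight
    return max(rg_threshold, 0)

# Two monitored IPs (weights 60 and 50) become unreachable against a
# global-threshold of 100: their combined weight exceeds it, so the
# global-weight (255) is deducted and the redundancy group fails over.
print(rg_threshold_after_failures([60, 50]))  # 0
print(rg_threshold_after_failures([60]))      # 255 (no failover yet)
```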
gratuitous-arp-count
Description Specify the number of gratuitous Address Resolution Protocol (ARP) requests to send
on an active interface after failover.
Options number—Number of gratuitous ARP requests that a newly elected primary device in a
chassis cluster sends out to announce its presence to the other network devices.
Range: 1 through 16
Default: 4
heartbeat-interval
Release Information Statement introduced in Junos OS Release 9.0. Statement updated in Junos OS Release
10.4.
Description Set the interval between the periodic signals broadcast to the devices in a chassis cluster
to indicate that the active node is operational.
heartbeat-threshold
Release Information Statement introduced in Junos OS Release 9.0. Statement updated in Junos OS Release
10.4.
Description Set the number of consecutive heartbeat signals that must be missed before a failover
of the active node is triggered.
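Together, heartbeat-interval and heartbeat-threshold determine how long a node can remain silent before its peer declares it down. A minimal sketch of that arithmetic follows; the default values shown are assumptions for illustration, not taken from this guide.

```python
# Rough failover-detection time implied by the two heartbeat settings:
# the peer is declared down after `heartbeat_threshold` consecutive
# missed heartbeats. Default values here are illustrative assumptions.

def detection_time_seconds(heartbeat_interval_ms=1000, heartbeat_threshold=3):
    """Seconds of silence before the peer node is considered down."""
    return heartbeat_interval_ms * heartbeat_threshold / 1000.0

print(detection_time_seconds())          # 3.0
print(detection_time_seconds(2000, 8))   # 16.0
```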
hold-down-interval
Description Set the minimum interval to be allowed between back-to-back failovers for the specified
redundancy group (affects manual failovers, as well as automatic failovers associated
with monitoring failures).
For redundancy group 0, this setting prevents back-to-back failovers from occurring less
than 5 minutes (300 seconds) apart. Note that a redundancy group 0 failover implies a
Routing Engine failure.
For some configurations, such as ones with a large number of routes or logical interfaces,
the default or specified interval for redundancy group 0 might not be sufficient. In such
cases, the system automatically extends the dampening time in increments of 60 seconds
until the system is ready for failover.
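For example, to require at least 300 seconds between back-to-back failovers for redundancy group 1 (the group number and interval are illustrative):

```
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 hold-down-interval 300
```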
include-sci
Syntax include-sci;
Release Information Statement introduced in Junos OS Release 15.1X49-D60 for SRX340 and SRX345 devices.
Description Specify that the SCI tag be appended to each packet on a link that has enabled MACsec.
You must enable SCI tagging on a switch that is enabling MACsec on an Ethernet link
connecting to an SRX device.
You should only use this option when connecting a switch to an SRX device, or to a host
device that requires SCI tagging. SCI tags are eight octets long, so appending an SCI tag
to all traffic on the link adds a significant amount of unneeded overhead.
Default SCI tagging is enabled by default on an SRX device that has enabled MACsec using static
connectivity association key (CAK) security mode.
Related • Understanding Media Access Control Security (MACsec) for SRX Series on page 367
Documentation
• Configuring Media Access Control Security (MACsec) on page 370
Description Specify the redundant Ethernet interface, including its logical-unit-number, through which
the monitored IP address must be reachable. The specified redundant Ethernet interface
can be in any redundancy group. Likewise specify a secondary IP address to be used as
a ping source for monitoring the IP address through the secondary node’s redundant
Ethernet interface link.
interfaces (MACsec)
Syntax interface-name {
connectivity-association connectivity-association-name;
}
Release Information Statement introduced in Junos OS Release 15.1X49-D60 for SRX340 and SRX345 devices.
Description Specify the chassis cluster fabric interface on which MACsec is enabled. For SRX340 and
SRX345 devices, the fabric interface can be any 1 G Ethernet interface. Use this
configuration to apply a connectivity association to an interface, which enables Media
Access Control Security (MACsec) on that interface.
Related • Understanding Media Access Control Security (MACsec) for SRX Series on page 367
Documentation
• Configuring Media Access Control Security (MACsec) on page 370
interface-monitor
Description Specify a redundancy group interface to be monitored for failover and the relative weight
of the interface.
Syntax internal {
security-association {
manual {
encryption {
algorithm 3des-cbc;
iked-encryption enable;
key ascii-text ascii-text;
}
}
}
}
Description Enable secure login and prevent attackers from gaining privileged access through this
control port by configuring the internal IP security (IPsec) security association (SA).
When internal IPsec is configured, IPsec-based rlogin and remote command (rcmd) are
enforced, so an attacker cannot gain unauthorized information.
manual encryption—Specify a manual SA. Manual SAs require no negotiation; all values,
including the keys, are static and specified in the configuration.
key—Specify the encryption key. You must ensure that the manual encryption key is in
ASCII text and 24 characters long; otherwise, the configuration will result in a commit
failure.
ip-monitoring
Syntax ip-monitoring {
family {
inet {
ipv4-address {
interface {
logical-interface-name;
secondary-ip-address ip-address;
}
weight number;
}
}
}
global-threshold number;
global-weight number;
retry-count number;
retry-interval seconds;
}
Description Specify a global IP address monitoring threshold and weight, and the interval between
pings (retry-interval) and the number of consecutive ping failures (retry-count) permitted
before an IP address is considered unreachable for all IP addresses monitored by the
redundancy group. Also specify IP addresses, a monitoring weight, a redundant Ethernet
interface number, and a secondary IP monitoring ping source for each IP address, for the
redundancy group to monitor.
• weight
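A sketch pulling these pieces together for one monitored address follows; all values, the reth interface, and the IP addresses are illustrative:

```
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 ip-monitoring global-weight 255
user@host# set chassis cluster redundancy-group 1 ip-monitoring global-threshold 100
user@host# set chassis cluster redundancy-group 1 ip-monitoring retry-interval 1
user@host# set chassis cluster redundancy-group 1 ip-monitoring retry-count 5
user@host# set chassis cluster redundancy-group 1 ip-monitoring family inet 10.1.1.1 weight 100 interface reth0.0 secondary-ip-address 10.1.1.101
```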
key-server-priority (MACsec)
Release Information Statement introduced in Junos OS Release 15.1X49-D60 for SRX340 and SRX345 devices.
Description Specifies the key server priority used by the MACsec Key Agreement (MKA) protocol to
select the key server when MACsec is enabled using static connectivity association key
(CAK) security mode.
The switch with the lower priority-number is selected as the key server.
If the priority-number is identical on both sides of a point-to-point link, the MKA protocol
selects the device with the lower MAC address as the key server.
The priority-number can be any number between 0 and 255. The lower the number,
the higher the priority.
Related • Understanding Media Access Control Security (MACsec) for SRX Series on page 367
Documentation
• Configuring Media Access Control Security (MACsec) on page 370
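The key-server election rule described above (lower priority wins; on a tie, the lower MAC address wins) can be sketched as below. This is an illustrative model, not MKA protocol code; the function name and device tuples are invented.

```python
# Sketch of MKA key-server election as described above: the lower
# key-server-priority wins; on a tie, the lower MAC address wins.

def elect_key_server(a, b):
    """a, b: (priority, mac_address_string) tuples; returns the winner."""
    # Tuple comparison: priority first, then MAC string as the tiebreaker.
    return min(a, b, key=lambda dev: (dev[0], dev[1]))

left = (16, "00:11:22:33:44:55")
right = (16, "00:11:22:33:44:66")
print(elect_key_server(left, right))  # left wins the MAC tiebreak
```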
lacp (Interfaces)
Syntax lacp {
(active | passive);
periodic;
}
Description For redundant Ethernet interfaces in a chassis cluster only, configure Link Aggregation
Control Protocol (LACP).
Default: If you do not specify lacp as either active or passive, LACP remains off (the
default).
Syntax link-protection {
non-revertive;
}
Description Enable Link Aggregation Control Protocol (LACP) link protection at the global (chassis)
level.
Options non-revertive—Disable the ability to switch to a better priority link (if one is available)
after a link is established as active and a collection or distribution is enabled.
macsec
Syntax macsec {
cluster-control-port <idx> {
connectivity-association connectivity-association-name;
}
cluster-data-port interface-name {
connectivity-association connectivity-association-name;
}
connectivity-association connectivity-association-name {
exclude-protocol protocol-name;
include-sci;
mka {
key-server-priority priority-number;
must-secure;
transmit-interval milliseconds;
}
no-encryption;
offset (0|30|50);
pre-shared-key {
cak hexadecimal-number;
ckn hexadecimal-number;
}
replay-protect {
replay-window-size number-of-packets;
}
security-mode security-mode;
}
traceoptions {
file {
filename;
files number;
match regular-expression;
(world-readable | no-world-readable);
size maximum-file-size;
}
flag flag;
}
}
Release Information Statement introduced in Junos OS Release 15.1X49-D60 for SRX340 and SRX345 devices.
member-interfaces
Description Specify the member interface name. Member interfaces that connect to each other must
be of the same type.
mka
Syntax mka {
must-secure;
key-server-priority priority-number;
transmit-interval interval;
}
Release Information Statement introduced in Junos OS Release 15.1X49-D60 for SRX340 and SRX345 devices.
Description Specify parameters for the MACsec Key Agreement (MKA) protocol.
Related • Understanding Media Access Control Security (MACsec) for SRX Series on page 367
Documentation
• Configuring Media Access Control Security (MACsec) on page 370
must-secure
Syntax must-secure;
Release Information Statement introduced in Junos OS Release 15.1X49-D60 for SRX340 and SRX345 devices.
Description Specifies that all traffic traversing the MACsec-secured link must itself be MACsec-secured.
When must-secure is enabled, any traffic received on the interface that is not
MACsec-secured is dropped.
When must-secure is disabled, traffic from devices that support MACsec is
MACsec-secured, while traffic received from devices that do not support MACsec is
forwarded through the network.
Related • Understanding Media Access Control Security (MACsec) for SRX Series on page 367
Documentation
• Configuring Media Access Control Security (MACsec) on page 370
network-management
Syntax network-management {
cluster-master;
}
Description Define parameters for network management. To manage an SRX Series Services Gateway
cluster through a non-fxp0 interface, use this command to define the node as a virtual
chassis in NSM. This command establishes a single DMI connection from the primary
node to the NSM server. This connection is used to manage both nodes in the cluster.
Note that the non-fxp0 interface (regardless of which node it is present on) is always
controlled by the primary node in the cluster. The output of a <get-system-information>
RPC returns a <chassis-cluster> tag in all SRX Series devices. When NSM receives this
tag, it models SRX Series clusters as devices with autonomous control planes.
Options cluster-master—Enable in-band management on the primary cluster node through NSM.
no-encryption (MACsec)
Syntax no-encryption;
Release Information Statement introduced in Junos OS Release 15.1X49-D60 for SRX340 and SRX345 devices.
You can enable MACsec without enabling encryption. If a connectivity association with
a secure channel that has not enabled MACsec encryption is associated with an interface,
traffic is forwarded across the Ethernet link in clear text. You are, therefore, able to view
this unencrypted traffic when you are monitoring the link. The MACsec header is still
applied to the frame, however, and all MACsec data integrity checks are run on both ends
of the link to ensure the traffic has not been tampered with and does not represent a
security threat.
Traffic traversing a MACsec-enabled point-to-point Ethernet link travels at the same
speed regardless of whether encryption is enabled or disabled. You cannot increase the
speed of traffic traversing a MACsec-enabled Ethernet link by disabling encryption.
When MACsec is configured using static connectivity association key (CAK) security
mode, the encryption setting is configured outside of the secure channel by using the
no-encryption configuration statement.
Related • Understanding Media Access Control Security (MACsec) for SRX Series on page 367
Documentation
• Configuring Media Access Control Security (MACsec) on page 370
Syntax node (0 | 1) {
priority number;
}
Description Identify each cluster node in a redundancy group and set its relative priority for mastership.
Options
node—Cluster node number, set with the set chassis cluster node node-number statement.
priority number—Priority value of the node. The eligible node with the highest priority is
elected master.
ntp
Supported Platforms ACX Series, EX Series, M Series, MX Series, PTX Series, SRX Series, T Series
Syntax ntp {
authentication-key number type type value password;
boot-server address;
broadcast <address> <key key-number> <routing-instance-name routing-instance-name>
<version value> <ttl value>;
broadcast-client;
multicast-client <address>;
peer address <key key-number> <version value> <prefer>;
server address <key key-number> <version value> <prefer>;
source-address source-address <routing-instance routing-instance-name>;
trusted-key [ key-numbers ];
}
ntp threshold
Syntax ntp {
threshold {
value;
action (accept | reject);
}
}
Description Assign a threshold value for Network Time Protocol (NTP) adjustments that fall outside
the acceptable NTP update range, and specify whether to accept or reject NTP
synchronization when the time proposed by the NTP server exceeds the configured
threshold. If accept is the specified action, the system synchronizes the device time with
the NTP server but logs the time difference between the configured threshold and the
time proposed by the NTP server. If reject is the specified action, the system rejects
synchronization with the time proposed by the NTP server and logs the time difference;
you still have the option of manually synchronizing the device time with the time proposed
by the NTP server. By logging the time difference and rejecting synchronization when
the configured threshold is exceeded, this feature helps improve the security of the NTP
service.
Options value—Specify the maximum value in seconds allowed for NTP adjustment.
Range: 1 through 600.
Default: The default value is 400.
• accept—Enable log mode for abnormal NTP adjustment. When the proposed time
from the NTP server is outside of the configured threshold value, the device time
synchronizes with the NTP server, but the system logs the time difference between
the configured threshold and the time proposed by the NTP server.
• reject—Enable log and reject mode for abnormal NTP adjustment. When the
proposed time from the NTP server is outside of the configured threshold value,
the system rejects synchronization, but provides the option for manually
synchronizing the time and logs the time difference between the configured
threshold and the time proposed by the NTP server.
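As a sketch, the statement might be configured as follows at the [edit system] hierarchy level, mirroring the syntax block above (the threshold value of 300 seconds is illustrative):

```
[edit system]
ntp {
    threshold {
        300;
        action reject;
    }
}
```

With action reject, a proposed time more than 300 seconds away from the device time is logged but not applied.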
offset
Release Information Statement introduced in Junos OS Release 15.1X49-D60 for SRX340 and SRX345 devices.
Description Specifies the number of octets in an Ethernet frame that are sent in unencrypted plain-text
when encryption is enabled for MACsec.
Setting the offset to 30 allows a feature to see the IPv4 header and the TCP/UDP header
while encrypting the remaining traffic. Setting the offset to 50 allows a feature to see
the IPv6 header and the TCP/UDP header while encrypting the remaining traffic.
You would typically forward traffic with the first 30 or 50 octets unencrypted if a feature
needed to see the data in the octets to perform a function, but you otherwise prefer to
encrypt the remaining data in the frames traversing the link. Load balancing features, in
particular, typically need to see the IP and TCP/UDP headers in the first 30 or 50 octets
to properly load balance traffic.
You configure the offset in the [edit security macsec connectivity-association] hierarchy
when you are enabling MACsec using static connectivity association key (CAK) or dynamic
security mode.
Default 0
Options 0—Specifies that no octets are unencrypted. When you set the offset to 0, all traffic on
the interface where the connectivity association or secure channel is applied is
encrypted.
30—Specifies that the first 30 octets of each Ethernet frame are unencrypted.
NOTE: In IPv4 traffic, setting the offset to 30 allows a feature to see the
IPv4 header and the TCP/UDP header while encrypting the rest of the
traffic. An offset of 30, therefore, is typically used when a feature needs
this information to perform a task on IPv4 traffic.
50—Specifies that the first 50 octets of each Ethernet frame are unencrypted.
NOTE: In IPv6 traffic, setting the offset to 50 allows a feature to see the
IPv6 header and the TCP/UDP header while encrypting the rest of the
traffic. An offset of 50, therefore, is typically used when a feature needs
this information to perform a task on IPv6 traffic.
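A configuration sketch in set-command form (the connectivity association name ca1 is illustrative):

```
set security macsec connectivity-association ca1 offset 30
```

An offset of 30 leaves the IPv4 and TCP/UDP headers readable to features such as load balancing while the rest of the frame is encrypted.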
Related • Understanding Media Access Control Security (MACsec) for SRX Series on page 367
Documentation
• Configuring Media Access Control Security (MACsec) on page 370
Syntax preempt {
delay <seconds>;
limit <limit>;
period <seconds>;
}
Release Information Statement introduced in Junos OS Release 9.0. Support for the delay, limit, and period
options was added in Junos OS Release 17.4R1.
Description Allow preemption of primaryship based on the priority within a redundancy group.
By configuring the preemptive delay timer and failover rate limit, you can limit the flapping
of the redundancy group state between the secondary and the primary in a preemptive
failover.
Options delay—Time to wait before the node in secondary state transitions to primary state in a
preemptive failover.
Range: 1 to 21,600 seconds
Default: 1
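A sketch of the statement in set-command form (the delay, limit, and period values are illustrative):

```
set chassis cluster redundancy-group 1 preempt
set chassis cluster redundancy-group 1 preempt delay 300
set chassis cluster redundancy-group 1 preempt limit 5
set chassis cluster redundancy-group 1 preempt period 1800
```

Here a higher-priority node waits 300 seconds before preempting, and no more than 5 preemptive failovers are allowed within an 1800-second period.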
pre-shared-key
Syntax pre-shared-key {
cak hexadecimal-number;
ckn hexadecimal-number;
}
Release Information Statement introduced in Junos OS Release 15.1X49-D60 for SRX340 and SRX345 devices.
Description Specifies the pre-shared key used to enable MACsec using static connectivity association
key (CAK) security mode.
A pre-shared key includes a connectivity association key name (CKN) and a connectivity
association key (CAK). A pre-shared key is exchanged between two devices at each end
of a point-to-point link to enable MACsec using static CAK security mode. The MACsec
Key Agreement (MKA) protocol is enabled after the pre-shared keys are successfully
verified and exchanged. The pre-shared key—the CKN and CAK—must match on both
ends of a link.
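A configuration sketch (the connectivity association name and the hexadecimal CKN and CAK values are illustrative; the same values must be configured on the device at the other end of the link):

```
[edit security macsec connectivity-association ca1]
pre-shared-key {
    ckn 37c9c2c45ddd012aa5bc8ef284aa23ff6729ee2e4acb66e91fe34ba2cd9fe311;
    cak 228ef255aa23ff6729ee664acb66e91f;
}
```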
Related • Understanding Media Access Control Security (MACsec) for SRX Series on page 367
Documentation
• Configuring Media Access Control Security (MACsec) on page 370
Description Define the priority of a node (device) in a redundancy group. Initiating a failover with the
request chassis cluster failover node or request chassis cluster failover redundancy-group
command overrides the priority settings.
Options number—Priority value of the node. The eligible node with the highest priority is elected
master.
Range: 1 through 254
Description Define a redundancy group. Except for redundancy group 0, a redundancy group is a
logical interface consisting of two physical Ethernet interfaces, one on each chassis. One
interface is active, and the other is on standby. When the active interface fails, the standby
interface becomes active. The logical interface is called a redundant Ethernet interface
(reth).
Redundancy group 0 consists of the two Routing Engines in the chassis cluster and
controls which Routing Engine is primary. You must define redundancy group 0 in the
chassis cluster configuration.
redundancy-interface-process
Syntax redundancy-interface-process {
command binary-file-path;
disable;
failover (alternate-media | other-routing-engine);
}
• failover—Configure the device to reboot if the software process fails four times within
30 seconds, and specify the software to use during the reboot.
redundant-ether-options
Syntax redundant-ether-options {
(flow-control | no-flow-control);
lacp {
(active | passive);
periodic (fast | slow);
}
link-speed speed;
(loopback | no-loopback);
minimum-links number;
redundancy-group number;
source-address-filter mac-address;
(source-filtering | no-source-filtering);
}
Options The remaining statements are explained separately. See CLI Explorer.
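A sketch of a typical redundant Ethernet interface configuration in set-command form (interface and group numbers are illustrative):

```
set interfaces reth0 redundant-ether-options redundancy-group 1
set interfaces reth0 redundant-ether-options minimum-links 1
set interfaces reth0 redundant-ether-options lacp active
set interfaces reth0 redundant-ether-options lacp periodic fast
```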
Related • Example: Enabling Eight Queue Class of Service on Redundant Ethernet Interfaces on
Documentation page 154
• Example: Configuring Chassis Cluster Redundant Ethernet Interfaces for IPv4 and IPv6
Addresses on page 133
redundant-parent (Interfaces)
Description Assign local (child) interfaces to the redundant Ethernet (reth) interfaces. A redundant
Ethernet interface contains a pair of Fast Ethernet interfaces or a pair of Gigabit Ethernet
interfaces that are referred to as child interfaces of the redundant Ethernet interface (the
redundant parent).
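A sketch in set-command form (the child interface names are illustrative; on the secondary node, slot numbering is offset by a platform-dependent amount):

```
set interfaces ge-0/0/1 gigether-options redundant-parent reth0
set interfaces ge-7/0/1 gigether-options redundant-parent reth0
```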
Related • Example: Configuring Chassis Cluster Redundant Ethernet Interfaces for IPv4 and IPv6
Documentation Addresses on page 133
redundant-pseudo-interface-options
Syntax redundant-pseudo-interface-options {
redundancy-group redundancy-group-number;
}
An Internet Key Exchange (IKE) gateway operating in a chassis cluster needs an external
interface to communicate with a peer device. When an external interface (a reth interface
or a standalone interface) is used for communication, the interface might go down when
the physical interfaces are down. Instead, use loopback interfaces as an alternative to
physical interfaces.
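A sketch of binding a loopback interface to a redundancy group in set-command form (the group number is illustrative):

```
set interfaces lo0 redundant-pseudo-interface-options redundancy-group 1
```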
replay-protect
Syntax replay-protect {
replay-window-size number-of-packets;
}
Release Information Statement introduced in Junos OS Release 15.1X49-D60 for SRX340 and SRX345 devices.
Related • Understanding Media Access Control Security (MACsec) for SRX Series on page 367
Documentation
• Configuring Media Access Control Security (MACsec) on page 370
replay-window-size
Release Information Statement introduced in Junos OS Release 15.1X49-D60 for SRX340 and SRX345 devices.
When replay protection is enabled, the sequence of the ID numbers of received packets
is checked. If a packet arrives out of sequence and the difference between the packet
numbers exceeds the replay protection window size, the packet is dropped by the receiving
interface. For instance, if the replay protection window size is set to five and a packet
assigned the ID of 1006 arrives on the receiving link immediately after the packet assigned
the ID of 1000, the packet that is assigned the ID of 1006 is dropped because it falls
outside the parameters of the replay protection window.
Replay protection should not be enabled in cases where packets are expected to arrive
out of order.
Options number-of-packets—Specifies the size of the replay protection window, in packets.
When this variable is set to 0, all packets that arrive out of order are dropped. The
maximum window size that can be configured is 65535.
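A configuration sketch (the connectivity association name ca1 and the window size are illustrative):

```
set security macsec connectivity-association ca1 replay-protect replay-window-size 5
```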
Related • Understanding Media Access Control Security (MACsec) for SRX Series on page 367
Documentation
• Configuring Media Access Control Security (MACsec) on page 370
Description Specify the number of redundant Ethernet (reth) interfaces allowed in the chassis cluster.
Note that the number of reth interfaces configured determines the number of redundancy
groups that can be configured.
Description Specify the number of consecutive ping attempts that must fail before an IP address
monitored by the redundancy group is declared unreachable. (See retry-interval for a
related redundancy group IP address monitoring variable.)
Description Specify the ping packet send frequency (in seconds) for each IP address monitored by
the redundancy group. (See retry-count for a related IP address monitoring configuration
variable.)
Options interval—Pause time between each ping sent to each IP address monitored by the
redundancy group.
Range: 1 to 30 seconds
Default: 1 second
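A sketch of IP address monitoring for a redundancy group in set-command form (the monitored address, weight, and retry values are illustrative):

```
set chassis cluster redundancy-group 1 ip-monitoring family inet 10.1.1.10 weight 100
set chassis cluster redundancy-group 1 ip-monitoring retry-count 5
set chassis cluster redundancy-group 1 ip-monitoring retry-interval 3
```

With these values, 10.1.1.10 is declared unreachable after 5 consecutive failed pings sent at 3-second intervals.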
route-active-on
Description For chassis cluster configurations, identify the device (node) on which a route is active.
security-mode
Release Information Statement introduced in Junos OS Release 15.1X49-D60 for SRX340 and SRX345 devices.
Description Configure the MACsec security mode for the connectivity association.
• dynamic—Dynamic mode.
Related • Understanding Media Access Control Security (MACsec) for SRX Series on page 367
Documentation
• Configuring Media Access Control Security (MACsec) on page 370
Syntax traceoptions {
file {
filename;
files number;
match regular-expression;
(world-readable | no-world-readable);
size maximum-file-size;
}
flag flag;
level {
(alert | all | critical | debug | emergency | error | info | notice | warning);
}
no-remote-trace;
}
Options • file filename —Name of the file to receive the output of the tracing operation. Enclose
the name within quotation marks. All files are placed in the directory /var/log.
• files number—(Optional) Maximum number of trace files. When a trace file named
trace-file reaches its maximum size, it is renamed trace-file.0, then trace-file.1, and
so on, until the maximum number of trace files is reached. The oldest archived file is
overwritten.
• If you specify a maximum number of files, you also must specify a maximum file size
with the size option and a filename.
• match regular-expression —(Optional) Refine the output to include lines that contain
the regular expression.
• size maximum-file-size—(Optional) Maximum size of each trace file, in kilobytes (KB),
megabytes (MB), or gigabytes (GB). When a trace file named trace-file reaches this
size, it is renamed trace-file.0. When trace-file again reaches its maximum size,
trace-file.0 is renamed trace-file.1 and trace-file is renamed trace-file.0. This
renaming scheme continues until the maximum number of trace files is reached. Then
the oldest trace file is overwritten.
• If you specify a maximum file size, you also must specify a maximum number of trace
files with the files option and filename.
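As a sketch, assuming this traceoptions statement sits at the [edit chassis cluster] hierarchy level (file name, size, and count are illustrative):

```
set chassis cluster traceoptions file clustertrace
set chassis cluster traceoptions file size 1m
set chassis cluster traceoptions file files 3
set chassis cluster traceoptions flag all
```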
transmit-interval (MACsec)
Release Information Statement introduced in Junos OS Release 15.1X49-D60 for SRX340 and SRX345 devices.
Description Specifies the transmit interval for MACsec Key Agreement (MKA) protocol data units
(PDUs).
The MKA transmit interval setting sets the frequency for how often the MKA PDU is sent
to the directly connected device to maintain MACsec on a point-to-point Ethernet link.
A lower interval increases bandwidth overhead on the link; a higher interval optimizes the
MKA protocol data unit exchange process.
The transmit interval settings must be identical on both ends of the link when MACsec
using static connectivity association key (CAK) security mode is enabled.
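A configuration sketch (the connectivity association name and the interval value are illustrative; the same interval must be set on both ends of the link):

```
set security macsec connectivity-association ca1 mka transmit-interval 3000
```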
Related • Understanding Media Access Control Security (MACsec) for SRX Series on page 367
Documentation
• Configuring Media Access Control Security (MACsec) on page 370
weight
Description Specify the relative importance of the object to the operation of the redundancy group.
This statement is primarily used with interface monitoring and IP address monitoring
objects. The failure of an object—such as an interface—with a greater weight brings the
group closer to failover. Every monitored object is assigned a weight.
• interface-monitor objects—If the object fails, its weight is deducted from the threshold
of its redundancy group;
Every redundancy group has a default threshold of 255. If the threshold reaches 0, a
failover is triggered. Failover is triggered even if the redundancy group is in manual failover
mode and preemption is not enabled.
Options number —Weight assigned to the interface or monitored IP address. A higher weight value
indicates a greater importance.
Range: 0 through 255
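A sketch of interface monitoring weights in set-command form (interfaces and weights are illustrative):

```
set chassis cluster redundancy-group 1 interface-monitor ge-0/0/1 weight 255
set chassis cluster redundancy-group 1 interface-monitor ge-0/0/2 weight 128
```

With these weights, failure of ge-0/0/1 alone reduces the threshold from 255 to 0 and triggers failover, while failure of ge-0/0/2 alone does not.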
Operational Commands
List of Sample Output clear chassis cluster control-plane statistics on page 483
Output Fields When you enter this command, you are provided feedback on the status of your request.
Sample Output
List of Sample Output clear chassis cluster data-plane statistics on page 484
Output Fields When you enter this command, you are provided feedback on the status of your request.
Sample Output
Output Fields When you enter this command, you are provided feedback on the status of your request.
Sample Output
The following example displays the redundancy groups before and after the
failover-counts are cleared.
Cluster ID: 3
Node name Priority Status Preempt Manual failover
Cluster ID: 3
Node name Priority Status Preempt Manual failover
Output Fields When you enter this command, you are provided feedback on the status of your request.
Sample Output
user@host> clear chassis cluster ip-monitoring failure-count
node0:
--------------------------------------------------------------------------
Cleared failure count for all IPs
node1:
--------------------------------------------------------------------------
Cleared failure count for all IPs
Output Fields When you enter this command, you are provided feedback on the status of your request.
Sample Output
user@host> clear chassis cluster ip-monitoring failure-count ip-address 1.1.1.1
node0:
--------------------------------------------------------------------------
Cleared failure count for IP: 1.1.1.1
node1:
--------------------------------------------------------------------------
Cleared failure count for IP: 1.1.1.1
Description Clear the preemptive failover counter for all redundancy groups.
When a preemptive rate limit is configured, the counter starts with the first preemptive
failover and is decremented with each subsequent failover; this continues until the count
reaches zero or the timer expires. Use this command to clear the preemptive failover
counter and reset it so that counting starts again.
Output Fields When you enter this command, you are provided feedback on the status of your request.
Sample Output
Description Clear the control plane and data plane statistics of a chassis cluster.
Output Fields When you enter this command, you are provided feedback on the status of your request.
Sample Output
request chassis cb
Description Control the operation (take the CB offline or bring online) of the Control Board (CB).
Output Fields When you enter this command, you are provided feedback on the status of your request.
Sample Output
Description Synchronize the configuration from the primary node to the secondary node when the
secondary node joins the primary node in a cluster.
Output Fields When you enter this command, you are provided feedback on the status of your request.
Sample Output
Description For chassis cluster configurations, initiate manual failover in a redundancy group from
one node to the other, which becomes the primary node, and automatically reset the
priority of the group to 255. The failover stays in effect until the new primary node becomes
unavailable, the threshold of the redundancy group reaches 0, or you use the request
chassis cluster failover reset command.
After a manual failover, you must use the request chassis cluster failover reset command
before initiating another failover.
Options • node node-number—Number of the chassis cluster node to which the redundancy
group fails over.
• Range: 0 through 1
List of Sample Output request chassis cluster failover node on page 493
Output Fields When you enter this command, you are provided feedback on the status of your request.
Sample Output
Description For chassis cluster configurations, initiate manual failover in a redundancy group from
one node to the other, which becomes the primary node, and automatically reset the
priority of the group to 255. The failover stays in effect until the new primary node becomes
unavailable, the threshold of the redundancy group reaches 0, or you use the request
chassis cluster failover reset command.
After a manual failover, you must use the request chassis cluster failover reset command
before initiating another failover.
Options • node node-number—Number of the chassis cluster node to which the redundancy
group fails over.
• Range: 0 or 1
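A usage sketch (the redundancy-group and node numbers are illustrative):

```
user@host> request chassis cluster failover redundancy-group 1 node 1
user@host> request chassis cluster failover reset redundancy-group 1
```

The first command makes node 1 primary for redundancy group 1; the second clears the manual failover so that another failover can be initiated.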
Related • Initiating a Chassis Cluster Manual Redundancy Group Failover on page 242
Documentation
• Verifying Chassis Cluster Failover Status on page 244
List of Sample Output request chassis cluster failover redundancy-group on page 494
Output Fields When you enter this command, you are provided feedback on the status of your request.
Sample Output
Description In chassis cluster configurations, undo the previous manual failover and return the
redundancy group to its original settings.
List of Sample Output request chassis cluster failover reset on page 496
Output Fields When you enter this command, you are provided feedback on the status of your request.
Sample Output
Output Fields When you enter this command, you are provided feedback on the status of your request.
Sample Output
Description Abort an upgrade in a chassis cluster during an in-service software upgrade (ISSU). Use
this command on any node in a chassis cluster, followed by a reboot, to end the ISSU on
that device.
List of Sample Output request chassis cluster in-service-upgrade abort on page 498
Output Fields When you enter this command, you are provided feedback on the status of your request.
Sample Output
Output Fields When you enter this command, you are provided feedback on the status of your request.
Sample Output
Release Information Command introduced before Junos OS Release 9.0. The options master and backup were
introduced in Junos OS Release 15.1X49-D50.
Description CLI command to install AI-Script install packages on SRX Series devices in a chassis
cluster.
Additional Information This command eliminates the need to install the AI-Script package separately on the
primary node and the secondary node.
List of Sample Output request system scripts add package-name on page 500
request system scripts add package-name on page 501
Sample Output
Options • at time—(Optional) Specify the time at which to reboot the device. You can specify
time in one of the following ways:
• +minutes—Reboot the device in the number of minutes from now that you specify.
• yymmddhhmm—Reboot the device at the absolute time on the date you specify.
Enter the year, month, day, hour (in 24-hour format), and minute.
• hh:mm—Reboot the device at the absolute time you specify, on the current day.
Enter the time in 24-hour format, using a colon (:) to separate hours from minutes.
• in minutes—(Optional) Specify the number of minutes from now to reboot the device.
This option is a synonym for the at +minutes option.
• media type—(Optional) Specify the boot device to boot the device from:
• message “text”—(Optional) Provide a message to display to all system users before
the device reboots.
Supported Platforms SRX1500, SRX300, SRX320, SRX340, SRX345, SRX4100, SRX4200, SRX5400, SRX550M,
SRX5600, SRX5800
Release Information For SRX5400, SRX5600, and SRX5800 devices, command introduced in Junos OS
Release 9.6 and support for reboot as a required parameter added in Junos OS Release
11.2R2. For SRX5400 devices, the command is introduced in Junos OS Release
12.1X46-D20. For SRX300, SRX320, SRX340, and SRX345 devices, command introduced
in Junos OS Release 15.1X49-D40. For SRX1500 devices, command introduced in Junos
OS Release 15.1X49-D50.
Description The in-service software upgrade (ISSU) feature allows a chassis cluster pair to be
upgraded from supported Junos OS versions with a traffic impact similar to that of
redundancy group failovers. Before upgrading, you must perform failovers so that all
redundancy groups are active on only one device. We recommend that graceful restart
for routing protocols be enabled before you initiate an ISSU.
For SRX300, SRX320, SRX340, SRX345, and SRX550M devices, you must use the no-sync
parameter to perform an in-band cluster upgrade (ICU). This allows a chassis cluster
pair to be upgraded with a minimal service disruption of approximately 30 seconds.
For SRX1500, SRX4100, and SRX4200 devices, the no-sync parameter is not supported
when using ISSU to upgrade. The no-sync option specifies that the state is not
synchronized from the primary node to the secondary node.
For SRX1500 devices, the no-tcp-syn-check parameter is not supported when using ISSU
to upgrade.
Options • image_name—Specify the location and name of the software upgrade package to be
installed.
• no-copy—(Optional) Install the software upgrade package but do not save the copies
of package files.
• no-sync—(Optional) Stop the flow state from synchronizing when the old secondary
node has booted with a new Junos OS image.
This parameter applies to SRX300, SRX320, SRX340, SRX345, and SRX550M devices
only. It is required for an ICU.
• no-tcp-syn-check—(Optional) Create a window wherein the TCP SYN check for the
incoming packets is disabled. The default value for the window is 7200 seconds (2
hours).
This parameter applies to SRX300, SRX320, SRX340, SRX345, and SRX550M devices
only.
• reboot—(Optional) Reboot each device in the chassis cluster pair after installation is
completed.
This parameter applies to SRX5400, SRX5600, and SRX5800 devices only. It is required
for an ISSU. (The devices in a cluster are automatically rebooted following an ICU.)
List of Sample Output request system software in-service-upgrade (SRX300, SRX320, SRX340, SRX345,
and SRX550M Devices) on page 506
request system software in-service-upgrade (SRX1400) on page 507
Output Fields When you enter this command, you are provided feedback on the status of your request.
Sample Output
request system software in-service-upgrade (SRX300, SRX320, SRX340, SRX345, and SRX550M Devices)
user@host> request system software in-service-upgrade
/var/tmp/junos-srxsme-15.1I20160520_0757-domestic.tgz no-sync
Shutdown NOW!
{primary:node1}
user@host>
Sample Output
node0:
--------------------------------------------------------------------------
Inititating in-service-upgrade
Checking compatibility with configuration
mgd: commit complete
Validation succeeded
ISSU: Preparing Backup RE
Finished upgrading secondary node node0
Rebooting Secondary Node
node0:
--------------------------------------------------------------------------
Shutdown NOW!
[pid 3257]
ISSU: Backup RE Prepare Done
Waiting for node0 to reboot.
node0 booted up.
Waiting for node0 to become secondary
node0 became secondary.
Waiting for node0 to be ready for failover
ISSU: Preparing Daemons
Secondary node0 ready for failover.
Failing over all redundancy-groups to node0
ISSU: Preparing for Switchover
Initiated failover for all the redundancy groups to node1
Waiting for node0 take over all redundancy groups
node0:
--------------------------------------------------------------------------
Exiting in-service-upgrade window
Exiting in-service-upgrade window
Chassis ISSU Aborted
node0:
--------------------------------------------------------------------------
Chassis ISSU Ended
ISSU completed successfully, rebooting...
Shutdown NOW!
[pid 4294]
Description Revert to the software that was loaded at the last successful request system software
add command. The FreeBSD 11 Junos OS image provides an option to save a recovery
image in an Operation, Administration, and Maintenance (OAM) partition, but that option
will save only the Junos OS image, not the Linux image. If a user saves the Junos OS image
and recovers it later, it might not be compatible with the Linux software loaded on the
system.
Release Information Support for extended cluster identifiers (more than 15 identifiers) added in Junos OS
Release 12.1X45-D10.
Description Sets the chassis cluster identifier (ID) and node ID on each device, and reboots the devices
to enable clustering. The system uses the chassis cluster ID and chassis cluster node ID
to apply the correct configuration for each node (for example, when you use the
apply-groups command to configure the chassis cluster management interface). The
chassis cluster ID and node ID statements are written to the EPROM, and the statements
take effect when the system is rebooted.
NOTE: If you have a cluster set up and running with an earlier release of Junos
OS, you can upgrade to Junos OS Release 12.1X45-D10 or later and re-create
a cluster with cluster IDs greater than 16. If for any reason you decide to revert
to the previous version of Junos OS that did not support extended cluster
IDs, the system comes up with standalone devices after you reboot. If the
cluster ID set is less than 16 and you roll back to a previous release, the system
comes back with the previous setup.
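A usage sketch for a two-node cluster (the cluster ID and hostnames are illustrative; each command reboots the device on which it is entered):

```
user@host0> set chassis cluster cluster-id 1 node 0 reboot
user@host1> set chassis cluster cluster-id 1 node 1 reboot
```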
Related • Example: Setting the Chassis Cluster Node ID and Cluster ID on page 92
Documentation
• Understanding the Interconnect Logical System and Logical Tunnel Interfaces
Output Fields When you enter this command, you are provided feedback on the status of your request.
Release Information Command introduced in Junos OS Release 9.3. Output changed to support dual control
ports in Junos OS Release 10.0.
List of Sample Output show chassis cluster control-plane statistics on page 512
show chassis cluster control-plane statistics (SRX5000 Line Devices) on page 512
Output Fields Table 43 on page 511 lists the output fields for the show chassis cluster control-plane
statistics command. Output fields are listed in the approximate order in which they appear.
Control link statistics Statistics of the control link used by chassis cluster traffic. Statistics for Control link 1 are
displayed when you use dual control links (SRX5600 and SRX5800 devices only).
Fabric link statistics Statistics of the fabric link used by chassis cluster traffic. Statistics for Child Link 1 are
displayed when you use dual fabric links.
Switch fabric link statistics Statistics of the switch fabric link used by chassis cluster traffic.
Sample Output
Sample Output
Description Display the status of the data plane interface (also known as a fabric interface) in a
chassis cluster configuration.
List of Sample Output show chassis cluster data-plane interfaces on page 513
Output Fields Table 44 on page 513 lists the output fields for the show chassis cluster data-plane
interfaces command. Output fields are listed in the approximate order in which they
appear.
Sample Output
List of Sample Output show chassis cluster data-plane statistics on page 515
Output Fields Table 45 on page 515 lists the output fields for the show chassis cluster data-plane statistics
command. Output fields are listed in the approximate order in which they appear.
Sample Output
Description Display the status of the switch fabric interfaces (swfab interfaces) in a chassis cluster.
List of Sample Output show chassis cluster ethernet-switching interfaces on page 517
Output Fields Table 46 on page 517 lists the output fields for the show chassis cluster ethernet-switching
interfaces command. Output fields are listed in the approximate order in which they
appear.
Sample Output
List of Sample Output show chassis cluster ethernet-switching status on page 519
Output Fields Table 47 on page 518 lists the output fields for the show chassis cluster ethernet-switching
status command. Output fields are listed in the approximate order in which they appear.
Redundancy-Group You can create up to 128 redundancy groups in the chassis cluster.
Sample Output
Cluster ID: 1
Node Priority Status Preempt Manual Monitor-failures
Description Display chassis cluster messages. The messages indicate each node's health condition
and details of the monitored failure.
Output Fields Table 48 on page 520 lists the output fields for the show chassis cluster information
command. Output fields are listed in the approximate order in which they appear.
Redundancy Group Information • Redundancy Group—ID number (0 - 255) of a redundancy group in the cluster.
• Current State—State of the redundancy group: primary, secondary, hold, or
secondary-hold.
• Weight—Relative importance of the redundancy group.
• Time—Time when the redundancy group changed the state.
• From—State of the redundancy group before the change.
• To—State of the redundancy group after the change.
• Reason—Reason for the change of state of the redundancy group.
Chassis cluster LED information • Current LED color—Current color state of the LED.
• Last LED change reason—Reason for change of state of the LED.
Sample Output
node0:
--------------------------------------------------------------------------
Redundancy Group Information:
node1:
--------------------------------------------------------------------------
Redundancy Group Information:
Sample Output
node0:
--------------------------------------------------------------------------
Redundancy Group Information:
Failure Information:
node1:
--------------------------------------------------------------------------
Redundancy Group Information:
Failure Information:
Sample Output
node0:
--------------------------------------------------------------------------
Redundancy Group Information:
node1:
--------------------------------------------------------------------------
Redundancy Group Information:
Description Display chassis cluster messages. The messages indicate the redundancy mode,
automatic synchronization status, and if automatic synchronization is enabled on the
device.
List of Sample Output show chassis cluster information configuration-synchronization on page 526
Output Fields Table 49 on page 525 lists the output fields for the show chassis cluster information
configuration-synchronization command. Output fields are listed in the approximate order
in which they appear.
Events The timestamp of the event, the automatic configuration synchronization status, and
the number of synchronization attempts.
Sample Output
node0:
--------------------------------------------------------------------------
Configuration Synchronization:
Status:
Activation status: Enabled
Last sync operation: Auto-Sync
Last sync result: Not needed
Last sync mgd messages:
Events:
Feb 25 22:21:49.174 : Auto-Sync: Not needed
node1:
--------------------------------------------------------------------------
Configuration Synchronization:
Status:
Activation status: Enabled
Last sync operation: Auto-Sync
Last sync result: Succeeded
Last sync mgd messages:
mgd: rcp: /config/juniper.conf: No such file or directory
Network security daemon: warning: You have enabled/disabled inet6 flow.
Network security daemon: You must reboot the system for your change to
take effect.
Network security daemon: If you have deployed a cluster, be sure to reboot
all nodes.
mgd: commit complete
Events:
Feb 25 23:02:33.467 : Auto-Sync: In progress. Attempt: 1
Feb 25 23:03:13.200 : Auto-Sync: Succeeded. Attempt: 1
Description Display chassis cluster messages. The messages indicate the progress of the in-service
software upgrade (ISSU).
List of Sample Output show chassis cluster information issu on page 527
Output Fields Table 50 on page 527 lists the output fields for the show chassis cluster information issu
command. Output fields are listed in the approximate order in which they appear.
Sample Output
node0:
--------------------------------------------------------------------------
Cold Synchronization Progress:
CS Prereq 10 of 10 SPUs completed
node1:
--------------------------------------------------------------------------
Cold Synchronization Progress:
CS Prereq 10 of 10 SPUs completed
1. if_state sync 10 SPUs completed
2. fabric link 10 SPUs completed
3. policy data sync 10 SPUs completed
4. cp ready 10 SPUs completed
5. VPN data sync 10 SPUs completed
CS RTO sync 10 of 10 SPUs completed
CS Postreq 10 of 10 SPUs completed
Release Information Command modified in Junos OS Release 9.0. Output changed to support dual control
ports in Junos OS Release 10.0. Output changed to support control interfaces in Junos
OS Release 11.2. Output changed to support redundant pseudo interfaces in Junos OS
Release 12.1X44-D10. For SRX5000 line devices, output changed to support the internal
security association (SA) option in Junos OS Release 12.1X45-D10. Output changed to
support MACsec status on control and fabric interfaces in Junos OS Release 15.1X49-D60.
Description Display the status of the control interface in a chassis cluster configuration.
Output Fields Table 51 on page 529 lists the output fields for the show chassis cluster interfaces command.
Output fields are listed in the approximate order in which they appear.
Control link status State of the chassis cluster control interface: up or down.
Sample Output
Control interfaces:
Index Interface Monitored-Status Security
0 em0 Up Disabled
1 em1 Down Disabled
Fabric interfaces:
Name Child-interface Status Security
fab0 ge-0/1/0 Up Disabled
fab0
fab1 ge-6/1/0 Up Disabled
fab1
Redundant-ethernet Information:
Name Status Redundancy-group
reth0 Up 1
reth1 Up 2
reth2 Down Not configured
reth3 Down Not configured
reth4 Down Not configured
reth5 Down Not configured
reth6 Down Not configured
Redundant-pseudo-interface Information:
Name Status Redundancy-group
lo0 Up 1
Interface Monitoring:
Interface Weight Status Redundancy-group
ge-0/1/9 100 Up 0
ge-0/1/9 100 Up
Sample Output
Control interfaces:
Index Interface Monitored-Status Internal-SA Security
0 em0 Up Disabled Disabled
1 em1 Down Disabled Disabled
Fabric link status: Up
Fabric interfaces:
Name Child-interface Status Security
(Physical/Monitored)
fab0 xe-1/0/3 Up / Down Disabled
fab0
fab1 xe-7/0/3 Up / Down Disabled
fab1
Redundant-ethernet Information:
Name Status Redundancy-group
reth0 Up 1
reth1 Up 2
reth2 Down Not configured
reth3 Down Not configured
reth4 Down Not configured
reth5 Down Not configured
reth6 Down Not configured
reth7 Down Not configured
reth8 Down Not configured
reth9 Down Not configured
reth10 Down Not configured
reth11 Down Not configured
Redundant-pseudo-interface Information:
Name Status Redundancy-group
lo0 Up 1
Interface Monitoring:
Interface Weight Status Redundancy-group
ge-0/1/9 100 Up 0
ge-0/1/9 100 Up
Sample Output
Control interfaces:
Index Interface Monitored-Status Internal-SA Security
0 fxp1 Up Disabled Disabled
Fabric interfaces:
Name Child-interface Status Security
(Physical/Monitored)
fab0 ge-0/0/2 Down / Down Disabled
fab0
fab1 ge-9/0/2 Up / Up Disabled
fab1
Redundant-pseudo-interface Information:
Name Status Redundancy-group
lo0 Up 0
Sample Output
Control interfaces:
Index Interface Monitored-Status Internal-SA Security
0 em0 Up Disabled Disabled
1 em1 Down Disabled Disabled
Fabric interfaces:
Name Child-interface Status Security
(Physical/Monitored)
fab0 <<< fab child missing once PIC is offlined Disabled
fab0
Redundant-ethernet Information:
Name Status Redundancy-group
reth0 Up Not configured
reth1 Down 1
Redundant-pseudo-interface Information:
Name Status Redundancy-group
lo0 Up 0
Release Information Command introduced in Junos OS Release 9.6. Support for global threshold, current
threshold, and weight of each monitored IP address added in Junos OS Release
12.1X47-D10.
Description Display the status of all monitored IP addresses for a redundancy group.
Options • none—Display the status of monitored IP addresses for all redundancy groups on the
node.
List of Sample Output show chassis cluster ip-monitoring status on page 535
show chassis cluster ip-monitoring status redundancy-group on page 536
Output Fields Table 52 on page 534 lists the output fields for the show chassis cluster ip-monitoring
status command.
Global threshold Failover value for all IP addresses monitored by the redundancy group.
Current threshold Value equal to the global threshold minus the total weight of the unreachable IP addresses.
Status Status of each monitored IP address: reachable, unreachable, or unknown. The status is
“unknown” if the Packet Forwarding Engines (PFEs) are not yet up and running.
Table 52: show chassis cluster ip-monitoring status Output Fields (continued)
Field Name Field Description
Reason Explanation for the reported status. See Table 53 on page 535.
Weight Combined weight (0 - 255) assigned to all monitored IP addresses. A higher weight value
indicates greater importance.
Expanded reason output fields for unreachable IP addresses added in Junos OS Release
10.1. You might see any of the following reasons displayed.
Table 53: show chassis cluster ip-monitoring status redundancy group Reason Fields
Reason Reason Description
No route to host The device could not resolve ARP, which is needed to send the ICMP packet to the
host with the monitored IP address.
No auxiliary IP found The redundant Ethernet interface does not have an auxiliary IP address configured.
redundancy-group state unknown Unable to obtain the state (primary, secondary, secondary-hold, disable) of a
redundancy-group.
No reth child MAC address Could not extract the MAC address of the redundant Ethernet child interface.
Secondary link not monitored The secondary link might be down (the secondary child interface of a redundant Ethernet
interface is either down or non-functional).
Unknown The IP address has just been configured, and the router does not yet know the status
of this IP address.
Sample Output
Redundancy group: 1
Global threshold: 200
Current threshold: -120
node1:
--------------------------------------------------------------------------
Redundancy group: 1
Global threshold: 200
Current threshold: -120
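The relationship between the global threshold, the monitored-IP weights, and the current threshold shown above can be sketched as follows. The IP addresses and weights below are hypothetical; only the arithmetic (current threshold = global threshold minus the total weight of unreachable addresses) comes from the field descriptions.

```python
# Sketch of the "Current threshold" arithmetic: the total weight of all
# unreachable monitored IP addresses is subtracted from the global threshold.
# The addresses and weights here are hypothetical examples.

def current_threshold(global_threshold, monitored):
    """monitored: dict mapping IP address -> (weight, reachable?)."""
    unreachable_weight = sum(
        weight for weight, reachable in monitored.values() if not reachable
    )
    return global_threshold - unreachable_weight

monitored = {
    "10.1.1.1": (255, False),  # unreachable, weight 255
    "10.1.1.2": (65, False),   # unreachable, weight 65
    "10.1.1.3": (100, True),   # reachable, does not count
}

print(current_threshold(200, monitored))  # -> -120, as in the sample output
```

A negative current threshold, as in the sample output, indicates that the combined weight of unreachable addresses has exceeded the global failover threshold.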
Sample Output
Redundancy group: 1
node1:
--------------------------------------------------------------------------
Redundancy group: 1
Release Information Command modified in Junos OS Release 9.0. Output changed to support dual control
ports in Junos OS Release 10.0.
Output Fields Table 54 on page 537 lists the output fields for the show chassis cluster statistics command.
Output fields are listed in the approximate order in which they appear.
Control link statistics Statistics of the control link used by chassis cluster traffic. Statistics for Control link 1 are
displayed when you use dual control links (SRX5000 lines only). Note that the output
for the SRX5000 lines will always show Control link 0 and Control link 1 statistics, even
though only one control link is active or working.
Fabric link statistics Statistics of the fabric link used by chassis cluster traffic. Statistics for Child Link 1 are
displayed when you use dual fabric links.
Sample Output
Sample Output
Sample Output
Release Information Support for monitoring failures added in Junos OS Release 12.1X47-D10.
Description Display the current status of the chassis cluster. You can use this command to check
the status of chassis cluster nodes and redundancy groups, and to check failover status.
Options • none—Display the status of all redundancy groups in the chassis cluster.
Output Fields Table 55 on page 541 lists the output fields for the show chassis cluster status command.
Output fields are listed in the approximate order in which they appear.
Cluster ID ID number of the cluster. An ID of 1 through 15 applies to releases before Junos OS
Release 12.1X45-D10; an ID of 1 through 255 applies to Junos OS Release 12.1X45-D10
and later. Setting the cluster ID to 0 is equivalent to disabling the cluster.
Redundancy-Group You can create up to 128 redundancy groups in the chassis cluster.
Manual failover • Yes: Mastership is set manually through the CLI with the request chassis cluster failover
node or request chassis cluster failover redundancy-group command. This overrides
Priority and Preempt.
• No: Mastership is not set manually through the CLI.
Sample Output
Cluster ID: 1
Node Priority Status Preempt Manual Monitor-failures
Sample Output
Cluster ID: 1
Node Priority Status Preempt Manual Monitor-failures
Redundancy group: 0, Failover count: 1
node0 200 primary no no None
node1 100 secondary no no None
Redundancy group: 1, Failover count: 3
node0 200 primary-preempt-hold yes no None
node1 100 secondary yes no None
Sample Output
Cluster ID: 1
Node Priority Status Preempt Manual Monitor-failures
Description Display environmental information about the services gateway chassis, including the
temperature and information about the fans, power supplies, and Routing Engine.
Output Fields Table 56 on page 544 lists the output fields for the show chassis environment command.
Output fields are listed in the approximate order in which they appear.
Temp Temperature of air flowing through the chassis in degrees Celsius (C) and Fahrenheit (F).
Fan Fan status: OK, Testing (during initial power-on), Failed, or Absent.
Sample Output
Description Display environmental information about the Control Boards (CBs) installed on SRX
Series devices.
List of Sample Output show chassis environment cb (SRX5600 devices with SRX5K-SCB3 [SCB3] and
Enhanced Midplanes) on page 549
show chassis environment cb node 1 (SRX5600 devices with SRX5K-SCB3 [SCB3]
and Enhanced Midplanes) on page 549
Output Fields Table 57 on page 548 lists the output fields for the show chassis environment cb command.
Output fields are listed in the approximate order in which they appear.
State Status of the CB. If two CBs are installed and online, one is functioning as the master, and the other
is the standby.
Temperature Temperature in Celsius (C) and Fahrenheit (F) of the air flowing past the CB.
• Temperature Intake—Measures the temperature of the air intake to cool the power supplies.
• Temperature Exhaust—Measures the temperature of the hot air exhaust.
Power Power required and measured on the CB. The left column displays the required voltage, in volts.
The right column displays the measured voltage, in millivolts.
PMBus device Enhanced SCB on SRX Series devices allows the system to save power by supplying only the amount
of voltage that is required. Configurable PMBus devices are used to provide the voltage for each
individual device. There is one PMBus device for each XF ASIC so that the output can be customized
to each device. The following PMBus device information is displayed for devices with Enhanced MX
SCB:
• Expected voltage
• Measured voltage
• Measured current
• Calculated power
Sample Output
show chassis environment cb (SRX5600 devices with SRX5K-SCB3 [SCB3] and Enhanced Midplanes)
user@host> show chassis environment cb node 0
node0:
--------------------------------------------------------------------------
--------------------------------------------------------------------------
CB 0 status:
State Online Master
Temperature 34 degrees C / 93 degrees F
Power 1
1.0 V 1002
1.2 V 1198
1.5 V 1501
1.8 V 1801
2.5 V 2507
3.3 V 3300
5.0 V 5014
5.0 V RE 4982
12.0 V 11988
12.0 V RE 11930
Power 2
4.6 V bias MidPlane 4801
11.3 V bias PEM 11292
11.3 V bias FPD 11272
11.3 V bias POE 0 11214
11.3 V bias POE 1 11253
Bus Revision 96
FPGA Revision 16
PMBus Expected Measured Measured Calculated
device voltage voltage current power
XF ASIC A 1033 mV 1033 mV 15500 mA 16011 mW
XF ASIC B 1034 mV 1033 mV 15000 mA 15495 mW
show chassis environment cb node 1 (SRX5600 devices with SRX5K-SCB3 [SCB3] and Enhanced Midplanes)
user@host> show chassis environment cb node 1
node1:
--------------------------------------------------------------------------
CB 0 status:
State Online Master
Temperature 35 degrees C / 95 degrees F
Power 1
1.0 V 1002
1.2 V 1198
1.5 V 1504
1.8 V 1801
2.5 V 2507
3.3 V 3325
5.0 V 5014
5.0 V RE 4943
12.0 V 12007
12.0 V RE 12007
Power 2
4.6 V bias MidPlane 4814
11.3 V bias PEM 11272
11.3 V bias FPD 11330
11.3 V bias POE 0 11176
11.3 V bias POE 1 11292
Bus Revision 96
FPGA Revision 16
PMBus Expected Measured Measured Calculated
device voltage voltage current power
XF ASIC A 958 mV 959 mV 13500 mA 12946 mW
XF ASIC B 1033 mV 1031 mV 16500 mA 17011 mW
Description Display information about the ports on the Control Board (CB) Ethernet switch on an
SRX Series device.
Output Fields Table 58 on page 551 lists the output fields for the show chassis ethernet-switch command.
Output fields are listed in the approximate order in which they appear.
Link is good on port n connected to device Information about the link between each port on the CB's Ethernet switch and the
connected device.
Autonegotiate is Enabled (or Disabled) By default, built-in Fast Ethernet ports on a PIC autonegotiate whether to operate at
10 Mbps or 100 Mbps. All other interfaces automatically choose the correct speed based
on the PIC type and whether the PIC is configured to operate in multiplexed mode.
Sample Output
node1:
--------------------------------------------------------------------------
Displaying summary for switch 0
Link is good on GE port 0 connected to device: FPC0
Speed is 1000Mb
Duplex is full
Autonegotiate is Enabled
Flow Control TX is Disabled
Flow Control RX is Disabled
List of Sample Output show chassis fabric plane(SRX5600 and SRX5800 Devices with SRX5000 Line SCB
II [SRX5K-SCBE] and SRX5K-RE-1800X4) on page 556
Output Fields Table 59 on page 555 lists the output fields for the show chassis fabric plane command.
Output fields are listed in the approximate order in which they appear.
PFE Slot number of each Packet Forwarding Engine and the state of its links to the FPC.
Sample Output
PFE 0 :Links ok
Plane 1
Plane state: ACTIVE
FPC 0
PFE 0 :Links ok
FPC 2
PFE 0 :Links ok
FPC 3
PFE 0 :Links ok
FPC 4
PFE 0 :Links ok
FPC 7
PFE 0 :Links ok
FPC 8
PFE 0 :Links ok
FPC 9
PFE 0 :Links ok
FPC 10
PFE 0 :Links ok
Plane 2
Plane state: ACTIVE
FPC 0
PFE 0 :Links ok
FPC 2
PFE 0 :Links ok
FPC 3
PFE 0 :Links ok
FPC 4
PFE 0 :Links ok
FPC 7
PFE 0 :Links ok
FPC 8
PFE 0 :Links ok
FPC 9
PFE 0 :Links ok
FPC 10
PFE 0 :Links ok
Plane 3
Plane state: ACTIVE
FPC 0
PFE 0 :Links ok
FPC 2
PFE 0 :Links ok
FPC 3
PFE 0 :Links ok
FPC 4
PFE 0 :Links ok
FPC 7
PFE 0 :Links ok
FPC 8
PFE 0 :Links ok
FPC 9
PFE 0 :Links ok
FPC 10
PFE 0 :Links ok
Plane 4
Plane state: SPARE
FPC 0
PFE 0 :Links ok
FPC 2
PFE 0 :Links ok
FPC 3
PFE 0 :Links ok
FPC 4
PFE 0 :Links ok
FPC 7
PFE 0 :Links ok
FPC 8
PFE 0 :Links ok
FPC 9
PFE 0 :Links ok
FPC 10
PFE 0 :Links ok
Plane 5
Plane state: SPARE
FPC 0
PFE 0 :Links ok
FPC 2
PFE 0 :Links ok
FPC 3
PFE 0 :Links ok
FPC 4
PFE 0 :Links ok
FPC 7
PFE 0 :Links ok
FPC 8
PFE 0 :Links ok
FPC 9
PFE 0 :Links ok
FPC 10
PFE 0 :Links ok
node1:
--------------------------------------------------------------------------
Fabric management PLANE state
Plane 0
Plane state: ACTIVE
FPC 0
PFE 0 :Links ok
FPC 1
PFE 0 :Links ok
FPC 2
PFE 0 :Links ok
FPC 3
PFE 0 :Links ok
FPC 4
PFE 0 :Links ok
FPC 7
PFE 0 :Links ok
FPC 8
PFE 0 :Links ok
FPC 10
PFE 0 :Links ok
Plane 1
Plane state: ACTIVE
FPC 0
PFE 0 :Links ok
FPC 1
PFE 0 :Links ok
FPC 2
PFE 0 :Links ok
FPC 3
PFE 0 :Links ok
FPC 4
PFE 0 :Links ok
FPC 7
PFE 0 :Links ok
FPC 8
PFE 0 :Links ok
FPC 10
PFE 0 :Links ok
Plane 2
Plane state: ACTIVE
FPC 0
PFE 0 :Links ok
FPC 1
PFE 0 :Links ok
FPC 2
PFE 0 :Links ok
FPC 3
PFE 0 :Links ok
FPC 4
PFE 0 :Links ok
FPC 7
PFE 0 :Links ok
FPC 8
PFE 0 :Links ok
FPC 10
PFE 0 :Links ok
Plane 3
Plane state: ACTIVE
FPC 0
PFE 0 :Links ok
FPC 1
PFE 0 :Links ok
FPC 2
PFE 0 :Links ok
FPC 3
PFE 0 :Links ok
FPC 4
PFE 0 :Links ok
FPC 7
PFE 0 :Links ok
FPC 8
PFE 0 :Links ok
FPC 10
PFE 0 :Links ok
Plane 4
Plane state: SPARE
FPC 0
PFE 0 :Links ok
FPC 1
PFE 0 :Links ok
FPC 2
PFE 0 :Links ok
FPC 3
PFE 0 :Links ok
FPC 4
PFE 0 :Links ok
FPC 7
PFE 0 :Links ok
FPC 8
PFE 0 :Links ok
FPC 10
PFE 0 :Links ok
Plane 5
Plane state: SPARE
FPC 0
PFE 0 :Links ok
FPC 1
PFE 0 :Links ok
FPC 2
PFE 0 :Links ok
FPC 3
PFE 0 :Links ok
FPC 4
PFE 0 :Links ok
FPC 7
PFE 0 :Links ok
FPC 8
PFE 0 :Links ok
FPC 10
PFE 0 :Links ok
List of Sample Output show chassis fabric plane-location(SRX5600 and SRX5800 Devices with SRX5000
Line SCB II [SRX5K-SCBE] and SRX5K-RE-1800X4) on page 561
Output Fields Table 60 on page 561 lists the output fields for the show chassis fabric plane-location
command. Output fields are listed in the approximate order in which they appear.
Sample Output
node1:
--------------------------------------------------------------------------
------------Fabric Plane Locations-------------
Plane 0 Control Board 0
Plane 1 Control Board 0
List of Sample Output show chassis fabric summary(SRX5600 and SRX5800 devices with SRX5000 line
SCB II (SRX5K-SCBE) and SRX5K-RE-1800X4) on page 564
Output Fields Table 61 on page 563 lists the output fields for the show chassis fabric summary command.
Output fields are listed in the approximate order in which they appear.
State State of the SIB:
• Spare—SIB is redundant and will move to the active state if one of
the working SIBs fails.
Errors Errors detected on the fabric links:
• None—No errors.
• Link Errors—Fabric link errors were found on the SIB RX link.
• Cell drops—Fabric cell drops were found on the SIB ASIC.
• Link, Cell drops—Both link errors and cell drops were detected on
at least one of the FPC’s fabric links.
For information about link and destination errors, issue the show
chassis fabric fpc command.
NOTE: The Errors column is empty only when the FPC or SIB is
offline.
Sample Output
node1:
--------------------------------------------------------------------------
Plane State Uptime
0 Online 14 minutes, 7 seconds
1 Online 14 minutes, 2 seconds
2 Online 13 minutes, 57 seconds
3 Online 13 minutes, 51 seconds
4 Spare 13 minutes, 46 seconds
5 Spare 13 minutes, 41 seconds
Release Information Command introduced in Junos OS Release 9.2. Command modified in Junos OS Release
9.2 to include node option.
• models—(Optional) Display model numbers and part numbers for orderable FRUs.
Output Fields Table 62 on page 566 lists the output fields for the show chassis hardware command.
Output fields are listed in the approximate order in which they appear.
Item Chassis component—Information about the backplane; power supplies; fan trays; Routing
Engine; each Physical Interface Module (PIM)—reported as FPC and PIC—and each fan,
blower, and impeller.
Serial Number Serial number of the chassis component. The serial number of the backplane is also the
serial number of the device chassis. Use this serial number when you need to contact
Juniper Networks Customer Support about the device chassis.
CLEI code Common Language Equipment Identifier code. This value is displayed only for hardware
components that use ID EEPROM format v2. This value is not displayed for components
that use ID EEPROM format v1.
EEPROM Version ID EEPROM version used by hardware component: 0x01 (version 1) or 0x02 (version 2).
• There are three SCB slots in SRX5800 devices. The third slot can be used for an
SCB or an FPC. When an SRX5K-SCB is used, the third SCB slot serves as an
FPC slot. SCB redundancy is provided in chassis cluster mode.
• With an SCB2, a third SCB is supported. If a third SCB is plugged in, it provides
intra-chassis fabric redundancy.
• The Ethernet switch in the SCB2 provides the Ethernet connectivity among all the
FPCs and the Routing Engine. The Routing Engine uses this connectivity to distribute
forwarding and routing tables to the FPCs. The FPCs use this connectivity to send
exception packets to the Routing Engine.
• Fabric connects all FPCs in the data plane. The Fabric Manager executes on the
Routing Engine and controls the fabric system in the chassis. Packet Forwarding
Engines on the FPC and fabric planes on the SCB are connected through HSL2
channels.
• SCB2 supports HSL2 with both 3.11 Gbps and 6.22 Gbps (SerDes) link speed and
various HSL2 modes. When an FPC is brought online, the link speed and HSL2 mode
are determined by the type of FPC.
Starting with Junos OS Release 15.1X49-D10 and Junos OS Release 17.3R1, the
SRX5K-SCB3 (SCB3) with enhanced midplane is introduced.
• Type of Flexible PIC Concentrator (FPC), Physical Interface Card (PIC), Modular
Interface Cards (MICs), and PIMs.
• IOCs
Starting with Junos OS Release 15.1X49-D10 and Junos OS Release 17.3R1, the
SRX5K-MPC3-100G10G (IOC3) and the SRX5K-MPC3-40G10G (IOC3) are introduced.
• The IOC3 is available as two MPC models with different built-in MICs: the 24x10GE
+ 6x40GE MPC and the 2x100GE + 4x10GE MPC.
• IOC3 supports SCB3 and SRX5000 line backplane and enhanced backplane.
• IOC3 can only work with SRX5000 line SCB2 and SCB3. If an SRX5000 line SCB is
detected, IOC3 is offline, an FPC misconfiguration alarm is raised, and a system log
message is generated.
• IOC3 interoperates with SCB2 and SCB3.
• IOC3 interoperates with the SRX5K-SPC-4-15-320 (SPC2) and the SRX5K-MPC
(IOC2).
• The maximum power consumption for one IOC3 is 645W. An enhanced power
module must be used.
• The IOC3 does not support the following command to take a PIC offline or bring it
online: request chassis pic fpc-slot <fpc-slot> pic-slot <pic-slot> <offline | online>.
• IOC3 supports 240 Gbps of throughput with the enhanced SRX5000 line backplane.
• Chassis cluster functions the same as for the SRX5000 line IOC2.
• IOC3 supports intra-chassis and inter-chassis fabric redundancy mode.
• IOC3 supports ISSU and ISHU in chassis cluster mode.
• IOC3 supports intra-FPC and inter-FPC Express Path (previously known as
services offloading) with IPv4.
• NAT of IPv4 and IPv6 in normal mode and IPv4 for Express Path mode.
• Not all four PICs on the 24x10GE + 6x40GE MPC can be powered on at the same
time. A maximum of two PICs can be powered on simultaneously.
Use the set chassis fpc <slot> pic <pic> power off command to control which PICs
are powered on.
NOTE: The RE2 provides significantly better performance than the previously used
Routing Engine, even with a single core.
node1:
--------------------------------------------------------------------------
Hardware inventory:
Item Version Part number Serial number Description
Chassis JN124FE77AGB SRX5600
Midplane REV 01 760-063936 ACRE2970 Enhanced SRX5600 Midplane
FPM Board REV 01 710-024631 CABY3552 Front Panel Display
PEM 0 Rev 03 740-034701 QCS133809028 PS 1.4-2.6kW; 90-264V
AC in
PEM 1 Rev 03 740-034701 QCS133809027 PS 1.4-2.6kW; 90-264V
AC in
Routing Engine 0 REV 02 740-056658 9009218294 SRX5k RE-1800X4
Routing Engine 1 REV 02 740-056658 9013104758 SRX5k RE-1800X4
CB 0 REV 01 750-062257 CAEB8180 SRX5k SCB3
CB 1 REV 01 750-062257 CADZ3334 SRX5k SCB3
FPC 0 REV 18 750-054877 CACJ9834 SRX5k SPC II
CPU BUILTIN BUILTIN SRX5k DPC PPC
PIC 0 BUILTIN BUILTIN SPU Cp
PIC 1 BUILTIN BUILTIN SPU Flow
PIC 2 BUILTIN BUILTIN SPU Flow
PIC 3 BUILTIN BUILTIN SPU Flow
FPC 1 REV 01 750-062243 CAEB0981 SRX5k IOC3 24XGE+6XLG
CPU REV 02 711-062244 CAEA4644 RMPC PMB
PIC 0 BUILTIN BUILTIN 12x 10GE SFP+
Xcvr 0 REV 01 740-031980 AP41BLH SFP+-10G-SR
Xcvr 1 REV 01 740-031980 AQ400SL SFP+-10G-SR
Xcvr 2 REV 01 740-031980 AP422LJ SFP+-10G-SR
Xcvr 3 REV 01 740-021308 AMG0RBT SFP+-10G-SR
Xcvr 9 REV 01 740-021308 MUC2FRG SFP+-10G-SR
PIC 1 BUILTIN BUILTIN 12x 10GE SFP+
PIC 2 BUILTIN BUILTIN 3x 40GE QSFP+
PIC 3 BUILTIN BUILTIN 3x 40GE QSFP+
WAN MEZZ REV 15 750-049136 CAEA4837 MPC5E 24XGE OTN Mezz
FPC 3 REV 11 750-043157 CACA8784 SRX5k IOC II
CPU REV 04 711-043360 CACA8820 SRX5k MPC PMB
MIC 0 REV 05 750-049488 CADF0521 10x 10GE SFP+
PIC 0 BUILTIN BUILTIN 10x 10GE SFP+
Xcvr 0 REV 01 740-030658 AD1130A00PV SFP+-10G-USR
Xcvr 1 REV 01 740-031980 AN40MVV SFP+-10G-SR
Xcvr 2 REV 01 740-021308 CF36KM37B SFP+-10G-SR
Xcvr 3 REV 01 740-021308 AD153830DSZ SFP+-10G-SR
MIC 1 REV 01 750-049487 CABB5961 2x 40GE QSFP+
PIC 2 BUILTIN BUILTIN 2x 40GE QSFP+
Xcvr 1 REV 01 740-032986 QB160513 QSFP+-40G-SR4
FPC 5 REV 02 750-044175 ZY2569 SRX5k SPC II
CPU BUILTIN BUILTIN SRX5k DPC PPC
PIC 0 BUILTIN BUILTIN SPU Flow
PIC 1 BUILTIN BUILTIN SPU Flow
PIC 2 BUILTIN BUILTIN SPU Flow
PIC 3 BUILTIN BUILTIN SPU Flow
Fan Tray Enhanced Fan Tray
C in
PEM 1 Rev 03 740-034701 QCS13090904T PS 1.4-2.6kW; 90-264V A
C in
Routing Engine 0 REV 01 740-056658 9009196496 SRX5k RE-1800X4
CB 0 REV 01 750-062257 CAEC2501 SRX5k SCB3
FPC 0 REV 10 750-056758 CADC8067 SRX5k SPC II
CPU BUILTIN BUILTIN SRX5k DPC PPC
PIC 0 BUILTIN BUILTIN SPU Cp
PIC 1 BUILTIN BUILTIN SPU Flow
PIC 2 BUILTIN BUILTIN SPU Flow
PIC 3 BUILTIN BUILTIN SPU Flow
FPC 2 REV 01 750-062243 CAEE5924 SRX5k IOC3 24XGE+6XLG
CPU REV 01 711-062244 CAEB4890 SRX5k IOC3 PMB
PIC 0 BUILTIN BUILTIN 12x 10GE SFP+
PIC 1 BUILTIN BUILTIN 12x 10GE SFP+
PIC 2 BUILTIN BUILTIN 3x 40GE QSFP+
Xcvr 0 REV 01 740-038623 MOC13156230449 QSFP+-40G-CU1M
Xcvr 2 REV 01 740-038623 MOC13156230449 QSFP+-40G-CU1M
PIC 3 BUILTIN BUILTIN 3x 40GE QSFP+
WAN MEZZ REV 01 750-062682 CAEE5817 24x 10GE SFP+ Mezz
FPC 4 REV 11 750-043157 CACY1595 SRX5k IOC II
CPU REV 04 711-043360 CACZ8879 SRX5k MPC PMB
MIC 1 REV 04 750-049488 CACM6062 10x 10GE SFP+
PIC 2 BUILTIN BUILTIN 10x 10GE SFP+
Xcvr 7 REV 01 740-021308 AD1439301TU SFP+-10G-SR
Xcvr 8 REV 01 740-021308 AD1439301SD SFP+-10G-SR
Xcvr 9 REV 01 740-021308 AD1439301TS SFP+-10G-SR
FPC 5 REV 05 750-044175 ZZ1371 SRX5k SPC II
CPU BUILTIN BUILTIN SRX5k DPC PPC
PIC 0 BUILTIN BUILTIN SPU Flow
PIC 1 BUILTIN BUILTIN SPU Flow
PIC 2 BUILTIN BUILTIN SPU Flow
PIC 3 BUILTIN BUILTIN SPU Flow
Fan Tray Enhanced Fan Tray
node1:
--------------------------------------------------------------------------
Hardware inventory:
Item Version Part number Serial number Description
Chassis JN124FEC0AGB SRX5600
Midplane REV 01 760-063936 ACRE2946 Enhanced SRX5600 Midplane
FPM Board test 710-017254 test Front Panel Display
PEM 0 Rev 01 740-038514 QCS114111003 DC 2.6kW Power Entry
Module
PEM 1 Rev 01 740-038514 QCS12031100J DC 2.6kW Power Entry
Module
Routing Engine 0 REV 01 740-056658 9009186342 SRX5k RE-1800X4
CB 0 REV 01 750-062257 CAEB8178 SRX5k SCB3
FPC 0 REV 07 750-044175 CAAD0769 SRX5k SPC II
CPU BUILTIN BUILTIN SRX5k DPC PPC
PIC 0 BUILTIN BUILTIN SPU Cp
PIC 1 BUILTIN BUILTIN SPU Flow
PIC 2 BUILTIN BUILTIN SPU Flow
PIC 3 BUILTIN BUILTIN SPU Flow
FPC 4 REV 11 750-043157 CACY1592 SRX5k IOC II
CPU REV 04 711-043360 CACZ8831 SRX5k MPC PMB
MIC 1 REV 04 750-049488 CACN0239 10x 10GE SFP+
PIC 2 BUILTIN BUILTIN 10x 10GE SFP+
Xcvr 7 REV 01 740-031980 ARN23HW SFP+-10G-SR
Xcvr 8 REV 01 740-031980 ARN2FVW SFP+-10G-SR
Xcvr 9 REV 01 740-031980 ARN2YVM SFP+-10G-SR
FPC 5 REV 10 750-056758 CADA8736 SRX5k SPC II
CPU BUILTIN BUILTIN SRX5k DPC PPC
PIC 0 BUILTIN BUILTIN SPU Flow
PIC 1 BUILTIN BUILTIN SPU Flow
PIC 2 BUILTIN BUILTIN SPU Flow
PIC 3 BUILTIN BUILTIN SPU Flow
Fan Tray Enhanced Fan Tray
Hardware inventory:
Item Version Part number Serial number Description
Chassis DK2816AR0020 SRX4200
Mainboard REV 01 650-071675 16061032317 SRX4200
Routing Engine 0 BUILTIN BUILTIN SRX Routing Engine
FPC 0 BUILTIN BUILTIN FEB
PIC 0 BUILTIN BUILTIN 8x10G-SFP
Xcvr 0 REV 01 740-038153 MOC11511530020 SFP+-10G-CU3M
Xcvr 1 REV 01 740-038153 MOC11511530020 SFP+-10G-CU3M
Xcvr 2 REV 01 740-038153 MOC11511530020 SFP+-10G-CU3M
Xcvr 3 REV 01 740-038153 MOC11511530020 SFP+-10G-CU3M
Xcvr 4 REV 01 740-021308 04DZ06A00364 SFP+-10G-SR
Xcvr 5 REV 01 740-031980 233363A03066 SFP+-10G-SR
Xcvr 6 REV 01 740-021308 AL70SWE SFP+-10G-SR
Xcvr 7 REV 01 740-031980 ALN0N6C SFP+-10G-SR
Xcvr 8 REV 01 740-030076 APF16220018NK1 SFP+-10G-CU1M
Power Supply 0 REV 04 740-041741 1GA26241849 JPSU-650W-AC-AFO
Power Supply 1 REV 04 740-041741 1GA26241846 JPSU-650W-AC-AFO
Fan Tray 0 SRX4200 0, Front to Back
Airflow - AFO
Fan Tray 1 SRX4200 1, Front to Back
Airflow - AFO
Fan Tray 2 SRX4200 2, Front to Back
Airflow - AFO
Fan Tray 3 SRX4200 3, Front to Back
Airflow - AFO
List of Sample Output show chassis routing-engine (Sample 1 - SRX550M) on page 578
show chassis routing-engine (Sample 2 - vSRX) on page 578
Output Fields Table 63 on page 577 lists the output fields for the show chassis routing-engine command.
Output fields are listed in the approximate order in which they appear.
NOTE: Starting with Junos OS Release 15.1X49-D70 and Junos OS Release 17.3R1, the
method for calculating Routing Engine memory utilization has changed. Inactive memory
is now subtracted from the total available memory. The reported value for used memory
therefore decreases, because inactive memory is now counted as free.
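The change described in the note can be sketched as follows. The memory figures are hypothetical, in megabytes; only the subtraction of inactive memory comes from the note itself.

```python
# Sketch of the revised memory-utilization calculation: inactive memory is
# now treated as free and subtracted from the used total. Figures are
# hypothetical, in MB.

def used_memory_old(total, free):
    # Before 15.1X49-D70: inactive pages counted as used.
    return total - free

def used_memory_new(total, free, inactive):
    # From 15.1X49-D70: inactive pages count as available.
    return total - free - inactive

total, free, inactive = 4096, 1024, 512
print(used_memory_old(total, free))            # -> 3072
print(used_memory_new(total, free, inactive))  # -> 2560
```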
CPU utilization Current CPU utilization statistics on the control plane core.
User Current CPU utilization in user mode on the control plane core.
Background Current CPU utilization in nice mode on the control plane core.
Kernel Current CPU utilization in kernel mode on the control plane core.
Interrupt Current CPU utilization in interrupt mode on the control plane core.
Idle Current CPU utilization in idle mode on the control plane core.
Uptime Length of time the Routing Engine has been up (running) since the last start.
Last reboot reason Reason for the last reboot of the Routing Engine.
Load averages The average number of threads waiting in the run queue or currently executing over 1-,
5-, and 15-minute periods.
Sample Output
Sample Output
Interrupt 6 percent
Idle 88 percent
Model VSRX RE
Start time 2015-03-03 07:04:18 UTC
Uptime 2 days, 11 hours, 51 minutes, 11 seconds
Last reboot reason Router rebooted after a normal shutdown.
Load averages: 1 minute 5 minute 15 minute
0.07 0.04 0.06
Description Display tracing options for the chassis cluster redundancy process.
List of Sample Output show configuration chassis cluster traceoptions on page 580
Output Fields Table 64 on page 580 lists the output fields for the show configuration chassis cluster
traceoptions command. Output fields are listed in the approximate order in which they
appear.
file Name of the file that receives the output of the tracing operation.
Sample Output
Description Set the date and local time. If reject mode is enabled and the system rejected the update
from the NTP server because the time difference exceeds the configured threshold value,
an administrator has two options to override the reject action: manually set the date and
time in YYYYMMDDhhmm.ss format, or force synchronization of the device time with the
NTP server update by specifying the force option.
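A minimal sketch of the YYYYMMDDhhmm.ss format mentioned above, rendered from a Python datetime. The helper name and the example timestamp are my own; only the format itself comes from the description.

```python
# Render a timestamp in the YYYYMMDDhhmm.ss form that the manual
# date-setting option expects. Helper name and example are hypothetical.
from datetime import datetime

def manual_date_string(dt):
    return dt.strftime("%Y%m%d%H%M.%S")

print(manual_date_string(datetime(2017, 11, 16, 14, 30, 5)))
# -> 201711161430.05
```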
Options ntp—Use an NTP server to synchronize the current date and time setting on the SRX
Series device.
force—Force system date and time to update to NTP server values. The device date and
time are synchronized with the NTP proposed date and time even if reject is set as
the action and the difference between the device time and NTP proposed time
exceeds the default or the configured threshold value.
key <key>—Specify a key number to authenticate the NTP server used to synchronize
the date and time. You must specify the same key number used to authenticate the
server, configured at the [edit system ntp authentication-key number] hierarchy level.
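The options above can be illustrated with a few hedged examples; the date value and key number are placeholders, and the exact option ordering should be verified against your release:

```
user@host> set date 201711161230.00
user@host> set date ntp
user@host> set date ntp key 1
user@host> set date ntp force
```

The first form sets the time manually in YYYYMMDDhhmm.ss format; the remaining forms synchronize with the NTP server, authenticate the server with key number 1, and force synchronization despite reject mode, respectively.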
Output Fields When you enter this command, you are provided feedback on the status of your request.
Sample Output
Release Information Command introduced in Junos OS Release 15.1X49-D60 for SRX340 and SRX345 devices.
Description Display status information about the specified Gigabit Ethernet interface.
Additional Information In a logical system, this command displays information only about the logical interfaces
and not about the physical interfaces.
Related • Understanding Media Access Control Security (MACsec) for SRX Series on page 367
Documentation
• Configuring Media Access Control Security (MACsec) on page 370
List of Sample Output show interfaces (Gigabit Ethernet) (for Fabric) on page 593
show interfaces detail for Fabric on page 594
Output Fields Table 65 on page 584 describes the output fields for the show interfaces (Gigabit Ethernet)
command. Output fields are listed in the approximate order in which they appear. For
Gigabit Ethernet IQ and IQE PICs, the traffic and MAC statistics vary by interface type.
For more information, see Table 66 on page 593.
Physical Interface
Physical interface Name of the physical interface. All levels
Enabled State of the interface. Possible values are described in the “Enabled Field” All levels
section.
Interface index Index number of the physical interface, which reflects its initialization sequence. detail none
SNMP ifIndex SNMP index number for the physical interface. detail none
Link-level type Encapsulation being used on the physical interface. All levels
MTU Maximum transmission unit size on the physical interface. All levels
Loopback Loopback status: Enabled or Disabled. If loopback is enabled, type of loopback: All levels
Local or Remote.
LAN-PHY mode 10-Gigabit Ethernet interface operating in Local Area Network Physical Layer All levels
Device (LAN PHY) mode. LAN PHY allows 10-Gigabit Ethernet wide area links
to use existing Ethernet applications.
Auto-negotiation (Gigabit Ethernet interfaces) Autonegotiation status: Enabled or Disabled. All levels
Last flapped Date, time, and how long ago the interface went from down to up. The format detail none
is Last flapped: year-month-day hour:minute:second:timezone (hour:minute:second
ago). For example, Last flapped: 2002-04-26 10:52:40 PDT (04:33:20 ago).
Input Rate Input rate in bits per second (bps) and packets per second (pps). The value in None
this field also includes the Layer 2 overhead bytes for ingress traffic on Ethernet
interfaces if you enable accounting of Layer 2 overhead at the PIC level or the
logical interface level.
Output Rate Output rate in bps and pps. The value in this field also includes the Layer 2 None
overhead bytes for egress traffic on Ethernet interfaces if you enable accounting
of Layer 2 overhead at the PIC level or the logical interface level.
Statistics last cleared Time when the statistics for the interface were last set to zero. detail extensive
Egress account Layer 2 overhead in bytes that is accounted in the interface statistics for egress detail extensive
overhead traffic.
Ingress account Layer 2 overhead in bytes that is accounted in the interface statistics for ingress detail extensive
overhead traffic.
Traffic statistics Number and rate of bytes and packets received and transmitted on the physical detail
interface.
• Input bytes—Number of bytes received on the interface. The value in this field
also includes the Layer 2 overhead bytes for ingress traffic on Ethernet
interfaces if you enable accounting of Layer 2 overhead at the PIC level or
the logical interface level.
• Output bytes—Number of bytes transmitted on the interface. The value in
this field also includes the Layer 2 overhead bytes for egress traffic on Ethernet
interfaces if you enable accounting of Layer 2 overhead at the PIC level or
the logical interface level.
• Input packets—Number of packets received on the interface.
• Output packets—Number of packets transmitted on the interface.
Gigabit Ethernet and 10-Gigabit Ethernet IQ PICs count the overhead and CRC
bytes.
For Gigabit Ethernet IQ PICs, the input byte counts vary by interface type.
Egress queues Total number of egress queues supported on the specified interface. detail
NOTE: In DPCs that are not of the enhanced type, such as DPC 40x 1GE R, DPCE
20x 1GE + 2x 10GE R, or DPCE 40x 1GE R, you might notice a discrepancy in the
output of the show interfaces command because incoming packets might be
counted in the Egress queues section of the output. This problem occurs on
non-enhanced DPCs because the egress queue statistics are polled from the IMQ
(Inbound Message Queuing) block of the I-chip. The IMQ block does not
differentiate between ingress and egress WAN traffic; as a result, the combined
statistics are displayed in the egress queue counters on the Routing Engine. In
a simple VPLS scenario, if there is no MAC entry in DMAC table (by sending
unidirectional traffic), traffic is flooded and the input traffic is accounted in IMQ.
For bidirectional traffic (MAC entry in DMAC table), if the outgoing interface is
on the same I-chip then both ingress and egress statistics are counted in a
combined way. If the outgoing interface is on a different I-chip or FPC, then only
egress statistics are accounted in IMQ. This behavior is expected with
non-enhanced DPCs.
Queue counters CoS queue number and its associated user-configured forwarding class name. detail extensive
(Egress)
• Queued packets—Number of queued packets.
• Transmitted packets—Number of transmitted packets.
• Dropped packets—Number of packets dropped by the ASIC's RED mechanism.
Active alarms and Ethernet-specific defects that can prevent the interface from passing packets. detail none
Active defects When a defect persists for a certain amount of time, it is promoted to an alarm.
Based on the router configuration, an alarm can ring the red or yellow alarm
bell on the router, or turn on the red or yellow alarm LED on the craft interface.
These fields can contain the value None or Link.
Input
OTN FEC statistics The forward error correction (FEC) counters provide the following statistics: detail
PCS statistics (10-Gigabit Ethernet interfaces) Displays Physical Coding Sublayer (PCS) fault detail extensive
conditions from the WAN PHY or the LAN PHY device.
• Bit errors—Number of seconds during which at least one bit error rate (BER)
occurred while the PCS receiver is operating in normal mode.
• Errored blocks—Number of seconds when at least one errored block occurred
while the PCS receiver is operating in normal mode.
MAC statistics Receive and Transmit statistics reported by the PIC's MAC subsystem, including extensive
the following:
• Total octets and total packets—Total number of octets and packets. For
Gigabit Ethernet IQ PICs, the received octets count varies by interface type.
• Unicast packets, Broadcast packets, and Multicast packets—Number of unicast,
broadcast, and multicast packets.
• CRC/Align errors—Total number of packets received that had a length
(excluding framing bits, but including FCS octets) of between 64 and 1518
octets, inclusive, and had either a bad FCS with an integral number of octets
(FCS Error) or a bad FCS with a nonintegral number of octets (Alignment
Error).
• FIFO error—Number of FIFO errors that are reported by the ASIC on the PIC.
If this value is ever nonzero, the PIC or a cable is probably malfunctioning.
• MAC control frames—Number of MAC control frames.
• MAC pause frames—Number of MAC control frames with pause operational
code.
• Oversized frames—There are two possible conditions regarding the number
of oversized frames:
• Jabber frames—Number of frames that were longer than 1518 octets (excluding
framing bits, but including FCS octets), and had either an FCS error or an
alignment error. This definition of jabber is different from the definition in
IEEE-802.3 section 8.2.1.5 (10BASE5) and section 10.3.1.4 (10BASE2). These
documents define jabber as the condition in which any packet exceeds 20
ms. The allowed range to detect jabber is from 20 ms to 150 ms.
• Fragment frames—Total number of packets that were less than 64 octets in
length (excluding framing bits, but including FCS octets) and had either an
FCS error or an alignment error. Fragment frames normally increment because
both runts (which are normal occurrences caused by collisions) and noise
hits are counted.
• VLAN tagged frames—Number of frames that are VLAN tagged. The system
uses the TPID of 0x8100 in the frame to determine whether a frame is tagged
or not.
NOTE: The 20-port Gigabit Ethernet MIC (MIC-3D-20GE-SFP) does not have
hardware counters for VLAN frames. Therefore, the VLAN tagged frames field
displays 0 when the show interfaces command is executed on a 20-port
Gigabit Ethernet MIC. In other words, the number of VLAN tagged frames
cannot be determined for the 20-port Gigabit Ethernet MIC.
OTN Received Overhead Bytes APS/PCC0: 0x02, APS/PCC1: 0x11, APS/PCC2: 0x47, APS/PCC3: 0x58, extensive
Payload Type: 0x08
OTN Transmitted Overhead Bytes APS/PCC0: 0x00, APS/PCC1: 0x00, APS/PCC2: 0x00, APS/PCC3: 0x00, extensive
Payload Type: 0x08
Filter statistics Receive and Transmit statistics reported by the PIC's MAC address filter extensive
subsystem. The filtering is done by the content-addressable memory (CAM)
on the PIC. The filter examines a packet's source and destination MAC addresses
to determine whether the packet should enter the system or be rejected.
PMA PHY (10-Gigabit Ethernet interfaces, WAN PHY mode) SONET error information: extensive
Subfields are:
WIS section (10-Gigabit Ethernet interfaces, WAN PHY mode) SONET error information: extensive
Subfields are:
Logical Interface
Logical interface Name of the logical interface. All levels
Index Index number of the logical interface, which reflects its initialization sequence. detail none
SNMP ifIndex SNMP interface index number for the logical interface. detail none
Generation Unique number for use by Juniper Networks technical support only. detail
VLAN-Tag Rewrite profile applied to incoming or outgoing frames on the outer (Out) VLAN brief detail none
tag or for both the outer and inner (In) VLAN tags.
• push—An outer VLAN tag is pushed in front of the existing VLAN tag.
• pop—The outer VLAN tag of the incoming frame is removed.
• swap—The outer VLAN tag of the incoming frame is overwritten with the
user-specified VLAN tag information.
• push-push—Two VLAN tags are pushed in front of the incoming frame.
• swap-push—The outer VLAN tag of the incoming frame is replaced by a
user-specified VLAN tag value. A user-specified outer VLAN tag is pushed in
front. The outer tag becomes an inner tag in the final frame.
• swap-swap—Both the inner and the outer VLAN tags of the incoming frame
are replaced by the user-specified VLAN tag value.
• pop-swap—The outer VLAN tag of the incoming frame is removed, and the
inner VLAN tag of the incoming frame is replaced by the user-specified VLAN
tag value. The inner tag becomes the outer tag in the final frame.
• pop-pop—Both the outer and inner VLAN tags of the incoming frame are
removed.
Demux IP demultiplexing (demux) value that appears if this interface is used as the detail none
demux underlying interface. The output is one of the following:
ACI VLAN: Dynamic Name of the dynamic profile that defines the agent circuit identifier (ACI) brief detail none
Profile interface set. If configured, the ACI interface set enables the underlying Ethernet
interface to create dynamic VLAN subscriber interfaces based on ACI
information.
MTU Maximum transmission unit size on the logical interface. detail none
Neighbor Discovery NDP statistics for protocol inet6 under logical interface statistics. All levels
Protocol (NDP)Queue
Statistics • Max nh cache—Maximum interface neighbor discovery nexthop cache size.
• New hold nh limit—Maximum number of new unresolved nexthops.
• Curr nh cnt—Current number of resolved nexthops in the NDP queue.
• Curr new hold cnt—Current number of unresolved nexthops in the NDP queue.
• NH drop cnt—Number of NDP requests not serviced.
Dynamic Profile Name of the dynamic profile that was used to create this interface configured detail none
with a Point-to-Point Protocol over Ethernet (PPPoE) family.
Service Name Table Name of the service name table for the interface configured with a PPPoE family. detail none
Max Sessions Maximum number of PPPoE logical interfaces that can be activated on the detail none
underlying interface.
Duplicate Protection State of PPPoE duplicate protection: On or Off. When duplicate protection is detail none
configured for the underlying interface, a dynamic PPPoE logical interface cannot
be activated when an existing active logical interface is present for the same
PPPoE client.
Direct Connect State of the configuration to ignore DSL Forum VSAs: On or Off. When configured, detail none
the router ignores any of these VSAs received from a directly connected CPE
device on the interface.
Maximum labels Maximum number of MPLS labels configured for the MPLS protocol family on detail none
the logical interface.
Traffic statistics Number and rate of bytes and packets received and transmitted on the specified detail
interface set.
Local statistics Number and rate of bytes and packets destined to the router. detail
Generation Unique number for use by Juniper Networks technical support only. detail
Transit statistics Number and rate of bytes and packets transiting the switch.
NOTE: For Gigabit Ethernet intelligent queuing 2 (IQ2) interfaces, the logical
interface egress statistics might not accurately reflect the traffic on the wire
when output shaping is applied. Traffic management output shaping might
drop packets after they are tallied by the Output bytes and Output packets
interface counters. However, correct values display for both of these egress
statistics when per-unit scheduling is enabled for the Gigabit Ethernet IQ2
physical interface, or when a single logical interface is actively using a shared
scheduler.
Route Table Route table in which the logical interface address is located. For example, 0 detail none
refers to the routing table inet.0.
Donor interface (Unnumbered Ethernet) Interface from which an unnumbered Ethernet interface detail none
borrows an IPv4 address.
Preferred source (Unnumbered Ethernet) Secondary IPv4 address of the donor loopback interface detail none
address that acts as the preferred source address for the unnumbered Ethernet interface.
Input Filters Names of any input filters applied to this interface. If you specify a precedence detail
value for any filter in a dynamic profile, filter precedence values appear in
parentheses next to all interfaces.
Output Filters Names of any output filters applied to this interface. If you specify a precedence detail
value for any filter in a dynamic profile, filter precedence values appear in
parentheses next to all interfaces.
Mac-Validate Failures Number of MAC address validation failures for packets and bytes. This field is detail none
displayed when MAC address validation is enabled for the logical interface.
protocol-family Protocol family configured on the logical interface. If the protocol is inet, the IP brief
address of the interface is also displayed.
Destination IP address of the remote side of the connection. detail extensive none
Generation Unique number for use by Juniper Networks technical support only. detail extensive
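As a sketch of the VLAN tag rewrite operations listed in the table above, a swap operation might be configured as follows. The interface, unit, and VLAN IDs are hypothetical, and the statements should be verified against your release:

```
[edit interfaces ge-0/0/3 unit 0]
user@host# set vlan-id 100
user@host# set input-vlan-map swap
user@host# set input-vlan-map vlan-id 300
user@host# set output-vlan-map swap
```

Here incoming frames with outer VLAN tag 100 are rewritten to tag 300, and the rewrite is reversed on output.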
Table 66: Gigabit Ethernet IQ PIC Traffic and MAC Statistics by Interface Type

Inbound physical interface (sample command: show interfaces ge-0/3/0 extensive)
Traffic statistics:
Input bytes: 496 bytes per packet, representing the Layer 2 packet
MAC statistics:
Comments: The additional 4 bytes are for the CRC.

Outbound physical interface (sample command: show interfaces ge-0/0/0 extensive)
Traffic statistics:
Input bytes: 490 bytes per packet, representing the Layer 3 packet + 12 bytes
MAC statistics:
Comments: For input bytes, the additional 12 bytes include 6 bytes for the destination
MAC address plus 4 bytes for VLAN plus 2 bytes for the Ethernet type.
Sample Output
0 134121 134121 0
1 0 0 0
2 0 0 0
3 0 0 0
Logical interface ge-0/0/2.0 (Index 77) (SNMP ifIndex 537) (Generation 144)
Flags: Up SNMP-Traps 0x0 Encapsulation: ENET2
Traffic statistics:
Input bytes : 20300152
Output bytes : 19149160
Input packets: 139190
Output packets: 134116
Local statistics:
Input bytes : 748678
Output bytes : 871206
Input packets: 5273
Output packets: 5379
Transit statistics:
Input bytes : 19551474 2328 bps
Output bytes : 18277954 2264 bps
Input packets: 133917 1 pps
Output packets: 128737 1 pps
Security: Zone: Null
Flow Statistics :
Flow Input statistics :
Self packets : 0
ICMP packets : 0
VPN packets : 0
Multicast packets : 0
Bytes permitted by policy : 0
Connections established : 0
Flow Output statistics:
Multicast packets : 0
Bytes permitted by policy : 0
Flow error statistics (Packets dropped due to):
Address spoofing: 0
Authentication failed: 0
Incoming NAT errors: 0
Invalid zone received packet: 0
Multiple user authentications: 0
Multiple incoming NAT: 0
No parent for a gate: 0
No one interested in self packets: 0
No minor session: 0
No more sessions: 0
No NAT gate: 0
No route present: 0
No SA for incoming SPI: 0
No tunnel found: 0
No session for a gate: 0
Description Display the current threshold and reject mode configured information.
Output Fields Table 67 on page 597 lists the output fields for the show system ntp threshold command.
Output fields are listed in the approximate order in which they appear.
NTP threshold Threshold value configured for Network Time Protocol (NTP) adjustment, along with whether
NTP synchronization is accepted or rejected when the proposed time from the NTP server
exceeds the configured threshold value.
Success Criteria Verifies the NTP threshold and provides the status of the NTP adjustment mode (accept or reject).
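A minimal configuration sketch for the threshold described above; the value of 300 and its unit (seconds) are assumptions:

```
[edit]
user@host# set system ntp threshold 300 action reject
user@host# commit
```

After the commit, show system ntp threshold displays the configured threshold and the adjustment mode.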
Sample Output
Release Information Command introduced in Junos OS Release 15.1X49-D60 for SRX340 and SRX345 devices.
Description Display the status of the active MACsec connections on the device.
Options none—Display MACsec connection information for all interfaces on the device.
Output Fields Table 68 on page 598 lists the output fields for the show security macsec connections
command. Output fields are listed in the approximate order in which they appear.
The offset is set using the offset statement in the connectivity association when static
connectivity association key (CAK) or dynamic security mode is used.
Replay protect Replay protection setting. Replay protection is enabled when this output is on and disabled
when this output is off.
You can enable replay protection using the replay-protect statement in the connectivity
association.
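The offset and replay-protect statements mentioned above fit into a connectivity association roughly as in this sketch; the association name and window size are placeholders:

```
[edit security macsec connectivity-association ca1]
user@host# set offset 30
user@host# set replay-protect replay-window-size 5
```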
Sample Output
Release Information Command introduced in Junos OS Release 15.1X49-D60 for SRX340 and SRX345 devices.
Options none—Display MACsec statistics in brief form for all interfaces on the switch.
brief | detail—(Optional) Display the specified level of output. Using the brief option is
equivalent to entering the command with no options (the default). The detail option
displays additional fields that are not visible in the brief output.
NOTE: The field names that only appear in this command output when
you enter the detail option are mostly useful for debugging purposes by
Juniper Networks support personnel.
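Typical invocations combining the options above look like the following; the interface name is hypothetical:

```
user@host> show security macsec statistics
user@host> show security macsec statistics brief
user@host> show security macsec statistics interface ge-0/0/1 detail
```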
Related • Understanding Media Access Control Security (MACsec) for SRX Series on page 367
Documentation
• Configuring Media Access Control Security (MACsec) on page 370
List of Sample Output show security macsec statistics interface on page 603
Output Fields Table 69 on page 601 lists the output fields for the show security macsec statistics
command. Output fields are listed in the approximate order in which they appear.
The field names that appear in this command output only when you enter the detail
option are mostly useful for debugging purposes by Juniper Networks support personnel.
Those field names are, therefore, not included in this table.
Encrypted bytes Total number of bytes transmitted out of the interface in the secure All levels
channel that were secured and encrypted using MACsec.
Protected packets Total number of packets transmitted out of the interface in the All levels
secure channel that were secured but not encrypted using MACsec.
Protected bytes Total number of bytes transmitted out of the interface in the secure All levels
channel that were secured but not encrypted using MACsec.
Protected packets Total number of packets transmitted out of the interface in the All levels
connectivity association that were secured but not encrypted using
MACsec.
Accepted packets The number of received packets that have been accepted by the All levels
secure channel on the interface. The secure channel is used to send
all data plane traffic on a MACsec-enabled link.
This counter increments for both encrypted and unencrypted traffic.
Validated bytes The number of bytes that have been validated by the MACsec All levels
integrity check and received on the secure channel on the interface.
The secure channel is used to send all data plane traffic on a
MACsec-enabled link.
Decrypted bytes The number of bytes received in the secure channel on the interface All levels
that have been decrypted. The secure channel is used to send all
data plane traffic on a MACsec-enabled link.
Validated bytes The number of bytes that have been validated by the MACsec All levels
integrity check and received on the connectivity association on the
interface. The counter includes all control and data plane traffic
accepted on the interface.
Decrypted bytes The number of bytes received in the connectivity association on the All levels
interface that have been decrypted. The counter includes all control
and data plane traffic accepted on the interface.
Sample Output
Release Information Command introduced in Junos OS Release 15.1X49-D60 for SRX340 and SRX345 devices.
The output for this command does not include statistics for MACsec data traffic. For
MACsec data traffic statistics, see show security macsec statistics for SRX Series devices.
Options • interface interface-name—(Optional) Display the MKA information for the specified
interface only.
Related • Understanding Media Access Control Security (MACsec) for SRX Series on page 367
Documentation
• Configuring Media Access Control Security (MACsec) on page 370
Output Fields Table 70 on page 604 lists the output fields for the show security mka statistics command.
Output fields are listed in the approximate order in which they appear.
This counter increments for received MKA control packets only. This counter does not
increment when data packets are received.
This counter increments for transmitted MKA control packets only. This counter does
not increment when data packets are transmitted.
CAK mismatch packets Number of Connectivity Association Key (CAK) mismatch packets.
This counter increments when the connectivity association key (CAK) and connectivity
association key name (CKN), which are user-configured values that have to match to
enable MACsec, do not match for an MKA control packet.
This counter increments when the connectivity association key (CAK) value does not
match on both ends of a MACsec-secured Ethernet link.
Invalid destination address packets Number of invalid destination MAC address packets.
Old Replayed message number packets Number of old replayed message number packets.
Sample Output
Release Information Command introduced in Junos OS Release 15.1X49-D60 for SRX340 and SRX345 devices.
Options • interface interface-name—(Optional) Display the MKA information for the specified
interface only.
Related • Understanding Media Access Control Security (MACsec) for SRX Series on page 367
Documentation
• Configuring Media Access Control Security (MACsec) on page 370
Output Fields Table 71 on page 606 lists the output fields for the show security mka sessions command.
Output fields are listed in the approximate order in which they appear.
The CAK is configured using the cak keyword when configuring the pre-shared key.
The switch is the key server when this output is yes. The switch is not the key server when
this output is no.
The key server priority can be set using the key-server-priority statement.
Latest SAK AN Name of the latest secure association key (SAK) association number.
Latest SAK KI Name of the latest secure association key (SAK) key identifier.
Previous SAK AN Name of the previous secure association key (SAK) association number.
Previous SAK KI Name of the previous secure association key (SAK) key identifier.
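The cak keyword and key-server-priority statement referenced above are configured within a connectivity association roughly as follows. The association name, priority, and key values are placeholders, and key lengths vary by platform:

```
[edit security macsec connectivity-association ca1]
user@host# set security-mode static-cak
user@host# set mka key-server-priority 1
user@host# set pre-shared-key ckn 0011223344556677001122334455667700112233445566770011223344556677
user@host# set pre-shared-key cak 00112233445566770011223344556677
```

The CAK and CKN must match on both ends of the MACsec-secured link, and the lower key-server-priority value wins the key server election.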
Sample Output
Description Display the status of the internal security association (SA) on the chassis cluster
control link.
Output Fields Table 72 on page 608 lists the output fields for the show security
internal-security-association command. Output fields are listed in the approximate order
in which they appear.
Internal SA Status State of the internal SA option on the chassis cluster control link: enabled or disabled.
Sample Output
node0:
--------------------------------------------------------------------------
Internal SA Status : Enabled
Iked Encryption Status : Enabled
node1:
--------------------------------------------------------------------------
Internal SA Status : Enabled
Iked Encryption Status : Enabled
Release Information Command introduced in Junos OS Release 9.5. Logical system status option added in
Junos OS Release 11.2.
Description Display licenses and information about how licenses are used.
Options keys—(Optional) Display a list of license keys. Use this information to verify that each
expected license key is present.
status—(Optional) Display license status for a specified logical system or for all logical
systems.
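Based on the options above, typical invocations look like this:

```
user@host> show system license
user@host> show system license keys
user@host> show system license status
```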
Output Fields Table 73 on page 609 lists the output fields for the show system license command. Output
fields are listed in the approximate order in which they appear.
Feature name Name assigned to the configured feature. You use this information to verify that all the features for
which you installed licenses are present.
Licenses used Number of licenses used by the device. You use this information to verify that the number of licenses
used matches the number configured. If a licensed feature is configured, the feature is considered
used.
Licenses needed Number of licenses required for features being used but not yet properly licensed.
Expiry Time remaining in the grace period before a license is required for a feature being used.
Logical system license Displays whether a license is enabled for a logical system.
status
Sample Output
License usage:
Feature name             Licenses used  Licenses installed  Licenses needed  Expiry
av_key_kaspersky_engine  1              1                   0                2012-03-30 01:00:00 IST
wf_key_surfcontrol_cpa   0              1                   0                2012-03-30 01:00:00 IST
dynamic-vpn              0              1                   0                permanent
ax411-wlan-ap            0              2                   0                permanent
Licenses installed:
License identifier: JUNOS301998
License version: 2
Valid for device: AG4909AA0080
Features:
av_key_kaspersky_engine - Kaspersky AV
date-based, 2011-03-30 01:00:00 IST - 2012-03-30 01:00:00 IST